Cisco pyATS: Network Test and
Automation Solution
John Capobianco, Dan Wade
A NOTE FOR EARLY RELEASE READERS
With Early Release eBooks, you get books in their earliest form
—the author’s raw and unedited content as they write—so you
can take advantage of these technologies long before the official
release of these titles.
Please note that the GitHub repo will be made active closer to
publication.
If you have comments about how we might improve the content
and/or examples in this book, or if you notice missing material
within this title, please reach out to Pearson at
[email protected].

Cisco Press
221 River Street
Hoboken, NJ 07030 USA
Cisco pyATS: Network Test and Automation Solution
John Capobianco, Dan Wade
Copyright © 2025 Cisco Systems, Inc.
Cisco Press logo is a trademark of Cisco Systems, Inc.
Published by:
Cisco Press
221 River Street
Hoboken, NJ 07030 USA
All rights reserved. No part of this book may be reproduced or
transmitted in any form or by any means, electronic or
mechanical, including photocopying, recording, or by any
information storage and retrieval system, without written
permission from the publisher, except for the inclusion of brief
quotations in a review.
Printed in the United States of America 1 2 3 4 5 6 7 8 9 0
First Printing
Library of Congress Cataloging-in-Publication Number:
ISBN-13: 9780138031671
ISBN-10:
Warning and Disclaimer
This book is designed to provide information about all aspects
of Cisco pyATS. Every effort has been made to make this book as
complete and as accurate as possible, but no warranty or fitness
is implied.
The information is provided on an "as is" basis. The authors,
Cisco Press, and Cisco Systems, Inc. shall have neither liability
nor responsibility to any person or entity with respect to any
loss or damages arising from the information contained in this
book or from the use of the discs or programs that may
accompany it.
The opinions expressed in this book belong to the authors and
are not necessarily those of Cisco Systems, Inc.
Feedback Information
At Cisco Press, our goal is to create in-depth technical books of
the highest quality and value. Each book is crafted with care
and precision, undergoing rigorous development that involves
the unique expertise of members from the professional
technical community.
Readers’ feedback is a natural continuation of this process. If
you have any comments regarding how we could improve the
quality of this book, or otherwise alter it to better suit your
needs, you can contact us through email at
[email protected]. Please make sure to include the book
title and ISBN in your message.
We greatly appreciate your assistance.
Editor-in-Chief
Director ITP Product Management
Brett Bartow
Executive Editor
Nancy Davis
Managing Editor
Sandra Schroeder
Development Editor
Christopher Cleveland
Project Editor
Mandie Frank
Copy Editor
Technical Editors
Stuart Clark, Charles Greenaway
Editorial Assistant
Cindy Teeters
Book Designer
Cover Designer
Composition
Indexer
Proofreader
Trademark Acknowledgments
All terms mentioned in this book that are known to be
trademarks or service marks have been appropriately
capitalized. Cisco Press or Cisco Systems, Inc. cannot attest to
the accuracy of this information. Use of a term in this book
should not be regarded as affecting the validity of any
trademark or service mark.
About the Authors
Dan Wade is a Network and Infrastructure Automation Practice
Lead at BlueAlly. As part of the Solutions Strategy team at
BlueAlly, he is responsible for developing network and
infrastructure automation solutions and enabling the sales and
consulting teams on delivery of the developed solutions. The
solutions may fall into the categories of infrastructure
provisioning, configuration management, network source of
truth, network observability, and, of course, automated testing
and validation. Prior to this role, Dan worked as a
consulting engineer with a focus on network automation.
Dan has over 7 years of experience in network automation,
having worked with automation tooling and frameworks such
as Ansible and Terraform, and Python libraries including
Nornir, Netmiko, NAPALM, Scrapli, and Python SDKs. Dan has
been working with pyATS and the pyATS library (Genie) for the
past 4-5 years, which inspired him to embrace automated
network testing. In 2021, Dan contributed to the genieparser library with
a new IOS XE parser. He also enjoys creating his own open-
source projects focused on network automation. Dan holds two
professional-level certifications from Cisco, including the Cisco
DevNet Professional and CCNP Enterprise.
Dan enjoys sharing knowledge and experience on social media
with blog posts, YouTube videos, and participating in podcast
episodes. He’s passionate about helping others explore network
automation and advocating how network automation can
empower, not replace, network engineers. You can find him on
social media @devnetdan.
John Capobianco has a dynamic and multifaceted career in IT
and networking, marked by significant contributions to both
the public and private sectors. Beginning his journey in the field
as an aluminium factory worker, Capobianco’s resilience and
dedication propelled him through college, earning a diploma as
a Computer Programmer Analyst from St. Lawrence College.
This initial phase set the foundation for a career underpinned
by continuous learning and achievement, evident from his
array of certifications, including multiple Cisco Certifications as
well as Microsoft certification.
Transitioning from his early educational accomplishments,
Capobianco’s professional life has spanned over two decades,
featuring roles that showcased his technical prowess and
strategic vision. His work has significantly impacted both the
public and private sectors, including notable positions at the
Parliament of Canada, where he served as a Senior IT Planner
and Integrator, and at Cisco, where he began as a Developer
Advocate. These roles have been instrumental in shaping his
perspective on network management and security, leading to
his recent advancement into a Technical Leader role in
Artificial Intelligence for Cisco Secure, reflecting his
commitment to integrating AI technologies for enhancing
network security solutions.
In addition to his professional and technical achievements,
Capobianco is also an accomplished author. His book,
"Automate Your Network: Introducing the Modern Approach to
Enterprise Network Management," published in March 2019,
encapsulates his philosophy towards leveraging automation for
efficient and effective network management. He is dedicated to
lifelong learning and professional development, supported by a
solid foundation in education and a broad spectrum of
certifications and now aims to share his knowledge with others
through this book, YouTube videos, and blogs. John can be
found on X at @john_capobianco.
About the Technical Reviewers
Stuart Clark is a Senior Developer Advocate, author, and
DevNet Expert #2022005. Stuart is a sought-after speaker,
frequently gracing the stages of industry conferences
worldwide, presenting on his expertise in programmability and
DevOps methodologies. Passionate about fostering knowledge
sharing, he actively creates community content and leads
developer communities, empowering others to thrive in the
ever-evolving tech landscape. In his previous role as a network
engineer, he became obsessed with network automation and
became a developer advocate for network automation. He
contributed to the Cisco DevNet exams and was part of one of
the SME teams which created, designed, and built the Cisco
Certified DevNet Expert. He lives in Lincoln, England, with his
wife, Natalie, and their son, Maddox. He plays guitar and rocks
an impressive two-foot beard while drinking coffee. You can
find him on social media @bigevilbeard.
Charles Greenaway, CCIE No. 11226 (R&S, Security,
Datacenter), is a Field CTO for BT (https://www.bt.com/about/bt).
With more than 20 years of data networking experience across
LAN/WAN/DC in multiple industry sectors across the globe, he
ensures that his customers’ use of technology is aligned with
their business goals whilst developing and implementing the
technology strategy.
His current focus is helping customers transition towards
Global Fabric technologies that provide software-defined
underlay and overlay networking to underpin secure multi-
cloud connectivity.
As a member of the DevNet 500 and the Cisco Champions
programme, Charles promotes the use of programmability and
automation to make it accessible to engineers at all levels of
skill and experience. He has developed technical content
through Greencodemedia Limited and in the public domain at
events such as Cisco DevNet Create. Charles is a graduate of
Loughborough University and holds a BSc in Computer Science
and lives in the United Kingdom.
Dedications
Dan: I would like to dedicate this book to my wonderful wife
Hailey and my two amazing kids. They are my foundation and
have been patient during the entire writing process. I’d also like
to dedicate this book to my parents, who have continued to
push me to accomplish whatever I wanted in life. I love you all!
John: This book is dedicated to my wife and partner of over 25
years, Michelle. Without her support and encouragement I
would likely still be driving a forklift. Je t’aime.
Acknowledgments
Dan: First, I’d like to thank the Art of Network Engineering
(AONE) community, specifically AJ Murray, for encouraging me
to begin blogging and creating my own brand. I can confidently
say there would be no DevNet Dan without them! I’d also like to
thank NetCraftsmen for taking a chance on me in the beginning
of my consulting career. It was my first time working in the
consulting space and they’ve consistently guided me to success.
Thank you Terry, Shaffeel, Robert, Bill, Joel, and John for
continuing to encourage and push me to grow professionally.
I would like to thank Nancy Davis for giving me the
opportunity to pursue this project. She continues to encourage
and support me to pursue creative opportunities. A big thank
you to Chris Cleveland, development editor, for providing the
best support developing the book and to Stuart and Charles for
their unbiased and honest technical review of the book and its
contents. I’d also like to thank my wonderful co-author John.
He’s been a pleasure to work with and write this phenomenal
book!
Finally, thanks to all the content creators, trainers and authors
who have influenced my narration style and contributed to my
constant learning of networking and software development.
John: I would first like to acknowledge what a pivotal role St.
Lawrence College has played in my life; first as a student; then
as a professor. Thank you to Donna Graves, Janis Michael, and
of course, rest in peace, Carl Davis. To everyone I’ve ever
worked with in my career from Empire Life to the Parliament
of Canada; thank you. I am very proud of what we
accomplished together and for the confidence you had in me to
build, support, evolve, and, ultimately, automate your networks.
To Cisco for embracing me completely as one of your own; I’ve
never had such a supportive culture.
To everyone involved with the publishing of this book from
Nancy Davis and Chris Cleveland and the Pearson team; to our
editors Stuart Clark and Charles Greenaway for their
dedication to the project; and last but not least, to Dan Wade for
co-authoring the book. From joint live streams and from
collaborating on this book I am really proud to call you a friend.
And, to JB and Siming, for inviting me to a private pyATS crash
course. You have both been so giving and provided me real
guidance and direction and turned me onto Python. Thank you
both.
Contents at a Glance
Chapter 1 Foundations of NetDevOps
Chapter 2 Installing and Upgrading pyATS
Chapter 3 Testbeds
Chapter 4 AETest Test Infrastructure
Chapter 5 pyATS Parsers
Chapter 6 Test-Driven Development
Chapter 7 Automated Network Documentation
Chapter 8 Automated Network Testing
Chapter 9 pyATS Triggers and Verifications
Chapter 10 Automated Configuration Management
Chapter 11 Network Snapshots
Chapter 12 Recordings, Playbacks, and Mock Devices
Chapter 13 Working with Application Programming
Interfaces
Chapter 14 Parallel Calls (pCalls)
Chapter 15 pyATS Clean
Chapter 16 pyATS Blitz
Chapter 17 Chatbots with WebEx
Chapter 18 Running pyATS as a Container
Chapter 19 pyATS Health Check
Chapter 20 XPRESSO
Chapter 21 CI/CD with pyATS
Chapter 22 ROBOT Framework
Chapter 23 Leveraging Artificial Intelligence in pyATS
Appendix A Writing Your Own Parser
Appendix B Secret Strings
Contents
Chapter 1. Foundations of NetDevOps
Traditional Network Operations
Software Development Methodologies
NetDevOps
Comparing Network Automation Tools
The Modern Network Engineer Toolkit
CI/CD
Summary
References
Chapter 2. Installing and Upgrading pyATS
Installing pyATS
Upgrading pyATS
Troubleshooting pyATS
Summary
Chapter 3. Testbeds
What Is YAML?
What Is a Testbed?
Device Connection Abstractions
Testbed Validation
Dynamic Testbeds
Intent-based Networking with Extended Testbeds
Summary
Chapter 4. AETest Test Infrastructure
Getting Started with AEtest
Testscript Structure
AEtest Object Model
Runtime Behavior
Test Results
Processors
Data-Driven Testing
Test Parameters
Running Testscripts
Testscript Flow Control
Reporting
Debugging
Summary
Chapter 5. pyATS Parsers
Vendor Agnostic Automation
pyATS learn
pyATS Parsers
Parsing at the CLI
Parsing with Python
Dictionary Query
Differentials
Summary
Chapter 6. Test-Driven Development
Introduction to Test-Driven Development
Applying Test-Driven Development to Network
Automation
Introduction to pyATS
The pyATS Framework
Summary
References
Chapter 7. Automated Network Documentation
Introduction to pyATS Jobs
Running pyATS Jobs from the CLI
pyATS Job CLI Logs
pyATS Logs HTML Viewer
Jinja2 Templating
Business-Ready Documents
Summary
Chapter 8. Automated Network Testing
An Approach to Network Testing
Software Version Testing
Interface Testing
Neighbor Testing
Reachability Testing
Intent-Validation Testing
Feature Testing
Summary
Chapter 9. pyATS Triggers and Verifications
Genie Objects
Genie Harness
Verifications
Triggers
Trigger and Verification Example
Summary
Chapter 10. Automated Configuration Management
Intent-Based Network Configuration
Generating Configurations with pyATS
Configuring Devices with pyATS
Summary
Chapter 11. Network Snapshots
Network Profiling
Comparing Network State
Polling Expected State
Robot Framework with Genie
Summary
References
Chapter 12. Recordings, Playbacks, and Mock Devices
Recording pyATS jobs
Playback Recordings
Mock Devices
Mock Device CLI
Summary
Chapter 13. Working with Application Programming
Interfaces
pyATS APIs
REST Connector
YANG Connector
gNMI
Summary
Chapter 14. Parallel Calls (pCalls)
Scaling Performance
Parallel Call (pcall)
Performance Comparison
Summary
Chapter 15. pyATS Clean
Getting Started
Clean YAML
Clean Execution
Developing Clean Stages
Summary
Chapter 16. pyATS Blitz
Blitz YAML
Blitz Features
Blitz Usage
Blitz Development
Useful Tips
Summary
Chapter 17. Chatbots with WebEx
Integrating pyATS with WebEx
pyATS Job Integration
pyATS Health Check Integration
Adaptive Cards
Customized Job Notifications
Summary
Chapter 18. Running pyATS as a Container
Introduction to Containers
pyATS Official Docker Container
pyATS Image Builder
Building a pyATS Image from Scratch
Summary
Chapter 19. pyATS Health Check
Health Checks
Custom Health Checks
Health Check Usage
Summary
Chapter 20. XPRESSO
Installing XPRESSO
Getting Started with XPRESSO
pyATS Job into XPRESSO
Summary
Chapter 21. CI/CD with pyATS
What is CI/CD?
CI/CD in NetDevOps
NetDevOps Scenario
NetDevOps in Action
What’s Next?
Summary
Chapter 22. ROBOT Framework
What is the ROBOT Framework?
Getting Started with ROBOT Framework
ROBOT Integration with pyATS
Summary
Chapter 23. Leveraging Artificial Intelligence in pyATS
OpenAI API
Retrieval Augmented Generation with Langchain
Rapid Prototyping with Streamlit
Summary
Appendix A. Writing Your Own Parser
Writing Your Own Parser
References and Recommended Readings
Appendix B. Secret Strings
How to Secure Your Secret Strings
Multiple Representers
Representer Classes
Foreword
In late 2013, I found myself seated at the end of a restaurant
table in San Jose, celebrating the success of our latest Tcl-
language based test automation feature release. Tibor Fabry-
Asztalos, our visionary senior director, raised his glass in a
toast: “We need to look at our next goal. It’s time to transition to
Python-based automation.” With that, he gazed to his left,
where coincidentally his chain of reports were seated in order -
and each person looked further to their left, until there was just
me, the final link in the chain, entrusted to shoulder that
responsibility.
And so, pyATS was born.
After two decades of Tcl/Expect-based automation and testing at
Cisco, the call for a more modern, natively object-oriented
infrastructure was undeniable - one that could scale forward,
lower the barrier for adoption, and attract new talents as Tcl
expertise waned.
2024 marks the 10-year anniversary for pyATS. Originally
introduced as an internal testing solution, its 2017 public
launch through Cisco DevNet marked a definitive,
transformative moment. It enabled closer collaboration
between Cisco engineering, customers and their network
engineers, unlocking a plethora of opportunities and use cases.
Around that time, NetDevOps was in its infancy, and network
engineers were seeking their next career breakthrough.
pyATS was waiting just around the corner.
Rarely does one find themselves at the helm of opportunity to
shape the next decade of network automation, a chance to
redefine the landscape of network testing, and influence the
careers of countless network engineers. It’s been an exciting
journey, filled with dedication, perseverance, and innovation.
Most importantly - we took pride in what we have created and
accomplished.
Looking back, could we have done better? Absolutely. Along the
way, mistakes were made, and compromises became necessary.
But as someone special to me once said, “every decision you
make in life [sic] is always the best that you could, based on the
limited knowledge you had at that time.” We, the pyATS
development team, gave it our best, and the community
responded positively.
It’s been a privilege and an honor to be able to stand at the
precipice of a new chapter in the history of test automation at
Cisco. A heartfelt thank you goes to our team, our community
members, and everyone who supported us along the way. As
pyATS continues to evolve, my sincerest wishes for its
continued momentum and enduring legacy.
>>> from pyats import awesome
Siming Yuan, pyATS Founder, Architect & Lead Developer
Reflecting on the inception of pyATS, it’s astounding to see the
journey from an ambitious project within Cisco engineering to
a cornerstone of network automation. Born from the challenges
we faced daily, it quickly grew beyond the initial scope,
demonstrating the power of innovative solutions in a rapidly
evolving field. I am filled with gratitude for the brilliant minds I
worked with and the community that has grown around these
tools. Your enthusiasm and support have been the driving force
behind its success.
Now, looking back, I see the legacy of pyATS not just in the
technical achievements, but in the community and
collaboration it fostered. It’s been a privilege to contribute to
this chapter of network engineering, and I am proud of what
we accomplished together.
A special thanks to all the pyATS team members who have
worked on it. It wouldn’t have been possible without you.
Thank you to everyone who has joined us on this remarkable
journey. Your contributions have made all the difference.
Jean-Benoit Aubin
Lead Developer & Architect, pyATS
Introduction
This book was written to explore the powerful capabilities of
automated network testing with the Cisco pyATS framework.
Network testing and validation is a low-risk, yet powerful,
domain in the network automation space. This book is
organized to address the multiple features of pyATS and the
pyATS library (Genie). Readers will learn why network testing
and validation is important, how pyATS can be leveraged to run
tests against network devices, and how to integrate pyATS into
larger workflows using CI/CD pipelines and artificial
intelligence (AI).
Goals and Objectives
This book touches on many aspects of network automation,
including device configuration, parsing, APIs, parallel
programming, artificial intelligence, and, of course, automated
network testing. The intended audience for this book is network
professionals and software developers wanting to learn more
about the pyATS framework and the benefits of automated
network testing. The audience should be comfortable with
Python, as pyATS is built with the Python programming
language.
Candidates who are looking to learn pyATS as it relates to the
Cisco DevNet Expert Lab exam will find the use cases and
examples throughout the book valuable for exam preparation.
How This Book Is Organized
Chapter 1, Foundations of NetDevOps: This chapter
introduces NetDevOps, outlining its benefits and how it merges
with software development methodologies to enhance network
automation. We compare key automation tools and detail the
modern network engineer’s toolkit, setting the stage for
applying NetDevOps in practice.
Chapter 2, Installing and Upgrading pyATS: The chapter
shows how to install and upgrade pyATS and the pyATS library
using Python package management tools and built-in pyATS
commands.
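As a preview of that workflow, the commands below sketch a typical installation into a Python virtual environment. The package extras and the pyats version command reflect Cisco’s publicly documented tooling; adjust paths and versions for your own environment.

```shell
# Create and activate a virtual environment (recommended)
python3 -m venv pyats-env
source pyats-env/bin/activate

# Install pyATS together with the pyATS library (Genie) and optional extras
pip install "pyats[full]"

# Upgrade later with pip, or check component versions with the pyATS CLI
pip install --upgrade "pyats[full]"
pyats version check
```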
Chapter 3, Testbeds: This chapter covers YAML’s basics,
explores the concept of a testbed, and examines device
connection abstractions. We discuss methods for testbed
validation, the creation of dynamic testbeds, and how intent-
based networking integrates with extended testbeds, providing
a roadmap for their practical application.
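To make the testbed concept concrete before Chapter 3 covers it in depth, here is a minimal illustrative testbed file. The field names follow the pyATS testbed schema, while the hostname, address, and credentials are placeholders.

```yaml
# Minimal testbed sketch; real files can describe many devices and links.
testbed:
  name: example_testbed

devices:
  router1:
    os: iosxe
    type: router
    credentials:
      default:
        username: admin
        password: cisco123     # plaintext shown only for illustration
    connections:
      cli:
        protocol: ssh
        ip: 192.0.2.10
```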
Chapter 4, AETest Test Infrastructure: This chapter is one of
the key chapters in this book. It goes in-depth and reviews the
different components that make up AEtest, the testing
infrastructure that is the core of pyATS. Everything from
defining testcases and individual test sections to running
testscripts is covered in this chapter. After reading this chapter,
you’ll understand how to introduce test inputs and parameters,
define test sections, control the flow of test execution, and
review test results with the built-in reporting features.
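The overall shape of an AEtest testscript can be previewed with the skeleton below. It follows the documented common setup / testcase / common cleanup structure, but it is only a sketch: it requires a pyATS installation to run, and the section and parameter names are illustrative.

```python
# Structural sketch of an AEtest testscript (requires pyATS installed).
from pyats import aetest

class CommonSetup(aetest.CommonSetup):
    @aetest.subsection
    def prepare(self):
        # Seed a parameter that later test sections can receive by name.
        self.parent.parameters.setdefault('greeting', 'hello')

class SimpleTestcase(aetest.Testcase):
    @aetest.test
    def check_greeting(self, greeting):
        # Parameters propagate to test methods that declare them.
        assert greeting == 'hello'

class CommonCleanup(aetest.CommonCleanup):
    @aetest.subsection
    def finish(self):
        pass

if __name__ == '__main__':
    aetest.main()
```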
Chapter 5, pyATS Parsers: This chapter delves into pyATS
parsers, emphasizing vendor-neutral automation strategies. It
covers the essentials of pyATS learn and parse features,
techniques for CLI parsing, and parsing with Python.
Additionally, we explore how to perform dictionary queries and
analyze differentials, equipping readers with skills for effective
network data handling.
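pyATS parsers return Python dictionaries rather than raw text. The stdlib-only sketch below uses a hand-written dictionary shaped loosely like parsed `show version` output (the real genieparser schema varies by platform and command) and a small recursive lookup that mimics, in spirit, what the library's dictionary-query utility provides.

```python
# Illustrative stand-in for parsed "show version" output; the actual
# genieparser schema differs by platform and command.
parsed = {
    "version": {
        "os": "IOS-XE",
        "version": "17.9.3",
        "hostname": "router1",
    }
}

def find_key(data, target):
    """Return the first value found for `target` anywhere in a nested dict.

    Returns None when the key is absent (or when its value is literally None).
    """
    if isinstance(data, dict):
        for key, value in data.items():
            if key == target:
                return value
            found = find_key(value, target)
            if found is not None:
                return found
    return None

print(find_key(parsed, "hostname"))  # router1
```

The same navigation can of course be done with plain indexing (`parsed["version"]["hostname"]`) when the schema is known in advance; recursive lookup is useful when only the key name is known.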
Chapter 6, Test-Driven Development: This chapter introduces
Test-Driven Development (TDD), its application in network
automation, and an overview of pyATS. It further explores the
pyATS framework, setting the foundation for incorporating TDD
practices in network management.
Chapter 7, Automated Network Documentation: This chapter
explores automated network documentation, beginning with an
introduction to pyATS jobs. It details executing pyATS jobs from
the CLI, interpreting CLI logs, and utilizing the pyATS logs
HTML viewer for enhanced analysis. We also delve into Jinja2
templating for document creation, culminating in the
generation of business-ready documents.
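The templating idea can be previewed with Python's standard library. Jinja2, which the chapter covers, is far more capable (loops, conditionals, `{{ variable }}` syntax); the `string.Template` sketch below is a stdlib stand-in that shows the same fill-in-the-blanks concept with invented interface data.

```python
from string import Template

# string.Template is a stdlib stand-in for Jinja2: the same idea of merging
# data into a text skeleton, but with $variable placeholders instead of
# Jinja2's {{ variable }} syntax.
template = Template(
    "interface $name\n"
    " description $description\n"
)

facts = {"name": "GigabitEthernet1", "description": "Uplink to core"}
config = template.substitute(facts)
print(config)
```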
Chapter 8, Automated Network Testing: This pivotal chapter
delves into automated network testing, the core focus of the
book. It outlines a strategic approach to network testing,
including software version testing, interface testing, neighbor
testing, and reachability testing. Additionally, we explore intent-
validation testing and feature testing, essential components for
ensuring network reliability and performance.
Chapter 9, pyATS Triggers and Verifications: This chapter
reviews how to use triggers and verifications using the Genie
Harness. Triggers and verifications allow you to build dynamic
testcases, with a low code approach, that can change with your
network requirements.
Chapter 10, Automated Configuration Management: In this
chapter we will look at how to generate intent-based
configuration using data models, Jinja2 templates, and Genie
Conf objects. In addition to generating configurations, we will
see how to push configuration to network devices using a file
transfer server, Genie Conf objects, and pyATS device APIs.
Chapter 11, Network Snapshots: This chapter looks at how to
profile the network by creating and comparing snapshots of the
network. Network snapshots can be helpful when
troubleshooting a network issue or just learning about the
network’s operating state at a point in time.
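The pyATS library ships its own snapshot and diff utilities; the stdlib sketch below only illustrates the underlying idea, walking two nested "before" and "after" state dictionaries (with made-up interface data) and reporting added, removed, and changed leaves.

```python
def diff_state(before, after, path=""):
    """Return a list of human-readable differences between nested dicts."""
    changes = []
    for key in sorted(set(before) | set(after)):
        here = f"{path}/{key}"
        if key not in after:
            changes.append(f"- {here}")                    # removed
        elif key not in before:
            changes.append(f"+ {here}")                    # added
        elif isinstance(before[key], dict) and isinstance(after[key], dict):
            changes.extend(diff_state(before[key], after[key], here))
        elif before[key] != after[key]:
            changes.append(f"~ {here}: {before[key]} -> {after[key]}")
    return changes

# Two snapshots of (invented) interface state, taken before and after a change
before = {"interfaces": {"Gi1": {"status": "up"}, "Gi2": {"status": "up"}}}
after = {"interfaces": {"Gi1": {"status": "down"}, "Gi2": {"status": "up"}}}

print(diff_state(before, after))  # ['~ /interfaces/Gi1/status: up -> down']
```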
Chapter 12, Recordings, Playbacks, and Mock Devices: This
chapter introduces pyATS recordings, covering the recording of
pyATS jobs and the playback of these recordings. It explains
how to create mock devices and simulate device interactions
through the mock device CLI, offering practical insights into
testing without the need for live network equipment.
Chapter 13, Working with Application Programming
Interfaces: This chapter focuses on working with pyATS APIs,
detailing the pyATS API framework, REST connector, YANG
connector, and gNMI. It provides insights into how these tools
and protocols can be utilized for efficient network automation
and management through API interactions.
Chapter 14, Parallel Calls (pCalls): Testing in pyATS can be
sped up using parallel processing (parallelism). In this chapter,
we review the differences between parallelism and
concurrency using asynchronous programming. Parallel call
(pCall) in pyATS enables parallel execution and is built on the
multiprocessing package in the Python standard library.
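pCall itself is built on multiprocessing, as noted above; the stdlib sketch below uses threads instead (simpler to demonstrate in a few lines) to show the same fan-out/gather pattern with a simulated slow device interaction.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def collect_version(device):
    """Simulated slow device interaction (e.g., running 'show version')."""
    time.sleep(0.2)
    return f"{device}: ok"

devices = ["router1", "router2", "router3"]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # map() fans the work out across workers and gathers results in order
    results = list(pool.map(collect_version, devices))
elapsed = time.perf_counter() - start

print(results)  # all three calls finish in roughly one sleep, not three
```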
Chapter 15, pyATS Clean: In this chapter, you will see how
pyATS can reset devices during or after testing using the pyATS
Clean feature.
Chapter 16, pyATS Blitz: In this chapter, we will review pyATS
Blitz, which creates a low code approach to building pyATS
testcases using YAML syntax.
Chapter 17, Chatbots with WebEx: This chapter explores
integrating pyATS with WebEx, including pyATS job and health
check integrations. It delves into using Adaptive Cards within
WebEx for interactive content and outlines methods for setting
up customized job notifications, enhancing communication and
monitoring in network operations.
Chapter 18, Running pyATS as a Container: This chapter
introduces the concept of containers, focusing on the pyATS
official Docker container. It guides through the pyATS image
builder and details the process of building a pyATS image from
scratch, offering a comprehensive approach to deploying pyATS
as a containerized application.
Chapter 19, pyATS Health Check: This chapter dives into the
different health checks that run to ensure devices under testing
are operating correctly. Built-in health checks include checking
CPU, memory, logging, and the presence of core dump files to
ensure devices haven’t malfunctioned or crashed during
testing.
Chapter 20, XPRESSO: This section covers pyATS XPRESSO,
starting with installation instructions. It provides a beginner’s
guide to getting started with XPRESSO and details on running
pyATS jobs within the XPRESSO environment, facilitating an
easy entry into utilizing this powerful tool.
Chapter 21, CI/CD with pyATS: The concept of CI/CD is a
common practice in software development to build and test
code before it’s pushed to production. In this chapter, we see
how to use multiple network automation tools, including
GitLab, Ansible, and pyATS, to apply CI/CD practices when
pushing configuration changes to the network.
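As a concrete sketch of a pipeline stage, the fragment below shows what a GitLab CI job invoking pyATS might look like. The pyats run job command is the standard way to execute a job file, but the stage name, image tag, and file paths here are placeholders.

```yaml
# Illustrative .gitlab-ci.yml fragment; file names and image are placeholders.
stages:
  - test

network_tests:
  stage: test
  image: python:3.11
  script:
    - pip install "pyats[full]"
    - pyats run job tests/network_job.py --testbed-file testbeds/prod.yaml
```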
Chapter 22, ROBOT Framework: In this chapter, we review the
ROBOT framework, an open-source test automation framework.
The ROBOT framework allows you to use English-like keywords
to define testcases. After we review the ROBOT framework, we
see how the pyATS libraries (Unicon, pyATS, and the pyATS
library (Genie)) are integrated into the ROBOT framework by
providing test libraries that include keywords to interact with
network devices and define testcases.
Chapter 23, Leveraging Artificial Intelligence in pyATS: This
chapter explores the integration of pyATS with Artificial
Intelligence, focusing on leveraging the OpenAI API for
enhanced network automation. It discusses the use of Retrieval
Augmented Generation with Langchain for intelligent data
handling and introduces rapid prototyping with Streamlit,
showcasing the potential for AI to revolutionize network
management processes.
Appendix A, Writing Your Own Parser: This appendix covers
how to contribute to the genieparser library
(https://github.com/CiscoTestAutomation/genieparser) by
creating a new parser for a Cisco IOS XE show command.
Appendix B, Secret Strings: This appendix covers how to
protect the sensitive data in your testbed.yaml files through
secret strings.
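As a preview of that workflow, the commands below sketch how a plaintext password can be replaced with an encoded secret string. pyats secret keygen and pyats secret encode are documented pyATS CLI commands; the values shown are placeholders.

```shell
# Generate an encryption key (to be referenced from your pyATS configuration)
pyats secret keygen

# Encode a plaintext secret; the output replaces the plaintext in
# testbed.yaml, for example:  password: "%ENC{...encoded-value...}"
pyats secret encode --string MySecretPassword
```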
Chapter 1. Foundations of NetDevOps
The landscape of enterprise networking has changed
dramatically over the past five-plus years with an explosion of
new tools, technologies, and methodologies for building and
operating networks at any scale. Network automation and
programmability have also matured to the point that modern
solutions are expected to be designed and implemented with an
automate-first, agile mindset. Networks today are undergoing
an evolution similar to that of voice networks two decades ago:
the command-line interface and manual effort are rapidly being
replaced with automation and programmability. Network
engineers now must consider software development practices
when planning, designing, building, and operating networks in
their day-to-day activities. Gone are the days of manually
drafting configurations device by device; the focus is now on a
more holistic, software-driven, automated vision of network
design, configuration, and testing. Methodologies and practices
in NetDevOps are adopted and adapted from the broader
DevOps movement, extending these practices to the network
domain. Network automation and programmability is no longer
the future—it is the present—and Python Automated Test
Systems, pronounced “py A. T. S.” (pyATS), is crucial to
accelerating your NetDevOps journey. pyATS was originally
created by JB Aubin and Siming Yuan for internal Cisco testing
and has since evolved into a free, publicly available, widely
adopted network automation framework. Welcome to Cisco
pyATS: Network Test and Automation Solution; your
transformative journey with pyATS begins here.
This chapter covers the following topics:
Traditional network operations
Software development methodologies
NetDevOps
Comparing network automation tools
The modern network engineer toolkit
CI/CD
Traditional Network Operations
Historically, networks were predominantly driven by manual,
human-centric efforts, although this did not undermine the
significance of core principles, best practices, and foundational
designs crucial for well-implemented and operated networks.
Even after nearly fifty years of networking history, the
relevance of the OSI and TCP/IP models remains undiminished.
Individuals with profound knowledge, expertise, and industry
certifications often transition seamlessly into adept
NetDevOps developers, as they meld their networking prowess
with the contemporary tools and technologies essential for
automation.

However, the journey of automation wasn’t straightforward.
Initially, operations leveraged various tools for monitoring
and managing configurations. These tools, primarily element
managers, were adept at scrutinizing the active status of
elements but fell short of monitoring the systems as a cohesive
whole. Automation at a larger scale was the prerogative of
those who could afford the coding outlay, as high costs
stemmed from the lack of standardized interfaces on vendor
equipment and of high-level language support. The business
case for automation was hard to justify without a substantial
scale that could offset the high initial costs.

Fast forward to recent times, and the landscape has evolved
favorably. Vendors have lowered the entry barriers by
prioritizing their APIs, and the widespread support for
languages like Python has catalyzed the adoption of automation
technologies. Transitioning from a purely network-centric
career to a hybrid developer role now entails far less training
and learning, thanks to the improved accessibility and support
for automation tools. The ensuing sections delve into the
typical duties of a network engineer and the application of
these skills within the NetDevOps framework, reflecting the
changes brought about by more accessible automation solutions.
Architecture
The overall architecture of the network will still be required to
establish the desired outcomes. Appropriate hardware selection
for the physical layer of the network and the underlying
connectivity model are vital to driving the design of the
network. Architects also establish the best practices, validate
designs, and work with vendors in product selection. These network architects are typically responsible for the overall network; handle high-level escalations; contribute to change management boards to review and approve operational changes; and interface with enterprise leadership and management teams to establish roadmaps and long-term planning. They also select ancillary appliances that supply wireless, access and identity services, automation or software-defined networking (SDN) controllers, and the other tools necessary to meet service-level agreements (SLAs) and business requirements.
The modern high-level designs also often include the selection
of closed and open-source tools required to operate and
monitor the network. A greenfield network requires a lot of planning, and network architects don't work in silos; in fact, there is often a great deal of collaboration with the vendor's experts and specialists, the Internet Service Provider (ISP), the Chief Technology Officer (CTO), Chief Information Officer (CIO), and Chief Information Security Officer (CISO) to translate business requirements, budget, and SLAs into the design. Ultimately, the network's purpose is to connect people and their systems and applications. Business needs are fulfilled by these applications and people, which in turn are underpinned by the network.
Typically, the architect best understands the desired state, the baseline expected performance, and the services offered, and is an escalation point in the event of any unexpected issues or behaviors. In a modern NetDevOps environment, the architect often plays the role of a senior developer for the network. Code reviews, approving the merging of code via Git pull requests, developing or refactoring network automation code, ensuring the quality of Continuous Integration / Continuous Delivery (CI/CD) pipelines, holding daily standups, and playing a role in scrum teams are all additional expectations of the modern network architect, beyond being a highly certified, capable network engineer.
High-Level Design
After an architecture has been established, a high-level design
is created which is often a visual representation of the network.
Various blocks of the modular architecture are connected, and east-west (traffic remaining within a module horizontally) and north-south (traffic leaving a module northbound towards the public Internet and network egress to the ISP, or southbound into the data center) traffic flows are determined. Various protocols, network zoning (security perimeters and boundaries), Layer 1 interconnections, and levels of redundancy are visualized in a high-level design. The speeds of interconnections, optics and cabling types, wireless standards, and access-point placement are also part of the high-level design. The architects or designated designers carry out these tasks, and the designs are often validated by the collaborative architecture teams. Vendor validated designs, Requests for Comments (RFCs), and industry best practices are used as reference models during the high-level design phase.
Important artifacts such as the IP address scheme, network
protocol design (such as OSPF areas, if OSPF is the selected
routing protocol) and service (Power over Ethernet [PoE] and
802.1x) requirements accompany the high-level design. High-
level designs should be living documents that evolve and
change as the network goes through its operational lifecycle in
a continuous feedback loop.
Low-Level Design
Low-level designs overlay the actual configuration derived from the high-level design. IP addresses are assigned as subnets to areas of the network, and individual interfaces are assigned addresses. Routing and security protocol configurations are drafted as traffic flows are enforced across the network. The various spanning trees are mapped out and configurations established to ensure a loop-free topology with limited-size broadcast domains (VLANs). Redundant links are appropriately configured as interfaces are bundled and routers are set up in highly redundant pairs. Usually, low-level designs are developed from the center of the network outward. The vital high-speed backbone and core routing configurations are developed first with north-south traffic in mind. Distribution and aggregation layers, also typically routed, are drafted next, followed by access port configurations established for edge features like power, identity, security, and aggregation in the data center. Uplink port configuration standards are developed, and the Layer 2 / Layer 3 boundaries are established. Firewalls, access-control lists (ACLs), and other security boundaries are implemented. Load-balancing, wireless controllers, access and identity, and other ancillary service configurations are also included in low-level designs.
Low-level designs are often impacted the most by NetDevOps because not only do the high-level designs have to be transformed into working functional models of the network configurations, but these configurations also need to be transformed into objects that can be used for programming and automation. Data models, such as YANG, configuration templates, tests, intended configurations, and code to push the initial images and day zero configurations to greenfield devices also need to be included as artifacts in the low-level design. There is also growing popularity in Virtual eXtensible LAN (VXLAN) technologies expanding Layer 2 connectivity in the enterprise (Software-Defined Access), data center (Application Centric Infrastructure / Nexus), and WAN (SD-WAN), adding even more to consider in the design phase of the modern network.
Day Minus 1
Day–1 activity includes the translation of business requirements into functional network building blocks, including creating initial architectures, high- and low-level designs, and topology diagrams. Procuring hardware and associated software licenses; establishing service-level agreements with the business, vendors, and supporting third parties; and preparing internal processes, procedures, and support models are all part of day–1 activity in preparation for standing up the network. All activities prior to actual device onboarding can be described as day–1 activity, where enterprises prepare to deploy their new network. In this phase the enterprise typically gathers the recommended software image releases from the Internet and transforms low-level designs into initial device configurations. These device configurations are typically the bare minimum to get a device up and running and often require design and operational knowledge of what will be connected to the device. Information is often entered via serial connection to the management console of each device, using the serial number to identify the placement of the device in the new network. Day–1 configurations are the prerequisite configurations needed to establish a minimum level of off-device connectivity, such as the hostname, VTY line configurations, a username and enable secret, and possibly hostname resolution (DNS) and cryptography (SSH keys). A manual mapping from an IP address management (IPAM) system and the low-level design is often performed by a human operator, who applies a minimal base connectivity configuration from a template of some kind or from individual per-device instructions.
Often these configurations are performed in the central warehouse or location where the devices have been received from the shipping vendor. Devices are potentially barcoded, and their serial and part numbers recorded. Where devices are to be stacked, the stacks are assembled and the basic stacking technology deployed. Power supplies, inter-stack cabling, SFP (Small Form-Factor Pluggable) module insertion, and the assembly of all devices take place either within the warehouse or during the on-site truck roll. At the end of day–1, devices are typically flashed with the selected software image, basic testing has been performed, and minimal configuration applied. Devices are then repackaged with their unique identifiers (hostname, management IP address) and delivered to the data center, rack, or telecom closet for installation and day 0 configuration. The sections that follow look at some tasks that can be automated in day–1.
Offline Initial Configuration
Using a testbed file, covered in detail in Chapter 4, “Testbeds,”
new devices can be described in human readable YAML format
at scale. By extending the YAML file to include the intended
state, the initial configuration can be derived from APIs or
Jinja2 templates in an “offline” approach. The result is a simple, but validated, initial configuration file derived from the low-level design that operators can quickly and accurately apply via serial connection in the warehouse. This approach is not only faster, a key goal given the typically highly manual day–1 process, but also more accurate, of higher quality, and dramatically simpler for human operators.
Instead of having to manually adjust a set of default
instructions per device or translate an IP address plan into
management IP address configurations, pyATS can be leveraged
to automate this process and provide human operators with
ready-to-go initial configurations that are easily applied over
the serial connection.
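As a minimal sketch of this offline approach, the following uses Python's standard-library string.Template as a stand-in for Jinja2. The device values, interface names, and template fields here are hypothetical illustrations, not examples from a real testbed.

```python
from string import Template

# Minimal day-minus-1 base configuration template (illustrative only;
# a real deployment would use Jinja2 and values from the low-level design).
BASE_CONFIG = Template("""\
hostname $hostname
ip domain name $domain
username admin secret $secret
interface $mgmt_intf
 ip address $mgmt_ip $mgmt_mask
 no shutdown
line vty 0 4
 login local
 transport input ssh
""")

def render_initial_config(device: dict) -> str:
    """Render an offline initial configuration from intent data."""
    return BASE_CONFIG.substitute(device)

config = render_initial_config({
    "hostname": "dist-sw-01",           # hypothetical device
    "domain": "example.com",
    "secret": "ChangeMe123",
    "mgmt_intf": "GigabitEthernet0/0",
    "mgmt_ip": "10.0.0.10",
    "mgmt_mask": "255.255.255.0",
})
print(config)
```

In practice the device values would come from the extended testbed YAML, and the rendered text would be saved as a per-device file that warehouse operators apply over the serial connection.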
Software Images
All network devices and appliances should have either the
latest or the vendor recommended release applied to them
during the day–1 process. Images can be distributed locally
using USB drives or downloaded from a centralized image
repository that is reachable after the initial minimal
configuration is applied.
pyATS can play an important role in day–1 activity and
automate both the initial base configurations as well as the
software image management. The pyATS Clean framework,
further explored in Chapter 15, “pyATS Clean,” can be used to
load new images and apply a base configuration.
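pyATS Clean handles image loading itself; purely as an illustrative sketch of the underlying compliance check, the snippet below extracts a version string from hypothetical show version output with a regular expression and compares it to an assumed recommended release.

```python
import re

RECOMMENDED = "17.9.4a"  # hypothetical vendor-recommended release

def parse_ios_version(show_version):
    """Extract the version string from 'show version' output, or None."""
    match = re.search(r"Version\s+([\w.]+)", show_version)
    return match.group(1) if match else None

# Hypothetical first line of 'show version' output.
sample = "Cisco IOS XE Software, Version 17.9.4a"
version = parse_ios_version(sample)
print(f"running {version}, compliant: {version == RECOMMENDED}")
```

A day–1 automation job could run a check like this against every device in the testbed and flag any device still running a non-recommended image.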
Day 0
Day 0 is the on-boarding process. Devices are racked, stacked,
powered on, and interconnected in their location in the
network depending on the role they will play. It is at this point
where connectivity to the device’s neighbors, centralized
management, orchestration, and monitoring systems are
established. Access from distributed operator workstations or a management zone is also established, providing access to pyATS to perform additional configuration. The remaining configuration items that rely on the device being connected to the full network topology can now be pushed from pyATS.
There are many advantages to using pyATS to complete the day
0 onboarding configurations, including the following:
Intent-driven configuration from the testbed YAML file for a
single device or an entire new topology at scale.
Templated configurations from APIs or Jinja2
Pre-change state capture
Automated configuration deployment
Set of initial pyATS tests validating the onboarding
Post-change state capture
Differential output comparing day–1 and day 0 configuration
state
Automated state capture
Automated business-ready documentation
With pyATS, network engineers can move towards “zero-touch provisioning” (ZTP) without using Plug and Play (PnP), DHCP, or TFTP/FTP/SCP server infrastructure to achieve automated onboarding at any size or scale. The sections that follow look at Day 0 activities that can be automated with pyATS.
Layer 1
Layer one wiring and cabling of infrastructure is still,
unfortunately, required and device interconnectivity (which
ports connect to which ports) still needs to be mapped out from
the low-level design. One major advantage of pyATS is that
these interconnects can be quickly and automatically tested as
part of the onboarding job. The presence of certain neighbor relationships, like Cisco Discovery Protocol (CDP) or its open standard counterpart, Link Layer Discovery Protocol (LLDP), or even OSPF or BGP neighbors, can be added as tests in the provisioning job. Engineers can review the job logs to quickly and easily determine if the provisioning was successful based on the results of these tests. Ping tests can be used to validate
reachability from a given device to another destination in the
network; interface tests can be used to confirm the lack of
errors or the presence of full-duplex connectivity, at scale,
across entire testbeds. Network engineers can be reassured by
pyATS that devices have been deployed correctly and that the
required wiring is in place at the end of the onboarding
process.
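A Layer 1 interconnect check like the one described above can be sketched as a comparison between the cabling plan in the low-level design and the neighbors actually observed (for example, from parsed CDP or LLDP output). The device and interface names below are hypothetical; in a real pyATS job the observed data would come from a parsed show command.

```python
# Expected interconnects from the low-level design (hypothetical topology).
expected = {
    ("dist-sw-01", "Gig1/0/1"): ("core-rtr-01", "Gig0/0/1"),
    ("dist-sw-01", "Gig1/0/2"): ("core-rtr-02", "Gig0/0/1"),
}

# Neighbors observed on the device, e.g. parsed from 'show cdp neighbors'.
observed = {
    ("dist-sw-01", "Gig1/0/1"): ("core-rtr-01", "Gig0/0/1"),
    ("dist-sw-01", "Gig1/0/2"): ("core-rtr-02", "Gig0/0/2"),  # miswired!
}

def check_wiring(expected, observed):
    """Return a list of (local_port, expected, actual) mismatches."""
    failures = []
    for local, want in expected.items():
        got = observed.get(local)
        if got != want:
            failures.append((local, want, got))
    return failures

for local, want, got in check_wiring(expected, observed):
    print(f"MISWIRED {local}: expected {want}, found {got}")
```

An empty failure list means the physical installation matches the design; any entry pinpoints exactly which cable the on-site technician needs to move.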
Initial Configuration
Initial configuration can be derived from the combination of
the pyATS testbed and either pyATS Application Programming
Interfaces (APIs) or Jinja2 templates (expanded upon in Chapter
7, “Automated Network Documentation”) that provide the Cisco
IOS configuration code required to complete the device's onboarding process. Much like the initial day–1 configuration, each device can use an intended configuration abstracted from Cisco OS configuration stanzas into human-readable structured data files. Every aspect of a device's configuration can be captured in the simple testbed file and the actual configuration generated from either pyATS APIs or Jinja2 templates. Using a NetDevOps approach along with Git version and source control, the entire lifecycle of a device's configuration can be tracked from day–1 initial configuration and onboarding configuration all the way through to day N configuration. In fact, using this approach with pyATS, day–1 and day 0 initial configurations could be merged into a single intent-based, templated configuration, reducing the deployment lifecycle to a single stage of development. Whether you apply minimal initial configurations and then complete the full configuration over two phases, or merge the initial configurations so they are total and complete in day–1, pyATS provides a major reduction in error-prone, human-driven initial configuration management.
Initial Testing and Validation
In the preceding section, “Layer 1,” we discussed how pyATS opens up a myriad of testing possibilities after the initial onboarding phase. This encompasses reachability, neighbor
relationships, interface configuration, and counter information
assessments—all crucial for ensuring a smooth onboarding
process. However, as will be unraveled in the subsequent
chapters of this book, pyATS’s utility goes beyond just these
preliminary tests. Any output from a show command can be
rigorously scrutinized through pyATS tests. Once the initial
onboarding configuration is deployed to the device by pyATS,
you have the freedom to design tests tailored to your specific
needs, aiding in validating the success of the onboarding
processes.
Before the onsite technicians conclude their tasks at the
deployment site, pyATS tests serve as a reliable tool to ascertain
that no manual interventions or amendments are necessitated.
Devices can be drop shipped with pre-configurations, and upon
arrival, be automatically tested to ensure they are in a healthy
state, connected accurately, and configured in alignment with
the intended design. This eradicates the traditional dependence
on the CLI, sifting through the output of various show
commands, and relying on human operators to undertake the
challenging task of validating a device’s configuration and state.
Instead, pyATS automates these tasks, with its job logs and
HTML log viewer significantly simplifying the most arduous
aspect of day 0 configuration—ensuring accurate configuration
and connectivity during the onboarding phase.
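As an illustrative sketch of such a validation, the data structure below is hypothetical, though it mirrors the kind of structured output a pyATS parser returns from a show command. A post-onboarding check might assert that every interface is up, running full duplex, and error-free:

```python
# Hypothetical parsed interface data, similar in shape to what a
# 'show interfaces' parser would return (assumed structure, not real output).
interfaces = {
    "GigabitEthernet0/0": {"oper_status": "up", "duplex": "full",
                           "in_errors": 0, "out_errors": 0},
    "GigabitEthernet0/1": {"oper_status": "up", "duplex": "half",
                           "in_errors": 12, "out_errors": 0},
}

def validate_interfaces(interfaces):
    """Collect human-readable failures for any unhealthy interface."""
    failures = []
    for name, data in interfaces.items():
        if data["oper_status"] != "up":
            failures.append(f"{name}: not up")
        if data["duplex"] != "full":
            failures.append(f"{name}: duplex is {data['duplex']}")
        if data["in_errors"] or data["out_errors"]:
            failures.append(f"{name}: interface errors present")
    return failures

for failure in validate_interfaces(interfaces):
    print("FAIL:", failure)
```

Run at the end of the onboarding job, a check like this tells the on-site technician whether anything needs attention before they leave the site.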
The narrative so far has revolved around the correctness of an
individual device being onboarded and its interaction with
neighboring devices. However, there’s a broader perspective to
be explored. pyATS is not confined to device-centric validation;
it extends to scrutinizing the state of the entire network system.
For instance, even if the configurations are accurate, interfaces
are active, and routing adjacencies are established, there might
be underlying routing faults or perhaps a newly defined prefix
or ACL malfunctioning. These issues might manifest at an area
border router or a peering router situated elsewhere in the
network. pyATS empowers us to delve deeper and test for such
network-wide anomalies too! It’s not just about the correctness
of individual device configurations but a holistic examination
of the network system’s state, ensuring everything operates
cohesively and as intended across the network infrastructure.
Day One
Day one can be demarcated by everything that occurs after
network devices are onboarded, provided their initial
configuration(s), tested, and validated as ready for service. Day one can be further broken down into day two and day N configurations for simplicity and to provide further demarcation of tasks. pyATS plays a critical role in day one, allowing network engineers and operators to update their intent files to reflect required changes to the network configurations. The intent files should be the only place network staff need to make updates, but when provisioning new services, new APIs or Jinja2 templates will also need to be developed to support services that were not in scope of the original intended configurations. Using Git and a Git repository, these changes should also be part of a working branch, tested and validated, and ultimately merged back into the main branch and deployed to the network devices as part of a Continuous Integration / Continuous Delivery (CI/CD) pipeline, which is covered in detail in Chapter 21, “CI/CD with pyATS.” Human error is virtually eliminated in this CI/CD approach because only the easy-to-read, human-compatible YAML file is changed and pyATS takes care of the rest of the configuration management. The sections that follow cover some Day One activities that can be automated with pyATS.
Incremental Configuration
As the network evolves over time and things change, the configuration of the network also evolves and changes. This could be something as simple as a new Network Time Protocol (NTP) source or Dynamic Host Configuration Protocol (DHCP) server that the network devices need to be made aware of. Prior to network automation, operators would need to
connect to each device in the network, possibly dozens or
hundreds of devices, and manually apply the changes using the
CLI. This was time consuming, error-prone, and obviously not
the best use of a person’s time. Configuration drift between the
intended configuration and state and the actual running-
configuration also occurs over time. Until every device is
reconfigured there is disparity between devices. Changes might
take hours, days, weeks, or even months to complete depending
on the size of the network and available resources. These
changes are also prone to human error where the wrong
device, incorrect information, or simple missed keystrokes
could lead to disparity between intent and actual
configurations. In a worst-case scenario outages or
interruptions to the flow of network traffic are introduced
accidentally as a result of these human errors.
The reconfiguration is also only part of the story; testing and
validating the changes are also required compounding the time
it takes to establish confidence the change was successful.
Often, the validation can take longer than the actual change
itself.
pyATS really shines in day one activities as not only can it
derive a configuration directly from the intended state, but it
can also configure, test, and document these changes at scale,
automatically, without a human ever having to log into the CLI:
Capture pre-change configuration
Test pre-change state
Report errors and abort the change
Confirm error free state
Push configuration to the device
Test post-change state
Report errors and rollback the change
Confirm error free state
Confirm intent delivered
Capture post-change state
Update documentation
Perform a differential
Display changes to configuration and state
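The capture-and-compare portion of this workflow can be sketched with only Python's standard library. The configuration snapshots here are hypothetical; a real pyATS job would capture them from the device before and after pushing the change.

```python
import difflib

# Hypothetical running-configuration snapshots captured before and
# after a change (a real job would pull these from the device).
pre_change = """\
hostname dist-sw-01
ntp server 10.0.0.1
"""
post_change = """\
hostname dist-sw-01
ntp server 10.0.0.1
ntp server 10.0.0.2
"""

def config_diff(before, after):
    """Return a unified diff of two configuration snapshots."""
    return list(difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="pre-change", tofile="post-change", lineterm=""))

for line in config_diff(pre_change, post_change):
    print(line)
```

The resulting differential shows exactly what changed, giving operators an auditable record of the change alongside the pre- and post-change test results.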
Provisioning New Endpoints
Networks grow in size as capacity is required. Fortunately, with pyATS, all we need to do is add the new devices' intended configuration to the testbed file. The initial configuration will be generated for offline installation, and the initial image and configuration deployment automated. The new devices can be tested and
existing device tests modified to accommodate the presence of a
new device. So not only can new devices be quickly and accurately added to existing networks and tested as new devices, but the existing network can also be regression tested to validate that no unintended consequences or impacts occurred as a result of the new device's presence.
Provisioning New Services
The only time development of anything outside the testbed file is required is when new services are provisioned. Both the data model,
the YAML intent in the pyATS testbed file, and accompanying
pyATS APIs or Jinja2 templates, will need to be created to
provide these new services. If, for example, quality of service
(QoS) was not deployed as part of the day-0 onboarding but
needs to be configured to achieve the business SLAs, support
new devices such as voice over IP (VoIP) devices or IP cameras,
or to reduce congestion in the new network topology, the QoS
model needs to be mapped to the intent data model in YAML
and then the appropriate APIs or Jinja2 templates developed.
The pyATS tests to support the validation of the implementation
also need to be developed. pyATS can reduce the time it takes to
develop and deploy new services as networks become “agile”,
not “fragile”.
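To make the QoS example concrete, here is a sketch of mapping an intent data model to IOS-style configuration lines. The intent structure, class names, and percentages are hypothetical assumptions, standing in for the YAML intent and Jinja2 templates described above.

```python
# Hypothetical QoS intent data model, as it might appear in a
# structured intent file (assumed shape, not a real pyATS schema).
qos_intent = {
    "classes": [
        {"name": "VOICE", "dscp": "ef", "priority_percent": 10},
        {"name": "VIDEO", "dscp": "af41", "bandwidth_percent": 20},
    ]
}

def render_qos_config(intent):
    """Translate the QoS intent into IOS-style configuration lines."""
    lines = []
    for cls in intent["classes"]:
        lines += [f"class-map match-any {cls['name']}",
                  f" match dscp {cls['dscp']}"]
    lines.append("policy-map WAN-EDGE")
    for cls in intent["classes"]:
        lines.append(f" class {cls['name']}")
        if "priority_percent" in cls:
            lines.append(f"  priority percent {cls['priority_percent']}")
        elif "bandwidth_percent" in cls:
            lines.append(f"  bandwidth percent {cls['bandwidth_percent']}")
    return lines

print("\n".join(render_qos_config(qos_intent)))
```

Adding a new class of service then becomes a one-line change to the intent data, with the template logic, rather than a human at the CLI, responsible for producing consistent configuration on every device.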
Day N
Day N activities involve the day-to-day activities performed to
maintain a healthy network state. In the past, network
operations typically monitored the network for system logging
(syslog) events pushed from the network in response to activity on a device, as well as Simple Network Management Protocol (SNMP) polling (information pulled from the device at intervals) or traps (events pushed from the device), and would respond according to these events. The sections that follow describe Day N network management and monitoring activities in more detail.
Monitoring (and Now Testing)
Traditional network monitoring can be greatly augmented with
pyATS testing. In addition to syslog and SNMP events, the
network can now be proactively tested by pyATS to confirm the
intended configuration and state is maintained. pyATS also adds
context to sometimes vague syslog or SNMP information. A
typical alarm from syslog might indicate an interface has gone
down leaving operators to determine the impact of losing that
interface. This analysis might take time and low-level designs
might need to be referenced to determine the impact of a
particular interface being down. With pyATS it can be quickly and easily determined not only that an interface has gone down, but also what other impacts that failed interface has had on a device or on the entire topology. Multiple pyATS tests might start to fail: neighbor relationship tests, routing protocol or routing table tests, and ping and other connectivity tests. The scope of the impact can easily be determined using ongoing scheduled pyATS tests, and the severity of the outage established rapidly without ever having to log in to one device or multiple devices to assess the impact. This is just one example involving an interface going down, but every facet of a device's or an entire network's configuration, state, and health can be continuously tested by pyATS, arming network operations with automated capabilities far beyond syslog and SNMP.
Responding to Events
In addition to traditional monitoring and the new continuous testing capabilities offered by pyATS, day N management of networks includes responding to events. These events arrive as network-generated syslog messages or SNMP traps, or as failed pyATS tests, and network operations needs to respond to them. One of the major benefits of pyATS is that it provides built-in alerting capabilities, including sending test reports by email or to messaging platforms such as Webex, Slack, or Discord. Individual failed tests can also be sent as Webex alerts to the operators or engineers responsible for responding to and remediating failed tests. pyATS allows for a proactive response to
events as failed tests can identify user impacting failures
immediately and alert those responsible for addressing the
situation. This is a paradigm shift from either trying to make
sense of multiple syslog or SNMP traps or, worse, responding to
calls from the users impacted by degraded network state.
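As a sketch of such an alert using only the standard library: the endpoint shown is the public Webex messages API, but the token, room ID, and message wording are placeholder assumptions, and the request is built without being sent.

```python
import json
import urllib.request

WEBEX_URL = "https://webexapis.com/v1/messages"  # Webex messages API

def build_alert(token, room_id, failed_test):
    """Build (but do not send) a Webex alert for a failed pyATS test.

    The token and room ID are placeholders; actually sending requires
    a real Webex bot token and room.
    """
    payload = {
        "roomId": room_id,
        "markdown": f"**pyATS test failed:** `{failed_test}`",
    }
    return urllib.request.Request(
        WEBEX_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_alert("FAKE_TOKEN", "FAKE_ROOM", "test_ospf_neighbors")
print(req.full_url, req.get_method())
```

Wired into a scheduled test job, a builder like this turns each failed test into an immediate, targeted notification instead of an entry buried in a log.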
Upgrading
Software image management is also a major part of day N
operations and network management. Network vendors such
as Cisco release frequent updates to address flaws in existing
software releases or to release new features and components to
the software image. Security patches are also a major
consideration and should be addressed according to the
severity of the security flaw. With the combination of pyATS
testing and pyATS Clean framework, software image
management becomes dramatically easier as operations are
able to regression test new images against known working
state. pyATS tests can be used to validate the software upgrade
has not modified the configuration or state of the network
compared to the previously working baselined tests. The pyATS
Clean framework can be used to deploy software and related
configuration ensuring successful software upgrades. Should
commands need to be modified as the result of a new software
image the pyATS testbed (human readable intent), pyATS APIs,
or Jinja2 templates can be modified and new configurations
generated and deployed to respond to changes in the structure
of device configurations, deprecated configurations, or net-new
configuration requirements. All of this happens automatically, quickly (with agility), and with the highest quality. Once again, pyATS can be
used to test individual devices in isolation as well as the entire
topology before and after software images are upgraded.
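The regression idea can be sketched as a comparison of pre- and post-upgrade state snapshots. The data below is hypothetical; a real job would capture the state from the device before and after the upgrade, with the version change expected and everything else expected to be identical.

```python
# Hypothetical state snapshots captured before and after an upgrade.
pre_upgrade = {
    "version": "17.6.3",
    "ospf_neighbors": ["10.0.0.1", "10.0.0.2"],
    "interfaces_up": 24,
}
post_upgrade = {
    "version": "17.9.4a",
    "ospf_neighbors": ["10.0.0.1"],  # one neighbor lost after upgrade
    "interfaces_up": 24,
}

def state_drift(before, after, ignore=("version",)):
    """Report keys whose values changed, ignoring expected changes."""
    return {key: (before[key], after[key])
            for key in before
            if key not in ignore and before[key] != after[key]}

for key, (old, new) in state_drift(pre_upgrade, post_upgrade).items():
    print(f"DRIFT {key}: {old} -> {new}")
```

An empty drift report gives confidence that the upgrade changed only the software version; any entry, like the lost OSPF neighbor here, flags exactly what the upgrade broke.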
Decommissioning
At the end of a device’s lifecycle, it will need to be
decommissioned and likely replaced with a newer model. The
process starts over again from day–1 and the replacement device is onboarded. One major advantage of using pyATS is that the original device's intent and configuration APIs or templates can be used as a baseline and reused to develop the replacement device's configuration. The data model and templates may need to be adjusted, but the operator is not starting from scratch. Both the configuration and the test suite can be reused, enabling rapid device replacement, decommissioning, and reprovisioning of new devices.
Software Development Methodologies
The development practices, processes, and culture are as important, if not more important, than the code or network configurations, and they are often the most difficult changes to adopt. Traditional networking has mostly followed the waterfall methodology, relying on legacy processes from almost fifty years ago! Software developers have adopted a more modern approach, known as Agile, which emerged in the early 2000s and is a big reason the world around us is dominated by ever-evolving applications and innovation. Can we bring these Agile practices to the world of networking? Yes: it is known as NetDevOps. It is important to understand the evolution from waterfall practices to Agile practices and DevOps to see how well this new approach works with modern network engineering. The sections that follow look at the various software methodologies in contrast.
Waterfall
Most networks have historically been managed using the
waterfall methodology. In fact, if you review the traditional
network section the various stages of the waterfall process can
be mapped to the architecture, high and low-level designs, and
day–1 to day N activities. Waterfall revolves around a linear
progression from one phase to the next with an emphasis on
gathering and defining requirements at the beginning of the
project and building solutions in rigid phases of the project’s
lifecycle. Each phase builds upon the previous phases
deliverables as work flows down the waterfall:
1. Requirements are captured in a network requirements
document
2. High-level details lead into lower-level details. The requirements are analyzed to produce IP schemes, subnets and VLANs, routing, ACLs, and other network models and schemas
3. A network architecture and high-level designs are created
4. Network configurations are developed from the architecture
and designs
5. Devices are on-boarded and configured
6. Testing occurs and networks are debugged and validated
7. Operations take over and the resulting network is monitored
According to waterfall, a new phase can only begin when the preceding phase has been reviewed and tested, leading to a potentially long and rigid overall project. The waterfall
approach was widely used in the 1970s and 1980s, particularly
in software development and project management.
Lean
In the 1980s the Japanese automotive company Toyota
developed a system known as “The Toyota Way” or the Toyota
Production System (TPS) in an effort to improve efficiency by
reducing waste. This system was coined “Lean” in 1988 in the
John Krafcik article “Triumph of the Lean Production System”
and defined in 1996 by American researchers James Womack
and Daniel Jones. Lean production primarily focused on
reducing production times and delivering “just-in-time"
manufacturing (JIT) matching production to demand. A major
byproduct of Lean was the elimination of waste from the
production processes and the use of automated quality controls.
The five key principles of Lean, as outlined by Womack and
Jones, are:
Precisely specify value by product
Identify value stream for each product
Make value flow without interruptions
Let customers pull value from the producer
Pursue perfection
Ultimately Lean was defined as a way to do more with less and
maximize efficiency.
Agile
Stemming from the Lean changes in the manufacturing sector and in management approaches in the early 1990s, and seeking to define a new approach to software development and operations, seventeen software developers met in 2001 to discuss lightweight development practices and released the “Manifesto for Agile Software Development,” which defined what they valued as software developers. This new approach strived to bring some of the Lean manufacturing principles, such as reduced production times, “just in time” (JIT) manufacturing, and the overall elimination of waste from processes, and apply them to software development. This was in sharp contrast with previous heavyweight development approaches such as the predominant waterfall methodology.
Agile is based on twelve principles:
1. Customer satisfaction by early and continuous delivery of
valuable software.
2. Welcome changing requirements, even in late development.
3. Deliver working software frequently (weeks rather than
months).
4. Close, daily cooperation between business people and
developers.
5. Projects are built around motivated individuals, who should
be trusted.
6. Face-to-face conversation is the best form of communication
(co-location).
7. Working software is the primary measure of progress.
8. Sustainable development: the sponsors, developers, and
users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design
enhances agility.
10. Simplicity—the art of maximizing the amount of work not
done—is essential.
11. Best architectures, requirements, and designs emerge from
self-organizing teams.
12. Regularly, the team reflects on how to become more
effective, and adjusts accordingly.
DevOps
DevOps is a set of practices, processes, and tools that bring the
world of software development (Dev) and IT Operations (Ops)
together as one cohesive discipline. Much like Lean, the goals of
DevOps are to shorten the systems development lifecycle (SDLC)
while providing continuous improvement and high software
quality. Many ideas in DevOps come directly from the Agile
methodology. The key principle of DevOps is breaking down
barriers and silos of developers and operations bringing a
shared sense of ownership over the product. Automation is a
core component of DevOps and the tools used to implement
DevOps are critical to its success. Automated build and testing,
continuous integration, continuous delivery, and continuous
deployment, which originated in Agile, are pillars of DevOps.
Expanding into Networks
Seeing the obvious benefits of Agile and DevOps, the world of
networking has adopted the practices, processes, and many of
the tools to adopt NetDevOps, especially after the availability of
many new network automation tools, including pyATS. Hank
Preston, Principal Engineer at Cisco, defines NetDevOps as
follows: “NetDevOps brings the culture, technical methods,
strategies, and best practices of DevOps to Networking.” The first
point Hank makes is that the culture of DevOps, as well as the
technical aspects, need to be adopted by the networking team
first to adopt NetDevOps. Networks need to become agile, not
fragile, while still respecting the blast radius that changes to
the network can have.
Infrastructure as Code (IaC)
One of the first breakthroughs for NetDevOps was the
widespread adoption of what has become known as
Infrastructure as Code. Elements of infrastructure, like the
network, are defined in structured data like YAML Ain’t Markup
Language (YAML) and JavaScript Object Notation (JSON) and
configurations delivered with Python, RESTful APIs, or
traditional CLI commands abstracted as code elements.
Treating infrastructure as code enables NetDevOps and the
application of software development practices like Agile to
traditional waterfall network lifecycles. pyATS is an
implementation of IaC allowing NetDevOps to use Python to
program solutions for the network.
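The idea can be sketched in a few lines of plain Python. The dictionary below stands in for a version-controlled YAML intent file (the hostname, interface names, and addresses are invented for illustration), and a small function renders IOS-style configuration lines from it:

```python
# Hypothetical intent data; in practice this would live in a YAML file
# under version control and be loaded with a YAML parser or pyATS.
intent = {
    "hostname": "edge-router-1",
    "interfaces": {
        "GigabitEthernet1": {"ip": "10.10.20.175", "mask": "255.255.255.0"},
        "GigabitEthernet2": {"ip": "172.16.252.21", "mask": "255.255.255.0"},
    },
}

def render_config(intent):
    """Generate IOS-style CLI configuration lines from structured intent."""
    lines = [f"hostname {intent['hostname']}"]
    for name, attrs in intent["interfaces"].items():
        lines += [
            f"interface {name}",
            f" ip address {attrs['ip']} {attrs['mask']}",
            " no shutdown",
        ]
    return "\n".join(lines)

print(render_config(intent))
```

In a real NetDevOps pipeline the rendering step would typically be done with Jinja2 templates or pyATS itself; the point is that the network's definition is data plus code, both of which can be version controlled and tested.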
Test-Driven Development
By using a Python library and treating the infrastructure as
code not only can NetDevOps be applied and Agile principles
enacted, but other software development practices can also be
utilized to maximize infrastructure as code quality including an
approach known as test driven development. Testing is built
into the process as requirements are broken down into small
units known as test cases. Instead of building a full product and
then testing it, testing occurs during the development cycle and
each unit is tested, and finally all tests are executed against the
whole product. This approach fits very well with NetDevOps,
and pyATS in particular, as network requirements can be
broken down into consumable sized test cases, developed and
configured, and then tested for quality. TDD encourages simple
designs and, according to Kent Beck who is credited with
developing the technique, “inspires confidence” in developers.
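A minimal sketch of the red/green rhythm using Python's built-in unittest module (the intended MTU value and the check_mtu helper are hypothetical, not pyATS APIs): the test is written first against the intended state, fails until the check is implemented correctly, and then passes.

```python
import unittest

# Hypothetical intended state; in pyATS this would come from the
# single source of truth rather than a constant.
EXPECTED_MTU = 1500

def check_mtu(parsed_interface):
    """Return True if the interface MTU matches the intended value."""
    return parsed_interface.get("mtu") == EXPECTED_MTU

class TestInterfaceMtu(unittest.TestCase):
    """A requirement broken down into small, consumable test cases."""

    def test_mtu_matches_intent(self):
        # In real pyATS usage this dict would come from a device parser.
        self.assertTrue(check_mtu({"mtu": 1500, "oper_status": "up"}))

    def test_mtu_mismatch_detected(self):
        self.assertFalse(check_mtu({"mtu": 9000, "oper_status": "up"}))
```

Running `python -m unittest` executes both cases; pyATS provides the same rhythm at network scale through its own test scripts.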
NetDevOps
NetDevOps, a philosophy blending the methodologies, cultural
practices, tools, and Agile approach from DevOps with network
operations, facilitates a seamless transition from day-1 to day-N
network activities within the DevOps model. Central to
NetDevOps is the ethos of continuous improvement,
characterized by delivering high-quality code through frequent
releases and iterations, enhancing network performance and
reliability. Figure 1-1 delineates the DevOps Lifecycle,
elucidated in greater detail in the subsequent sections.
Cisco pyATS, a pivotal tool in the NetDevOps arsenal, seamlessly
integrates within this lifecycle, acting as a catalyst in
automating and validating network states and configurations.
Here’s how Cisco pyATS interlaces with the NetDevOps process:
1. Planning & Coding: In the initial phases of the DevOps cycle,
network designs and configurations are conceived and coded.
Cisco pyATS can be leveraged to script automated tests ensuring
the integrity and efficiency of network configurations right
from the drawing board.
2. Testing: Post the coding phase, Cisco pyATS shines in the
testing domain. It automates the validation of network
configurations, reachability, and the state of various network
elements. This automation expedites the testing process,
ensuring that any deviations from the intended configurations
are promptly identified and rectified.
3. Integration & Deployment: As new code and configurations
are integrated and deployed, Cisco pyATS continues to play a
crucial role. It aids in automating the deployment process while
concurrently running validation tests to ensure seamless
integration with existing network setups.
4. Operation: During the operation phase, Cisco pyATS
facilitates continuous monitoring and validation of the
network, ensuring it aligns with the defined operational
standards and configurations. This continuous validation is
instrumental in maintaining network reliability and
performance.
5. Monitoring & Feedback: Cisco pyATS provides a robust
framework for monitoring network states and collecting
valuable feedback. This feedback is crucial for identifying areas
of improvement, which feeds into the planning phase for
subsequent iterations, thus completing the cycle.
6. Continuous Improvement: By providing insightful data and
automation, Cisco pyATS fosters a culture of continuous
improvement within the NetDevOps framework. It facilitates
quicker iterations and releases, ensuring that the network is
always optimized and aligned with evolving organizational
needs.
Incorporating Cisco pyATS within the NetDevOps framework
elevates the efficiency, reliability, and agility of network
operations, embodying the continuous improvement spirit
central to the NetDevOps philosophy. Through automation and
validation, Cisco pyATS propels networks closer to the
aspiration of self-operating and self-healing infrastructures,
making it an invaluable asset in modern NetDevOps practices.
Figure 1-1 DevOps Lifecycle
Plan
Fail to plan, plan to fail. For “greenfield”, or net-new, networks
we begin with planning. Business requirements, service level
agreements, architectures, high and low-level designs drive the
planning phase of NetDevOps.
Code
Instead of manually crafting configurations, in NetDevOps we
start to create data models, templates, and code to generate and
deliver configurations to the devices. pyATS testbeds, covered in
depth in Chapter 3, “Testbeds,” describe devices and topologies
that are modeled in YAML as intent. Jinja2 templates or pyATS
APIs are then used to generate the Cisco OS configuration code,
substituting values from the YAML intent in the form of
variables. Using pyATS, developers can approach the coding
phase using test driven development (TDD) creating tests that
initially fail, and then are coded to pass, to validate intent and
connectivity.
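The substitution step can be illustrated with the standard library's string.Template standing in for a Jinja2 template (the VLAN ID and name are invented for illustration):

```python
from string import Template

# Simplified stand-in for a Jinja2 template of Cisco OS configuration.
vlan_template = Template(
    "vlan $vlan_id\n"
    " name $vlan_name\n"
)

# Intent values that would normally be loaded from a YAML file.
intent = {"vlan_id": 100, "vlan_name": "USERS"}

config = vlan_template.substitute(intent)
print(config)  # prints two lines: "vlan 100" and " name USERS"
```

Jinja2 adds loops, conditionals, and filters on top of this basic idea, which is why it is so widely used for network configuration templating.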
Build
Code is packaged up in builds as part of the Continuous
Integration (CI) portion of CI/CD. Build often implies the
creation of a delivery vehicle for the code—in the case of pyATS
this could be a pyATS job. pyATS jobs can also be further
packaged up into Docker container images as part of the build
process. Builds are automated using tools in the CI/CD process.
In DevOps builds are typically packaged software but in
NetDevOps builds represent intended configurations, templated
configurations, connectivity, configuration, and integration
tests, automated documentation, and other network-centric
code in the form of pyATS jobs.
Test
As part of the CI/CD process, after we have a functional build
and individual pieces of code tested using TDD, a larger testing
process occurs in NetDevOps. Our build is tested, our full set of
tests are executed, and larger, complete end-to-end tests are
performed. The goal is to identify bugs or flaws in the code as
early as possible. Linting, programmatically checking code for
syntax or stylistic errors, is also performed as part of the test
phase, improving the quality of the code. Passed tests are also
part of the version and source control process, acting as a gate
in the approval process used to merge code into the code base. Human
approvals and quality assurance are also performed in the test
phase prior to releasing code.
Release
Testing for infrastructure as code often includes a Continuous
Delivery step of CI/CD where the build is released to a virtual or
physical, non-production, environment. This could be a mocked
up smaller scale representation of the production network in a
network simulation platform such as Cisco Modeling Labs
(CML) or physical lab or pre-production environment. The build
is delivered to this environment where more comprehensive
integration testing can occur. Everything from connectivity to
configuration and network state can be tested and bugs
identified. In the event of failed tests at this phase the process
returns to the planning, coding, and building phases of
NetDevOps allowing for high quality, non-disruptive, perfected
builds before moving to the release, or Continuous Deployment,
phase of the process. After passing all tests in the pre-
production environments the candidate release is moved into
the deployment phase.
Deploy
DevOps deployments are software in nature, deploying the
latest version of code to systems and users. In NetDevOps
deployments, the Continuous Deployment portion of CI/CD
pushes configuration changes to the production network. This might
not be immediate as in the case of software as the impact on
networks compared to the impact on software is much greater.
Change management approvals and release scheduling are
included in network deployments and the actual release will
need to be automatically triggered during the approved change
windows. Automated deployments run the pyATS job within the
Docker container resulting in changes being pushed to the
network devices. pyATS tests from the previous step are
executed against the production environment to confirm and
validate the change was successful and the intended
configurations integrated without impact or network
degradation. Using infrastructure as code and the CI/CD
process, should the pyATS tests fail, a rollback can be
triggered, returning the network to its previously known good
configuration state.
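The deploy-validate-rollback flow described above can be sketched as follows; deploy, run_tests, and rollback are stubs standing in for the real pyATS jobs and CI/CD steps:

```python
# Sketch of the deploy/validate/rollback logic; all functions are
# hypothetical stand-ins for pyATS- and pipeline-driven steps.

def deploy(config, state):
    state["previous"] = state.get("running")
    state["running"] = config

def run_tests(state):
    # Stand-in validation: the real step would execute pyATS tests
    # against the production network after the change.
    return state["running"] is not None and "invalid" not in state["running"]

def rollback(state):
    state["running"] = state["previous"]

def change_window(candidate, state):
    """Push a candidate config; roll back if post-change tests fail."""
    deploy(candidate, state)
    if not run_tests(state):
        rollback(state)
        return "rolled back"
    return "deployed"

state = {"running": "known good config", "previous": None}
print(change_window("invalid config", state))  # tests fail, so we roll back
print(state["running"])                        # back to the known good config
```

The design choice worth noting is that rollback is automatic and driven by test results, not by a human noticing degradation after the fact.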
Operate
The Ops portion of NetDevOps takes over and standard network
operations continues to support the network including the
newly released configurations and features. The reduction of
silos and the collaboration between developers and operators
provides immediate and continuous feedback validating the
network performance or recently released features and
configuration are working as intended. There is no hand-off
here as under the waterfall methodology; in fact, the operations
team have been integrated with the development portions of
this continuous cycle. Developers are also able to provide direct
support and handle escalations from operations ingraining
themselves in the operations of the network. In a “brownfield”,
or existing network, the NetDevOps cycle may actually start at
the operations phase where developers and operators
collaborate to tackle the automation of an existing network.
Systems previously deployed manually under waterfall are
analyzed by developers and operators and the configurations
and state of the network are transformed into pyATS-driven,
automated, tested, intent-based networks. The real-world
experience from operations is used to provide developers the
ability to start the planning phase of NetDevOps and the cycle
starts.
Monitor
Traditional network management systems (NMS), syslog events,
SNMP events, and, now, pyATS testing results, are all used to
monitor the health and performance of the network. The
impact of the release is monitored as well as the overall state of
the network and metrics are used to provide feedback to the
NetDevOps team to start the planning phase for the next
release, either to remediate flaws from the previous release, or
to add additional enhancements, configurations, tests, or
capacity. This process is followed in an endless cycle of
continuous improvement following a version and source-
controlled CI/CD pipeline driven by pyATS.
Additional Benefits of NetDevOps
The NetDevOps lifecycle itself will bring many benefits and
advancements in your journey towards test-driven automation.
Some of these benefits will be technologically based and others
cultural. The primary benefit is the elimination of the silos
between developers and operators which fosters an open and
collaborative culture. Let’s explore some ancillary benefits to
NetDevOps.
Single Source of Truth
One of the major benefits, in addition to the Agile methodology,
automated CI/CD pipeline, and test-driven development, of
NetDevOps is the creation of a source of truth. Legacy networks
are plagued by the lack of a central authoritative source of truth
– what is the intended configuration and state of the network?
Is it the collection of running-configurations of each device in
the network? Is it offline in a spreadsheet? Is it in an engineer
or operator’s head? NetDevOps creates version and source
controlled intent files, templated configurations, tests and test
results, and the mechanism (the CI/CD pipeline) to develop and
release changes automatically. Configurations can also be
stored and managed in a network source of truth platform,
such as NetBox or Nautobot. Occasionally, under degraded
circumstances, human operators may need to manually
intervene and make changes directly at the CLI of some devices
to restore critical network connectivity, which can lead to
configuration drift and come into conflict with the source of
truth. Immediately after remediating the network manually
these changes need to be reflected in the code base, new intent
and templates generated, and new or existing tests created and
modified. Adoption of NetDevOps will reduce these “priority
one” events over time, and less and less human intervention will
be required.
Intent-Based Configuration
A major component of the single source of truth is the intent:
what the network engineers, developers, and operators agree
should be the configuration and state of the network to fulfill the
business requirements and establish a healthy, secure,
redundant, resilient, high-performance network. Using pyATS,
intent can be reflected in easy, human readable, YAML files
which are not concerned with a device’s OS specific
configuration and abstract away the complexity of the working
configuration. These intent files can be sourced from either
templates or API calls to a source of truth that handles the
generation of working device OS configuration code. The
“compiled” composite output provided by pyATS is a working,
valid, OS-specific configuration. pyATS testing can then be used
to validate and confirm the actual configuration matches the
intended configuration. Configuration drift is eliminated as is
human intervention at the command-line. What should be the
configuration of a device is always known and can be
referenced offline using a single source of truth. Version and
source control is applied to the single source of truth, intended
configuration files, and templates used to generate the code.
Version and Source Control
As an infrastructure as code approach is adopted, version and
source control become vital to a successful NetDevOps culture.
Working code, the known good intent, templates, and tests, are
protected in a Git repository and Git is used to provide the
mechanism to safely develop iterations of the code base. Git
also empowers NetDevOps collaboration allowing individual
members of the team to develop their own code inside branches
as part of the NetDevOps code stage of the cycle. Code is tested
and validated and then merged into the main repository. The
entire lifecycle of a device’s intent, test and test results,
documentation, and configuration is preserved and versions of
the network are available from initial onboarding, day-N, and
end of life. Rolling back to a previous version of known-good
state can be done via the release process. Sometimes known as
GitOps, this process drives the CI/CD pipeline. pyATS jobs, tests,
intent, and templates are all protected by the version and
source control process and only validated code is ever used in
the build, release, deploy stages of NetDevOps.
GitOps
NetDevOps relies heavily on Git and Git repository systems.
GitOps is the process of triggering the CI/CD pipelines when Git
branches are merged into the main branch. NetDevOps
developers make a working branch from the main branch, to
code a new feature or address deficiencies in code iterations,
test their code, and submit a pull request, requesting their
branch be merged into the main branch (the single source of
truth containing only validated, working, intent). Once
approved and completed the pull request kicks off the CI/CD
pipeline and tests, builds, releases, and deployments occur
automatically. Git is much more than just version and source
control; it is a key component of the CI/CD and NetDevOps.
Efficiency
NetDevOps eliminates wasted effort and efficiency is achieved
using automation. Humans reclaim the time previously spent
drafting and configuring changes device-by-device manually.
Everything from tests to documentation to configuration
management is driven by automation and people are
empowered to spend their time on more valuable problem
solving. Quality is built into the process and only validated
changes are released to the production network reinforced by
automated testing at every stage of the lifecycle. In a well-
defined NetDevOps process only the intent, templates, and tests
ever need updating by human beings who rely on automation
tools, version and source control, and CI/CD pipeline to perform
previously manual processes. Networks can be treated as cattle,
not pets, and ‘snowflakes’ are eliminated as the single
source of truth ensures uniformity across the network.
Speed
Automation, often synonymous with enhanced speed and
performance, brings about a paradigm shift in network
operations. Its prowess in exponentially outperforming manual
processes, especially at larger network scales, is unequivocal.
While a human operator is still launching their terminal client,
logging in, authenticating, and navigating the CLI, most
automation tools have already accomplished their tasks. The
velocity of automation transcends just the configuration
changes or network state captures; it significantly expedites the
testing phases preceding and following these changes. Network
operators seldom find the task of adding a few lines of
configuration to a network device or multiple connected
devices time-consuming; rather, the bulk of their time is
invested in validating the impact of these changes. They need to
ensure the desired outcomes are achieved without introducing
any unintended repercussions that might degrade the network
service.
Incorporating the NetDevOps approach alongside tools like
pyATS can dramatically shrink the change windows from days
or hours to mere minutes or seconds. This transformation
propels networks from being fragile and static, where changes
are a harbinger of potential problems, to becoming agile
entities. In these agile networks, frequent alterations can be
executed not only swiftly but with a high degree of quality.
Yet, the velocity afforded by automation carries a double-edged
sword. It undeniably speeds up network operations, but
concurrently, it has the potential to propagate errors or induce
network issues at an equally accelerated pace. As the adage
goes, "With great power comes great responsibility!" The
discussion hitherto has skirted around a critical aspect—RISK.
The rapidity of automation can swiftly escalate into a network
debacle if not wielded judiciously.
A robust risk mitigation strategy is indispensable to harness the
full potential of automation while averting the pitfalls.
Employing comprehensive unit and system tests both pre- and
post-implementation is a prudent practice. These tests serve as
a safeguard, ensuring that the automation scripts are
functioning as intended and the network remains resilient post
changes. By meticulously managing the risks through
rigorous testing, automation transitions from being a potential
liability to a formidable asset in network operations. This
balanced approach empowers network operators to exploit the
efficiencies of automation via NetDevOps and pyATS, while
keeping the associated risks at bay. Through a disciplined
execution of testing protocols, automation in network
operations transcends from being merely a tool for speed, to a
well-oiled machinery driving speed, quality, and reliability in
an ever-evolving network landscape.
Agility
The phrase "The network has long been the bottleneck in
responding to ever increasing demands from the business"
encapsulates the challenges faced by traditional network
infrastructures in keeping pace with the rapid evolution of
business needs. As businesses burgeon and diversify, the
demand for more robust, scalable, and agile network systems
escalates. However, legacy networks, often encumbered by
manual configurations, scalability limitations, and slower
adaptation to new technologies, struggle to meet these
burgeoning demands swiftly. This lag in network adaptability
hampers operational efficiency and the timely execution of
business strategies, thus acting as a bottleneck in fulfilling the
ever-increasing business requisites. Applications, servers
(compute and storage), and security, having adopted DevOps,
are rapidly made available while the organization waits for
network services to enable these new capabilities. NetDevOps
allows the network to respond rapidly to business demands by
working with network infrastructure as code. Again, networks
can become agile, not fragile. NetDevOps teams simply need to update
their intent, develop new templates and validation tests, and
perform the build, deploy, release stages iteratively. The
business no longer waits possibly days or weeks for the
network teams to respond manually in a waterfall process.
Distributed, collaborative teams, can rapidly develop and
release the changes required to respond to business demands
using the NetDevOps approach.
Quality
Above speed and agility, the number one driver for NetDevOps
is the quality of solutions. According to a recent Uptime Institute
survey, “How to avoid outages: Try harder!,” 70-75% of data
center failures are caused by human error, producing a chain
effect of downtime. Additionally, more than 30% of IT services
and data center operators experience downtime or severe
degradation of service. 10% of the survey respondents reported
that their most recent incident cost them more than $1 million.
Elimination of human errors using NetDevOps methodology
and automation tools like pyATS is the number one driving
factor for the adoption of infrastructure as code solutions.
While moving faster and with more agility are definitely
beneficial to the business, the reduction of human errors, and
thus network outages, is paramount to a successful NetDevOps
implementation.
Comparing Network Automation Tools
Along with culture and software development lifecycle
methodology changes NetDevOps brings new tools to the
network engineer’s toolkit. For forty years most engineers have
had a few simple tools, a terminal client and a text editor, to
perform their day-to-day tasks to configure, validate, document,
and test their networks. Over the past five years, an explosion
of new tools has emerged dramatically simplifying, and at the
same time complicating, the role of a network engineer:
Agent-Based Tools: Agent-based tools require software to be
installed onto the network device acting as an agent for remote
commands. Tools like Puppet, Chef, and SaltStack are examples
of popular network automation tools that require an agent. A
centralized controller communicates using an agent protocol to
the agents deployed into the network devices. Agent-based tools
have lower bandwidth requirements, are perceived as more
secure, and have a central point of management with the trade-
off of performance, cost, and deployment times.
Agentless Tools: In contrast with agent-based tools, agentless
tools use protocols like SSH or HTTPS from distributed systems
to perform their automation tasks. Nothing needs to be installed
on the network devices to enable agentless automation tools.
There is no centralized controller and typically costs and
complexity are drastically reduced. Ansible has become an
extremely popular network automation tool partly because it is
agentless. Performance, costs, and deployment time are all
benefits of agentless network automation tools. pyATS is an
agentless tool requiring only SSH, Telnet, or HTTP / HTTPS
access to a network device. pyATS testing can even be
performed against offline JSON files without any connection to
the device at all.
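A sketch of that offline testing idea, using the standard library's json module (the JSON here is a hand-written stand-in for saved pyATS parser output, so no device connection is needed):

```python
import json

# Hypothetical saved parser output, as it might be stored on disk
# from a previous run against a live device.
saved = json.loads("""
{
  "interface": {
    "GigabitEthernet1": {"status": "up", "protocol": "up"},
    "GigabitEthernet2": {"status": "up", "protocol": "down"}
  }
}
""")

def down_interfaces(parsed):
    """Return interface names whose line protocol is not up."""
    return [
        name
        for name, attrs in parsed["interface"].items()
        if attrs["protocol"] != "up"
    ]

print(down_interfaces(saved))  # -> ['GigabitEthernet2']
```

Because the test logic operates on structured data rather than a live session, the same checks can run against a device, a lab, or an archived snapshot.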
The Modern Network Engineer Toolkit
In addition to network automation tools, agent or agentless, the
modern network engineer’s toolkit includes many additional
tools and technologies to perform NetDevOps tasks. Gone are
the days of having a simple terminal client and text editor;
network engineers must embrace the growing landscape of
tools that simplify their roles and increase quality, time to
delivery, and agility, approaching problem solving as software
developers do. The sections that follow look at the various
tools and technologies that should be part of every modern
network engineer’s toolkit.
Integrated Development Environment
Infrastructure as code implicitly requires NetDevOps to write
code. While a simple text editor could be used, an integrated
development environment (IDE) will help network engineers
write quality code with a superior experience. In this book we
will be using Visual Studio Code (VS Code) as the IDE of choice;
however, there are many alternatives available.
“Old School”
Historically simple text editors like Notepad were used to draft
configuration changes and to consume network state from the
command-line offline. The historical text editor is quite limited
in its capabilities and functionality. With the advent of new
structured data types such as JavaScript Object Notation (JSON)
and YAML Ain’t Markup Language (YAML), as well as having to
write code in software languages like Python and JavaScript,
using a plain text editor leads to poor quality code with syntax
and indentation errors (particularly prone in YAML and
Python) that will not compile or execute properly. Users are
forced to manually detect these flaws often using time
consuming trial and error approaches. Classic text editors are
also not aware of version and source control and cannot preview
rendered output such as Markdown or Comma Separated
Values (CSV) files, which require another tool to view in their
rendered state. Working directly with “raw” text is the only
capability of a legacy text editor.
“New School”
IDEs such as VS Code take text editing and writing
infrastructure as code to the next level. Built-in linting and
syntax and code quality checks provide immediate feedback: the
IDE proactively and automatically helps the author create
quality code. Malformed code with indentation errors is highlighted
and underlined in red directly telling the code author the code
has problems and will not compile. VS Code is highly extensible
with thousands of extensions to enhance the experience of
working with, for example, Python, adding linting for .py files.
The editor can be split into vertical and horizontal panels
allowing for direct side-by-side comparison and the ability to
work in multiple locations of the same file. VS Code also
provides previews that render raw text, such as Markdown or
CSV, directly in the editor, providing developers the ability to
see how their code renders in its final state. One of the most
important capabilities of VS Code is that it integrates with Git
version and source control. Git commands are abstracted from
the developer who can use point-and-click capabilities to work
with the version and source control system. VS Code also
integrates with Windows Subsystem for Linux (WSL) and
provides various terminals and shells including Bourne Again
Shell (bash), Windows terminal, and Ubuntu Linux shell.
Remote SSH capabilities, allowing the IDE to connect to remote
devices, can be added as an extension. Extensions for Docker
and Kubernetes exist, making it easier to work with these
platforms from within the visually integrated IDE. Regardless
of whether the developer uses VS Code or another platform, an
IDE is critical
and foundational to adopting NetDevOps.
Git
Git is a free, open source, and distributed version control
system created by Linus Torvalds in 2005 for development of
the Linux kernel. Consider Git the glue that holds the
NetDevOps lifecycle together tracking all changes to files using
commits. These tracked changes are included in the history of
all Git repositories and changes can be rolled back to any point-
in-time. A distributed version control system, the full repository
is cloned to a developer’s local workstation. Git is very
lightweight, portable, and integrates with VS Code. Version and
source control is a key foundational element of automating a
network with NetDevOps. Easy to learn yet extremely powerful,
Git version controls the lifecycle of pyATS jobs, tests, intent, and
template code. Git commit history makes it easy to understand
exactly what changed, under what branch, and by which
developer. Artifacts can also be rolled back to a previous point-
in-time using the commit history.
GitHub
GitHub is the largest collection of code, in the form of Git
repositories, on the Internet. GitHub, the online central
repository, should not be confused with Git, the version and
source control software. GitHub offers Internet hosting for
version and source control using Git. Both public and private
repositories are available, where private repositories require
access tokens to contribute the code. Repositories can be cloned
from GitHub locally. GitHub provides mechanisms to create
branches from the main branch and the controls to merge code
from branches into the main branch. The main branch in
NetDevOps should be considered working, valid, tested, code
representing the single source of truth. Issues with the code can
be tracked and addressed in GitHub. GitHub has many free and
paid options for CI/CD such as GitHub Actions.
GitLab
GitLab is a free, open-source, alternative to GitHub that can be
hosted privately. GitLab provides CI/CD capabilities and can be
used as a central, web-based, Git repository system. GitLab
offers functionality to collaboratively plan, build, secure, and
deploy software as a complete DevOps platform. GitLab can be
hosted on-premises or in the cloud. GitLab comes with a built-in
wiki, issue-tracking, IDE, and CI/CD pipeline features. GitLab
was originally written in Ruby but has since migrated to Go,
and offers extremely high performance as a Git repository
platform.
Structured Data
Networks historically exposed only raw, unstructured
command-line output, which limited programmability and
automation. Developers had to use regular expressions (RegEx)
to tediously transform CLI output into more
programmability-friendly structures. Structured data
such as JavaScript Object Notation is extremely easy to work
with using programming languages like Python. One major
feature of pyATS is the ability to model and parse commands
into structured JSON, providing an easy path to
programmability and automation.
JavaScript Object Notation (JSON)
JSON is a lightweight data-interchange format that is easy for
humans to read and write and for machines to parse and
generate. Based on a subset of the JavaScript programming
language, JSON is programming-language independent. JSON is
built on two structures: collections of key-value pairs, known as
objects, and ordered lists of values, known as arrays. An object
is an unordered set of key-value pairs surrounded by curly
braces; each key is followed by a colon, and key-value pairs are
separated by commas. One of the major features and benefits of
pyATS is that unstructured Cisco CLI output can be transformed
into JSON using either the pyATS learn modules or the parse
libraries, either from the CLI or as part of pyATS jobs. Parsers
are covered in detail in Chapter 5. Interfaces could be
represented by the JSON dictionary of objects demonstrated in
Example 1-1.
Example 1-1 show ip interface brief as JSON
{
"interface": {
"GigabitEthernet1": {
"interface_is_ok": "YES",
"ip_address": "10.10.20.175",
"method": "TFTP",
"protocol": "up",
"status": "up"
},
"GigabitEthernet2": {
"interface_is_ok": "YES",
"ip_address": "172.16.252.21",
"method": "TFTP",
"protocol": "up",
"status": "up"
}
}
}
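Because the output is structured, Python’s standard json module can consume it directly. The following minimal sketch, with the Example 1-1 data reproduced inline and abbreviated to one interface, turns the JSON text into a dictionary and iterates over the interfaces:

```python
import json

# JSON output as shown in Example 1-1 (abbreviated to one interface).
raw = """
{
    "interface": {
        "GigabitEthernet1": {
            "interface_is_ok": "YES",
            "ip_address": "10.10.20.175",
            "method": "TFTP",
            "protocol": "up",
            "status": "up"
        }
    }
}
"""

data = json.loads(raw)  # JSON text -> Python dictionary

# Structured data makes lookups trivial compared to RegEx parsing.
for name, details in data["interface"].items():
    print(f"{name}: {details['ip_address']} ({details['status']})")
```

Compare this dictionary lookup with the regular expressions that would be needed to pull the same fields out of raw `show ip interface brief` text.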
eXtensible Markup Language (XML)
Extensible Markup Language uses tags, similar to Hypertext
Markup Language (HTML), to create structured data. XML
encodes data that is both human and machine readable and is
used for both storing and transmitting structured data. XML
was designed to be simple, general, and usable across the
Internet. Tags in XML represent the data structure and can also
contain metadata about the structure. The same JSON interface
example looks like Example 1-2 in XML:
Example 1-2 show ip interface brief as XML
<?xml version="1.0" encoding="UTF-8" ?>
<interface>
<GigabitEthernet1>
<interface_is_ok>YES</interface_is_ok>
<ip_address>10.10.20.175</ip_address>
<method>TFTP</method>
<protocol>up</protocol>
<status>up</status>
</GigabitEthernet1>
<GigabitEthernet2>
<interface_is_ok>YES</interface_is_ok>
<ip_address>172.16.252.21</ip_address>
<method>TFTP</method>
<protocol>up</protocol>
<status>up</status>
</GigabitEthernet2>
</interface>
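The same structure can be parsed with Python’s standard xml.etree.ElementTree module. A minimal sketch, with the Example 1-2 data abbreviated inline to one interface:

```python
import xml.etree.ElementTree as ET

# XML output as shown in Example 1-2 (abbreviated to one interface).
raw = """
<interface>
    <GigabitEthernet1>
        <ip_address>10.10.20.175</ip_address>
        <status>up</status>
    </GigabitEthernet1>
</interface>
"""

root = ET.fromstring(raw)  # parse XML text into an element tree

# Each child tag under <interface> is one interface entry.
for intf in root:
    ip = intf.findtext("ip_address")
    status = intf.findtext("status")
    print(f"{intf.tag}: {ip} ({status})")
```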
YAML Ain’t Markup Language (YAML)
YAML Ain’t Markup Language (YAML), a recursive acronym, is a
superset of JSON; all valid JSON files can also be parsed as
YAML. YAML is another human- and machine-readable data
serialization language, but with minimal syntax. Like Python,
YAML uses whitespace indentation to indicate nesting. Much
like JSON, YAML uses key-value pair objects, with a colon
separating each key from its paired value, and also supports
lists, or arrays, using hyphens to indicate a list of objects. In the
context of pyATS, testbeds (Chapter 4), Clean (Chapter 15),
and Blitz (Chapter 19) all require YAML of some form.
The interface JSON / XML can be expressed in YAML as shown
in Example 1-3.
Example 1-3 show ip interface brief as YAML
---
interface:
GigabitEthernet1:
interface_is_ok: 'YES'
ip_address: 10.10.20.175
method: TFTP
protocol: up
status: up
GigabitEthernet2:
interface_is_ok: 'YES'
ip_address: 172.16.252.21
method: TFTP
protocol: up
status: up
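As a minimal sketch, the third-party PyYAML package (installed as a pyATS dependency) can load the Example 1-3 data into the same Python dictionary and, because YAML is a superset of JSON, serialize it straight back out as JSON:

```python
import json

import yaml  # third-party PyYAML package, a pyATS dependency

# YAML output as shown in Example 1-3 (abbreviated to one interface).
raw = """
interface:
  GigabitEthernet1:
    interface_is_ok: 'YES'
    ip_address: 10.10.20.175
    status: up
"""

data = yaml.safe_load(raw)  # YAML text -> Python dictionary

# The same structure round-trips cleanly between YAML and JSON.
print(json.dumps(data, indent=2))
```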
YANG
YANG, initially published in October 2010 as Request for
Comment (RFC) 6020 and superseded by RFC 7950 in August
2016, is a hierarchical data modeling language. It is used in
conjunction with network protocols like NETCONF and
RESTCONF to interact with network devices. YANG allows for
the modeling of network state, notifications, remote procedure
calls (RPCs), and configuration data in protocol-independent
XML or JSON formats. To work with YANG models, network
automation tools like pyATS can be employed, which facilitate
the interaction with YANG-modeled data using Python.
Application Programmable Interface (API)
Network engineers are familiar with interfaces, which could be
physical interfaces, console interfaces, or virtual interfaces.
An application programmable interface is exactly that: an
interface on a device that we can program. Some platforms
have NETCONF or RESTCONF APIs, and other platforms have
their own dedicated API, such as the Cisco Nexus NX-API. Like
connecting to a virtual terminal (VTY) interface or console port,
network engineers can connect to an API and interact with a
device. APIs provide the ability to perform CRUD operations:
Create, Read, Update, and Delete, using various methods in the
form of verbs. These verbs include GET (read), POST (create),
PUT (update), PATCH (update), and DELETE (delete). APIs also
return status codes indicating the success or failure of an API
request:
1xx - Informational: transfer protocol-level information.
2xx - Success: the client request was accepted successfully.
3xx - Redirection: the client must take additional action to
complete the request.
4xx - Client error: there is a problem with the request sent by
the client.
5xx - Server error: there is a problem on the server side
processing the request.
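The five status-code classes can be mapped programmatically; a minimal sketch:

```python
def status_class(code: int) -> str:
    """Map an HTTP status code to the class described above."""
    classes = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    # Integer division by 100 yields the leading digit of the code.
    return classes.get(code // 100, "unknown")

print(status_class(200))  # success
print(status_class(301))  # redirection
print(status_class(404))  # client error
```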
Representational State Transfer (REST)
Simple Object Access Protocol (SOAP), used in what was known
as service-oriented architecture (SOA), was the original and
predominant API style from the late 1990s into the early 2000s.
SOAP is a protocol and uses XML and RPCs to interact with web
services. In 2000, Roy Fielding’s dissertation “Architectural
Styles and the Design of Network-based Software Architectures”
introduced and outlined Representational State Transfer
(REST), which is an architectural style and not a full-blown
protocol. There are certain criteria that make an API a REST
API:
A client/server architecture managed through HTTP, relying
on HTTP methods such as GET, POST, and DELETE to perform
operations
Stateless communication:
No client information is stored between requests.
Each request is separate and unrelated to the others.
High performance in component interactions, resulting in
efficiency
Scalability, allowing for larger numbers of components and
interactions
Simplicity in a uniform interface
Resource identification in the request
Server responses in HTML, XML, or JSON, which are not
necessarily the server’s internal representation of state
A layered system that can involve multiple ‘hops’, including
security and load balancing, that are transparent to the client
issuing the request
Today, the term API is most often associated with RESTful APIs,
which have become the standard in both networks and web
applications and typically respond with JSON payloads. pyATS
includes a connection class implementation, REST, that allows
pyATS jobs and scripts to connect to a device via REST using
the topology / YAML format. Chapter 13, “Working with APIs,”
explores pyATS and APIs.
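As a minimal sketch of interacting with a REST API from Python, the standard library can construct a request; the URL and header values below are illustrative placeholders, not a real device endpoint:

```python
import urllib.request

# Build (but do not send) a RESTful GET request. The URL is an
# illustrative placeholder for a device's RESTCONF endpoint.
req = urllib.request.Request(
    "https://device.example.com/restconf/data/interfaces",
    method="GET",
    headers={"Accept": "application/json"},  # ask for a JSON payload
)

print(req.get_method())    # the HTTP verb for this request
print(req.get_full_url())  # the resource being identified
```

Sending the request with urllib.request.urlopen(req) would return a response object carrying one of the status codes described earlier.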
GraphQL
GraphQL is a user-friendly and efficient query language for
APIs that simplifies the process of fetching and manipulating
data. Imagine it as a bridge between your application and a
server, allowing you to precisely request only the data you need
and nothing more. Unlike traditional REST APIs, where each
endpoint corresponds to a fixed set of data, GraphQL enables
you to construct flexible queries, tailoring responses to your
specific requirements. It empowers developers to avoid over-
fetching and under-fetching issues, providing a more
streamlined and performant data retrieval experience. With
GraphQL, you can access multiple resources with a single
request, making it incredibly efficient for modern web and
mobile applications. Its versatility, simplicity, and ability to
adapt to various data sources have made it a popular choice for
building data-driven applications.
cURL
Client URL (cURL) is a CLI tool used to interact with APIs. cURL
is used in command lines or scripts to transfer data specified in
the URL syntax using various network protocols. cURL can be
used to confirm connectivity to a URL or more advanced API
integrations. cURL will default to an HTTP request but can be
used against HTTPS URLs as well. Downloading single or
multiple files, inspecting HTTP headers, following redirects,
transferring files with File Transfer Protocol (FTP), sending
cookies, using proxies, and saving the output to a file are all
supported advanced options of cURL. Example 1-4 shows a
verbose cURL request:
Example 1-4 Verbose cURL request example
$ curl -v cisco.com
* Trying 72.163.4.185:80...
* TCP_NODELAY set
* Connected to cisco.com (72.163.4.185) port 80
> GET / HTTP/1.1
> Host: cisco.com
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< Location: https://cisco.com/
< Connection: close
< Cache-Control: no-cache
< Pragma: no-cache
<
* Closing connection 0
Postman
Postman is a GUI client platform for building, testing, and using
APIs. API interactions can be saved as a reusable request or
organized (grouped) into collections. Postman supports
variables in environments which can be used to automate
aspects of API interactions, such as saving authentication
tokens, keys, and credentials. Pre- and post-request testing can
be performed using JavaScript in Postman which also includes
a console. Working requests can quickly and easily be
transformed into working code in a large variety of
programming languages such as Python, C#, Java, and
JavaScript. Developers working with APIs often start in
Postman to develop working requests and then migrate into
their programming language of choice. Postman is beginner
friendly and provides thousands of open and free working API
examples in public collections developers can download and
install as working, pre-formed, requests into their client.
Python
Python is a free, open-source, high-level, interpreted,
general-purpose programming language. As the name implies,
pyATS is written in Python. Python was released in February
1991 and has become popular for writing network automation
and infrastructure as code because of its low barrier to entry,
performance, and ability to work with REST APIs. pyATS allows
CLI commands and their output to be transformed into REST-like
APIs. JSON is easily written and parsed by Python, making it an
excellent candidate programming language for interacting with
structured data. Python is described as a “batteries-included”
programming language, a phrase stemming from the
comprehensive standard library of functionality provided to
developers.
External libraries, known as packages, can be imported into
Python code extending the base capabilities of the language.
Object-oriented with an emphasis on code readability and
simplicity, Python is an excellent language for beginners and
advanced developers alike. Python can be run interactively,
using the Python command line, or non-interactively, using .py
files. Non-interactive pyATS has a job file, which acts as a
control file, and the actual .py files where the testing and
Python operations are performed. Python code uses significant
indentation, where whitespace indicates the logical flow and
nesting of the code. In 1999, software engineer Tim Peters
wrote a set of nineteen guiding principles for developers known
as “The Zen of Python.”
If you enter the interactive Python CLI and enter import this,
Python will print “The Zen of Python” to the screen, as
demonstrated in Example 1-5!
Example 1-5 The Zen of Python
C:\>python
Python 3.8.9 (default, Apr 13 2021, 15:54:59) on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
>>>
pip
The package installer for Python, or pip, is used to install
packages from the Python Package Index (PyPI) and other
indexes. Many popular packages are stored on pypi.org, where
the pip install command can install a package locally. Once a
package is installed locally, its functionality can be imported
into the interactive Python CLI or non-interactive .py files and
then called by the code. pip comes preinstalled with Python as
of versions 2.7.9 (Python 2) and 3.4 (Python 3, as pip3). pyATS
is installed using pip.
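Once a package has been installed with pip, its presence and version can be confirmed from Python itself. A minimal sketch using the standard library, shown here with pip itself since pyATS may not be installed in every environment:

```python
from importlib.metadata import PackageNotFoundError, version
from typing import Optional


def installed_version(package: str) -> Optional[str]:
    """Return the installed version of a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


# pip ships with modern Python installations, so a version is expected.
print(installed_version("pip"))
# A name that was never installed returns None instead of raising.
print(installed_version("not-a-real-package-name"))
```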
Software Development Kits (SDKs)
Some platforms provide a Python (or other programming
language) Software Development Kit (SDK) users can import
into their interactive or non-interactive Python code. Cisco
Application Centric Infrastructure (ACI) Application Policy
Infrastructure Controller (APIC) has a Python SDK called
“Cobra” that provides an abstraction in the form of Python
objects for every element in the ACI Management Information
Tree (MIT). Meraki has a similar Python SDK. pyATS itself is a
Python SDK providing testing, configuration management, REST
APIs, differential capabilities, and much more, to users as an
importable package for Python.
Virtual Environment
To avoid conflicts between package versions, and sometimes to
set up completely different versions of Python for development,
the Python virtual environment module can be used to create
fully isolated, self-contained directory trees that contain a
Python installation and additional packages. By default, a
virtual environment uses the version of Python that created it;
a specific version of Python can be selected by creating the
virtual environment with that interpreter. Virtual environments
are first created using the venv module and then activated
using a script provided by venv. Virtual environments allow
developers to create isolated environments for new projects
with specific Python or package dependencies, or to test
existing code with new versions of Python or dependency
packages without damaging existing working environments. It
is strongly recommended to run pyATS inside a virtual
environment.
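A quick way to confirm that code is running inside a virtual environment is to compare the interpreter prefixes; a minimal sketch:

```python
import sys


def in_virtual_environment() -> bool:
    # Inside a venv, sys.prefix points at the environment directory
    # while sys.base_prefix still points at the base interpreter.
    return sys.prefix != sys.base_prefix


print(in_virtual_environment())
```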
Virtual Machines
A virtual machine, or VM, is a computer that runs as a logical
construct on a hypervisor instead of directly on physical
components. There are two types of hypervisors: Type 1, or
bare metal, a software layer installed directly on top of a
physical server that uses the underlying hardware, and Type 2,
or hosted, installed as software on top of an operating system
such as Windows, Linux, or macOS. Virtual machines can be
hosted in private, public, or hybrid clouds and are a vital part
of NetDevOps, where your infrastructure as code often resides
in virtual machines to be executed.
Containers
Contrary to a virtual machine, containers do not require a
hypervisor. Containers are standard units of packaged software
that contain all of the required dependencies making an
application portable for multiple environments. A container
image is the immutable static file that includes the executable
code so it can run in an isolated process. Docker is the most
popular platform for developing containers and a Docker
environment can be set up on top of a virtual or physical
machine to run containers. pyATS provides methodology to
create pyATS containers. pyATS itself can be run as a container.
xPresso, covered in depth in Chapter 20, “xPresso,” is a pyATS
scheduling platform that can also schedule and execute pyATS
containers.
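As an illustrative sketch of how a pyATS container image might be described (the base image tag, job filename, and commands are assumptions, not an official recipe):

```dockerfile
# Illustrative only: base image tag and job path are assumptions.
FROM python:3.10-slim

# Install pyATS and the pyATS library (Genie) into the image.
RUN pip install --no-cache-dir "pyats[library]"

# Copy a pyATS job into the image and run it by default.
COPY job.py /pyats/job.py
CMD ["pyats", "run", "job", "/pyats/job.py"]
```

Because the image bundles Python, pyATS, and the job itself, the same test runs identically on a laptop, a server, or a CI/CD runner.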
Kubernetes
With the proliferation of containerized applications, a need to
orchestrate and manage these containers at scale emerged.
Kubernetes, also known as K8s, is an open-source system for
managing containerized applications. Kubernetes was
originally developed by Google but is now maintained by the
Cloud Native Computing Foundation (CNCF). Clusters of hosts
can be used by Kubernetes to orchestrate containers, as pods,
which can dynamically scale based on demand. Written in the
Go programming language, Kubernetes enables infrastructure
as a service (IaaS). Originally, Kubernetes interfaced with
Docker only through the “dockershim” component, which was
deprecated and removed as of May 2022 (Kubernetes 1.24) in
favor of runtimes that implement the Container Runtime
Interface (CRI), such as containerd.
CI/CD
Continuous Integration and Continuous Deployment (CI/CD) are
cornerstone practices in modern software development, and
they find a significant place in DevOps culture. While
traditionally associated with software development, these
principles are increasingly being adopted in the realm of
network infrastructure management to foster more reliable
and efficient operational workflows.
Continuous Integration (CI): In a network-centric
application, CI encompasses a practice where network
configurations and scripts are frequently merged into a central
repository. Following the merge, automated builds and tests are
initiated to validate the new changes against the existing
network configurations and operational states. The crux of CI in
network management is to identify and rectify conflicts or bugs
at an early stage, ensuring that the changes do not adversely
impact the network’s functionality. By integrating regularly,
teams can swiftly detect and resolve errors, making it easier to
maintain a stable network state.
Continuous Deployment (CD): Transitioning towards
network operations, CD encapsulates an automated process of
deploying validated configurations to the live network
environment. Automated testing, leveraging tools like pyATS,
validates the correctness and stability of configuration changes,
ensuring they are ready for autonomous deployment to the
production network. The objective is to maintain a network
configuration that is always deployment-ready, facilitating a
swift response to business or operational demands.
In the context of pyATS, CI/CD practices extend to testing
network configuration files and evaluating outcomes in the
infrastructure, such as verifying pre-change network states,
interface statuses, or adjacency statuses. The CI/CD pipeline can
automate the validation of network states before and after the
deployment of configuration changes, ensuring the network
operates as intended.
Incorporating CI/CD into network management establishes a
consistent and automated pathway to build, package, and test
network configurations. This uniformity in the integration and
deployment process augments the ability to catch bugs and
errors early on, thus reducing the debugging time and
significantly enhancing the efficiency of network operations.
The automation and continuous monitoring introduced by
CI/CD span the lifecycle of network configurations, from
integration and testing phases to delivery and deployment,
mirroring the benefits seen in software development realms.
This not only elevates the productivity of network operations
teams but also accelerates the rate at which reliable, high-
quality network services are delivered and maintained.
Jenkins
Jenkins is an open-source CI/CD tool enabling developers to
build, test, and deploy software. Flexible and complex
workflows can be created using Jenkins. A Git plugin is
available for Jenkins that not only integrates Jenkins with Git
repositories but can also trigger workflows when code is
merged into a repository. Jenkins testing can be used as part
of continuous integration to test and validate builds. Jenkins is
written in the Java programming language. A Jenkinsfile, a text
file that contains the definition of a Jenkins pipeline, is checked
into source control and is used to define CI/CD in Jenkins.
Jenkinsfiles can be declarative (introduced in Jenkins Pipeline
2.5) or scripted, and are broken up into stages, such as build,
test, and deploy. Declarative pipelines break the pipeline into
individual stages, which can contain multiple steps, while
scripted pipelines use the Jenkins Pipeline domain-specific
language within stages without the need for steps.
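A minimal declarative Jenkinsfile sketch with build, test, and deploy stages might look like the following; the shell commands and job filename are illustrative assumptions:

```groovy
// Illustrative declarative pipeline; commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'pip install "pyats[library]"' }
        }
        stage('Test') {
            steps { sh 'pyats run job network_tests_job.py' }
        }
        stage('Deploy') {
            steps { sh 'python deploy_configs.py' }
        }
    }
}
```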
GitLab CI/CD
GitLab CI/CD is a tool built into GitLab, a web-based DevOps
lifecycle tool, that offers a continuous integration and
deployment system to automate the pipeline for projects. Here’s
a brief explanation of how it works:
GitLab CI/CD Pipeline: The pipeline is the core component of
GitLab CI/CD and represents the entire process, which is
divided into multiple jobs. Each job has a specific role and
responsibility in the pipeline. Jobs are organized into stages,
and these stages execute in a particular order. Common stages
include build, test, and deploy, but you can define as many
stages as your project requires.
GitLab Runner: GitLab Runner is an application that works
with GitLab CI/CD to run jobs in your pipeline. It’s responsible
for receiving from GitLab the instructions for the jobs,
executing them, and sending the results back to GitLab.
Runners can be installed on various types of operating systems
and support multiple platforms. Runners can be installed closer
to devices under management for remote code execution and
can be deployed in a secure manner.
.gitlab-ci.yml: This is a YAML file that you create in your
project’s root. This file defines the structure and order of the
pipelines and includes the definitions of the pipeline stages and
the jobs to be executed. GitLab CI/CD looks for this file in your
repository and uses its instructions to execute jobs.
When you commit and push the code to the repository, GitLab
checks for the .gitlab-ci.yml file. If it finds the file, it triggers the
CI/CD pipeline according to the instructions defined in the file.
The jobs are then executed by the runners in the order defined
by the stages. GitLab CI/CD is a powerful tool for automating the
testing and deployment of your code. It’s highly flexible and
configurable, allowing you to tailor your pipeline to your
project’s specific needs. For example, if someone is working on
a branch rather than the mainline code, they may want to
execute a subset of tests instead of the full tests that would run
when the code is merged into the main branch.
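A minimal .gitlab-ci.yml sketch with two stages might look like the following; the job names, image tag, and commands are illustrative assumptions:

```yaml
# Illustrative pipeline definition; names and commands are placeholders.
stages:
  - build
  - test

install_pyats:
  stage: build
  image: python:3.10-slim
  script:
    - pip install "pyats[library]"

validate_network:
  stage: test
  image: python:3.10-slim
  script:
    - pip install "pyats[library]"
    - pyats run job network_tests_job.py
```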
GitHub Actions
GitHub Actions is a CI/CD system built into GitHub, the popular
web-based hosting service for Git repositories. It allows you to
automate, customize, and execute your software development
workflows right in your repository. Here’s a brief explanation of
how it works:
Workflow: In GitHub Actions, a workflow is an automated
procedure that you add to your repository. Workflows are made
up of one or more jobs and can be scheduled or triggered by
specific events. The workflow is defined in a YAML file
(main.yml or any name you prefer) in the .github/workflows
directory of your repository.
Jobs: Jobs are sets of steps that execute on the same runner. By
default, a workflow with multiple jobs runs those jobs in
parallel. You can also configure jobs to depend on each other.
Steps: Steps are individual tasks that can run commands in a
job. A step can be either an action or a shell command. Each
step in a job executes on the same runner, allowing the steps in
a job to share data with each other.
Actions: Actions are the smallest portable building block of a
workflow. You can create your own actions, or use and
customize actions shared by the GitHub community. Actions are
reusable units of code that can be used across different
workflows.
Events: Workflows are triggered by events. An event can be
anything from a push to the repository, a pull request, a fork, a
release, a manual trigger by a user, and more.
Runners: Runners are servers that have the GitHub Actions
runner application installed. When you use a GitHub-hosted
runner, machine maintenance and upgrades are taken care of
for you. You can also host your own runners to run jobs on
machines you own or manage.
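Putting these pieces together, a minimal workflow file sketch might look like the following; the workflow name, trigger, and commands are illustrative assumptions:

```yaml
# .github/workflows/network-tests.yml -- illustrative example.
name: Network Tests
on: [push]                   # event that triggers the workflow

jobs:
  test:
    runs-on: ubuntu-latest   # GitHub-hosted runner
    steps:
      - uses: actions/checkout@v4      # reusable community action
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install "pyats[library]"
      - run: pyats run job network_tests_job.py
```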
GitHub Actions provides a powerful, flexible way to automate
nearly any aspect of your development workflow. It’s deeply
integrated with the rest of GitHub, making it a convenient
option for projects already hosted on GitHub.
Drone
Drone is an open-source CI/CD system built on container
technology. It uses a YAML file for configuration and is known
for its simplicity and ease of use. Drone integrates seamlessly
with multiple source code management systems, including
GitHub, GitLab, and Bitbucket. Here’s a brief explanation of
how it works:
Pipeline: In Drone, a pipeline is a series of steps that are
executed in a specific order to implement the CI/CD process.
Each step in a pipeline is executed inside its own Docker
container, providing an isolated and reproducible environment
for each operation.
.drone.yml: This is the configuration file where you define
your pipeline. It’s written in YAML and should be located in the
root of your repository. The .drone.yml file specifies the steps to
be executed, the order in which they should run, and the
conditions under which they should be executed.
Steps: Steps are individual tasks that make up a pipeline. Each
step is executed in its own Docker container and can be used to
build, test, or deploy your application. Steps are defined in the
.drone.yml file and are executed in the order they appear in the
file.
Plugins: Drone has a rich ecosystem of plugins that can be
used to extend its functionality. Plugins in Drone are simply
Docker containers that are designed to perform specific tasks.
For example, there are plugins to publish Docker images,
deploy code to cloud providers, send notifications, and more.
Triggers: Drone supports various types of triggers to start the
execution of a pipeline. The most common trigger is a git push
event, but pipelines can also be triggered manually or on a
schedule.
Runners: Drone uses runners to execute pipeline tasks.
Runners are lightweight, standalone processes that run on the
host machine and execute tasks in Docker containers. Drone
supports various types of runners, including Docker, SSH, and
Kubernetes runners.
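A minimal .drone.yml sketch might look like the following; the pipeline name, image tag, and commands are illustrative assumptions:

```yaml
# Illustrative Drone pipeline; each step runs in its own container.
kind: pipeline
type: docker
name: network-tests

steps:
  - name: install-and-test
    image: python:3.10-slim
    commands:
      - pip install "pyats[library]"
      - pyats run job network_tests_job.py
```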
Drone is a simple, flexible, and powerful CI/CD system that
leverages the power of container technology to provide isolated
and reproducible environments for each step in your pipeline.
Its plugin-based architecture makes it highly extensible and
adaptable to a wide range of use cases.
Summary
Networks have evolved in complexity, size and scale, and
criticality to the successful operations of any business or
enterprise. The legacy waterfall methodologies, limited tools,
and general approach network engineers and operators relied
on have also evolved in recent years. By adopting NetDevOps,
networks can become “agile” not “fragile.” The principles of
“automate-first” have radically changed the way networks are
planned, designed, built, tested, and deployed. Infrastructure as
code, CI/CD, containers, clouds, and programming languages
such as Python have revolutionized networking by making it
more agile, scalable, and reliable. Collaborative culture is the
key to success as the silos and barriers between networking,
developers, and operations are taken down in favor of one team
solving problems together using their individual strengths and
experiences. pyATS is the perfect tool for NetDevOps because it
is useful in all stages of the development lifecycle, from
modeling, developing, testing, and documenting to deploying
configuration to the network at scale with CI/CD integrations.
References
Part 1: Embrace NetDevOps, Say Goodbye to a “Culture of Fear”:
https://blogs.cisco.com/developer/embrace-netdevops-part-1
Part 2: NetDevOps Goes Beyond Infrastructure as Code:
https://blogs.cisco.com/developer/embrace-netdevops-part-2
How to avoid outages: Try harder! - Uptime Institute Blog:
https://journal.uptimeinstitute.com/how-to-avoid-outages-try-
harder/
Manifesto for Agile Software Development:
https://agilemanifesto.org/
Chapter 2. Installing and Upgrading pyATS
pyATS and the pyATS library (Genie), collectively referred to as
pyATS, are both Python libraries that can be installed with the
common Python package manager, pip. Currently, pyATS
supports Python versions 3.7 - 3.10 on the following platforms:
Linux (CentOS, RHEL, Ubuntu, Alpine) and macOS 10.13+.
Windows is not officially supported. However, Windows
provides Windows Subsystem for Linux (WSL), which allows
you to run Linux on Windows. Using the WSL environment on
Windows, you can install pyATS.
Installing pyATS is a two-part process. First, you’ll need to
install the pyATS core framework, followed by the pyATS library
(Genie), which contains all the tools needed for automated
network testing such as parsers, models, and test harness. This
can be accomplished in one, or multiple, commands. In the
following sections, you’ll see how to install and upgrade pyATS
using pip and the pyATS command-line.
Installing pyATS
Upgrading pyATS
Troubleshooting pyATS
Installing pyATS
Before installing pyATS, or any Python package for that matter,
it’s highly recommended to set up a Python virtual
environment. A Python virtual environment allows you to
install Python dependencies in an isolated environment. Python
virtual environments are considered best practice because they
allow for installation of multiple Python packages on the same
host, without worrying about having dependency conflicts.
Dependency conflicts arise when two packages have the same
dependency but require different versions of that dependency.
Another advantage to virtual environments is the ability to try
different versions of the same Python package using different
virtual environments. Once you validate the correct package
version to use, you can save the current version of the locally
installed packages to a requirements.txt file using the pip
freeze command. Python virtual environments create isolation
that avoids dependency conflicts and allows you to easily
manage the proper version of project dependencies.
Remember: If you are using Windows, please make sure to use
a WSL environment, such as Ubuntu on WSL. pyATS does not
support Windows natively.
Setting up a Python Virtual Environment
Python has a module, venv, included in the standard library to
create virtual environments. Example 2-1 shows the command
to create a Python virtual environment using the venv module.
Example 2-1 Creating a Python virtual environment
dan@linux-pc# { python3 | python } -m venv /path/to/venv_directory
Once the virtual environment is created, it needs to be
activated. Example 2-2 shows how to activate the environment
and confirm which Python interpreter is being used. In the
example, the virtual environment was created in a local
directory aptly named .venv.
Example 2-2 Activating and confirming the Python virtual
environment
dan@linux-pc# source .venv/bin/activate
(.venv)dan@linux-pc#
(.venv)dan@linux-pc# which python
~/.venv/bin/python
You can see the virtual environment’s directory name in
parentheses in the command prompt. Once you confirm your
virtual environment has been activated, you’re ready to install
pyATS!
Installing pyATS Packages
Now for the fun part! Let’s install pyATS and the pyATS library
(Genie) using one command. Example 2-3 shows the
recommended method to install both libraries. You may also
install just the pyATS core framework using pip install pyats,
but you will not have the pyATS library installed, which
provides the necessary tools to learn and parse configuration
and operational state data from network devices.
Example 2-3 Installing pyATS and pyATS library (Genie)
(.venv)dan@linux-pc# pip install pyats[library]
This will install all the core pyATS framework and the pyATS
library (Genie) packages. Example 2-4 shows how to confirm
the packages are installed using the pip list command.
Example 2-4 Confirming pyATS and pyATS library (Genie) are
installed
(.venv)dan@linux-pc# pip list
Package Version
---------------------------- --------
aiofiles 23.1.0
aiohttp 3.8.4
aiohttp-swagger 1.0.16
aiosignal 1.3.1
async-lru 2.0.3
async-timeout 4.0.2
attrs 23.1.0
bcrypt 4.0.1
certifi 2023.5.7
cffi 1.15.1
chardet 4.0.0
charset-normalizer 3.2.0
cryptography 41.0.2
dill 0.3.6
distro 1.8.0
frozenlist 1.4.0
genie 23.8
genie.libs.clean 23.8.1
genie.libs.conf 23.8
genie.libs.filetransferutils 23.8
genie.libs.health 23.8
genie.libs.ops 23.8
genie.libs.parser 23.8
genie.libs.sdk 23.8.1
gitdb 4.0.10
GitPython 3.1.32
grpcio 1.56.0
idna 3.4
Jinja2 3.1.2
jsonpickle 3.0.1
junit-xml 1.9
lxml 4.9.3
MarkupSafe 2.1.3
multidict 6.0.4
ncclient 0.6.13
netaddr 0.8.0
packaging 23.1
paramiko 3.2.0
pathspec 0.11.1
pip 22.0.4
prettytable 3.8.0
protobuf 4.23.4
psutil 5.9.5
pyats 23.8
pyats.aereport 23.8
pyats.aetest 23.8
pyats.async 23.8
pyats.connections 23.8
pyats.datastructures 23.8
pyats.easypy 23.8
pyats.kleenex 23.8
pyats.log 23.8
pyats.reporter 23.8
pyats.results 23.8
pyats.tcl 23.8
pyats.topology 23.8
pyats.utils 23.8
pycparser 2.21
pyftpdlib 1.5.7
PyNaCl 1.5.0
python-engineio 3.14.2
python-socketio 4.6.1
PyYAML 6.0.1
requests 2.31.0
ruamel.yaml 0.17.32
ruamel.yaml.clib 0.2.7
setuptools 60.10.0
six 1.16.0
smmap 5.0.0
tftpy 0.8.0
tqdm 4.65.0
typing_extensions 4.7.1
unicon 23.6.1
unicon.plugins 23.6.1
urllib3 2.0.3
wcwidth 0.2.6
wheel 0.40.0
xmltodict 0.13.0
yamllint 1.32.0
yang.connector 23.6
yarl 1.9.2
Now that pyATS has been installed, you have access to the
pyATS command line. Example 2-5 shows the available options
in the pyATS command line using --help or -h.
Example 2-5 pyATS command line options
(.venv)dan@linux-pc# pyats { --help | -h }
Usage:
pyats <command> [options]
Commands:
    clean        runs the provided clean file on devices
    create       create scripts and libraries from templates
    develop      Puts desired pyATS packages into development mode
    diff         Command to diff two snapshots saved to file or directory
    dnac         Command to learn DNAC features and save to file (Prototype)
    learn        Command to learn device features and save to file
    logs         command enabling log archive viewing in local browser
    migrate      utilities for migrating to future versions of pyATS
    parse        Command to parse show commands
    run          runs the provided script and outputs corresponding results
    secret       utilities for working with secret strings
    shell        enter Python shell, loading a pyATS testbed file and/or pickled data
    undevelop    Removes desired pyATS packages from development mode
    validate     utilities that help to validate input files
    version      commands related to version display and manipulation
General Options:
-h, --help Show help
Run 'pyats <command> --help' for more information
A quick command to run to confirm the pyATS version installed
is pyats version check. This command shows the current
versions of your pyATS and pyATS library installations. Example
2-6 shows an example output.
Example 2-6 pyATS command line - version check
(.venv)dan@linux-pc# pyats version check
You are currently running pyATS version: 23.8
Python: 3.9.13 [64bit]
Package Version
---------------------------- -------
genie 23.8
genie.libs.clean 23.8.1
genie.libs.conf 23.8
genie.libs.filetransferutils 23.8
genie.libs.health 23.8
genie.libs.ops 23.8
genie.libs.parser 23.8
genie.libs.sdk 23.8.1
pyats 23.8
pyats.aereport 23.8
pyats.aetest 23.8
pyats.async 23.8
pyats.connections 23.8
pyats.datastructures 23.8
pyats.easypy 23.8
pyats.kleenex 23.8
pyats.log 23.8
pyats.reporter 23.8
pyats.results 23.8
pyats.tcl 23.8
pyats.topology 23.8
pyats.utils 23.8
rest.connector 23.8
unicon 23.8
unicon.plugins 23.8
yang.connector 23.8
Upgrading pyATS
Similar to how pyATS is installed, you can upgrade it, and all its
dependencies, using pip or the pyATS command line. It is
important to upgrade pyATS and the pyATS library (Genie) so
that you always get the latest bug fixes, parsers, and any
platform support that may be added to the pyATS libraries. To
upgrade using pip, you simply need to add the --upgrade flag to
the original pip install command (see Example 2-7). As a
reminder, please make sure your Python virtual environment is
activated.
Example 2-7 Upgrading pyATS Using pip
(.venv)dan@linux-pc# pip install pyats[library] --upgrade
Alternatively, you may use the pyATS command line to upgrade
pyATS. In Example 2-8, pyATS is updated using the pyats
version update command. You’ll notice that pyATS is already at
the latest version, but the output provides a good example of
what you should expect to see.
Example 2-8 Upgrading pyATS Using Command Line
(.venv)dan@linux-pc# pyats version update
Checking your current environment...
The following packages will be removed:
Package Version
---------------------------- -------
genie 23.8
genie.libs.clean 23.8.1
genie.libs.conf 23.8
genie.libs.filetransferutils 23.8
genie.libs.health 23.8
genie.libs.ops 23.8
genie.libs.parser 23.8
genie.libs.sdk 23.8.1
pyats 23.8
pyats.aereport 23.8
pyats.aetest 23.8
pyats.async 23.8
pyats.connections 23.8
pyats.datastructures 23.8
pyats.easypy 23.8
pyats.kleenex 23.8
pyats.log 23.8
pyats.reporter 23.8
pyats.results 23.8
pyats.tcl 23.8
pyats.topology 23.8
pyats.utils 23.8
rest.connector 23.8
unicon 23.8
unicon.plugins 23.8
yang.connector 23.8
Fetching package list... (it may take some time)
... and updated with:
Package Version
---------------------------- -------------
genie latest (23.9)
genie.libs.clean latest (23.9)
genie.libs.conf latest (23.9)
genie.libs.filetransferutils latest (23.9)
genie.libs.health latest (23.9)
genie.libs.ops latest (23.9)
genie.libs.parser latest (23.9)
genie.libs.sdk latest (23.9)
genie.trafficgen latest (23.9)
pyats latest (23.9)
pyats.aereport latest (23.9)
pyats.aetest latest (23.9)
pyats.async latest (23.9)
pyats.connections latest (23.9)
pyats.datastructures latest (23.9)
pyats.easypy latest (23.9)
pyats.kleenex latest (23.9)
pyats.log latest (23.9)
pyats.reporter latest (23.9)
pyats.results latest (23.9)
pyats.tcl latest (23.9)
pyats.topology latest (23.9)
pyats.utils latest (23.9)
rest.connector latest (23.9)
unicon latest (23.9)
unicon.plugins latest (23.9)
yang.connector latest (23.9)
Are you sure you want to continue [y/N]? y
Uninstalling existing packages...
Installing new packages...
Done! Enjoy!
Troubleshooting pyATS
Let’s quickly touch on some of the common pitfalls and issues
you may run into while installing pyATS and the pyATS library (Genie).
One of the most common issues is forgetting that pyATS is only
compatible with macOS, Linux, or WSL on Windows; it cannot be
installed natively on Windows. This might seem like a simple
fact, but it can be easy to forget. Another common issue is
version mismatches between pyATS and the pyATS library
(Genie). Version mismatches can cause compatibility issues
between the two libraries. To resolve this issue, simply run the
pyats version update command. This command will ensure
both libraries are up to date. Example 2-9 shows an instance
where different versions of pyATS and the pyATS library (Genie)
are installed and how to resolve the issue using the pyats
version update command.
Example 2-9 pyATS and pyATS Library Version Mismatch
(.venv)dan@linux-pc# pyats version update
Checking your current environment...
The following packages will be removed:
Package Version
---------------------------- -------
genie 23.8
genie.libs.clean 23.8.1
genie.libs.conf 23.8
genie.libs.filetransferutils 23.8
genie.libs.health 23.8
genie.libs.ops 23.8
genie.libs.parser 23.8
genie.libs.sdk 23.8.1
genie.trafficgen 23.9
pyats 23.9
pyats.aereport 23.9
pyats.aetest 23.9
pyats.async 23.9
pyats.connections 23.9
pyats.datastructures 23.9
pyats.easypy 23.9
pyats.kleenex 23.9
pyats.log 23.9
pyats.reporter 23.9
pyats.results 23.9
pyats.tcl 23.9
pyats.topology 23.9
pyats.utils 23.9
rest.connector 23.8
unicon 23.9
unicon.plugins 23.9
yang.connector 23.8
Fetching package list... (it may take some time)
... and updated with:
Package Version
---------------------------- -------------
genie latest (23.9)
genie.libs.clean latest (23.9)
genie.libs.conf latest (23.9)
genie.libs.filetransferutils latest (23.9)
genie.libs.health latest (23.9)
genie.libs.ops latest (23.9)
genie.libs.parser latest (23.9)
genie.libs.sdk latest (23.9)
genie.trafficgen latest (23.9)
pyats latest (23.9)
pyats.aereport latest (23.9)
pyats.aetest latest (23.9)
pyats.async latest (23.9)
pyats.connections latest (23.9)
pyats.datastructures latest (23.9)
pyats.easypy latest (23.9)
pyats.kleenex latest (23.9)
pyats.log latest (23.9)
pyats.reporter latest (23.9)
pyats.results latest (23.9)
pyats.tcl latest (23.9)
pyats.topology latest (23.9)
pyats.utils latest (23.9)
rest.connector latest (23.9)
unicon latest (23.9)
unicon.plugins latest (23.9)
yang.connector latest (23.9)
Are you sure you want to continue [y/N]?
Summary
In this chapter, we learned how to install and upgrade pyATS. In
future chapters, we will cover optional packages that can be
installed with pyATS, such as the Robot Framework. Now that
we have pyATS installed, let’s get started with adding devices to
a pyATS testbed!
Chapter 3. Testbeds
A fundamental and foundational part of pyATS is the testbed.
Testbeds are often structured text stored in a YAML file, but
they can also be created dynamically at pyATS job runtime.
Other structured text formats such as XML or JSON could be
used, but the traditional format for most testbed automation
frameworks such as pyATS and XPRESSO is YAML. Testbeds
describe the topology, devices, and even intent, and abstract the
complexity of connecting to our devices using Python. With
nothing more than a simple testbed.yaml file and pyATS
installed in a Python virtual environment, network engineers
can use the pyATS command-line interface (CLI) to interact with
the devices and topology within the testbed. In this chapter we
will explore testbeds and begin the journey into test-driven
development with pyATS.
This chapter covers the following topics:
What is YAML?
What is a testbed?
Device connection abstractions
Testbed validation
Dynamic testbeds
Intent-based networking with extended testbeds
What Is YAML?
YAML, which stands for "YAML Ain’t Markup Language" (or
sometimes "Yet Another Markup Language"), is a human-
readable data serialization format. It is often used for
configuration files and data exchange between languages with
different data structures. YAML is a superset of JSON, which
means that any valid JSON file is also a valid YAML file. Here
are some key characteristics and features of YAML:
Human-readable: YAML is designed to be easily readable by
humans. Its indentation-based structure helps in representing
hierarchical data in a clear manner.
Indentation: Unlike JSON, which uses braces ({}) and brackets
([]) to denote objects and arrays, respectively, YAML relies on
indentation (usually spaces) to represent nesting.
Scalars: YAML has support for string, integer, and floating-
point types. Strings in YAML don’t always require quotation
marks.
Data Structures: YAML supports both lists (arrays) and
associative arrays (hashes or dictionaries).
Multiline Strings: YAML provides multiple ways to represent
strings that span multiple lines.
Comments: YAML allows for comments using the # symbol.
No Explicit End Delimiter: Unlike some formats that require
an explicit end delimiter, YAML does not.
Aliases and Anchors: YAML supports referencing, which
allows for creating references to other items within a YAML
document.
Three dashes (---): Indicate the beginning of a YAML
document.
Example 3-1 demonstrates an example of a simple YAML file:
Example 3-1 A simple YAML example
---
name: John Capobianco
age: 16
is_student: false
courses:
  - Math
  - Physics
  - Chemistry
address:
  street: 123 Main St
  city: Ottawa
  province: Ontario
This YAML snippet represents a person with some personal
details, a list of courses, and an address. The same data in JSON
would require more punctuation and might be less readable.
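To make the comparison concrete, the sketch below (using only Python's standard library) rebuilds the record from Example 3-1 as a Python dictionary and prints its JSON form; notice how much punctuation JSON requires that YAML avoids:

```python
import json

# The same person record from Example 3-1, as a Python dictionary
person = {
    "name": "John Capobianco",
    "age": 16,
    "is_student": False,
    "courses": ["Math", "Physics", "Chemistry"],
    "address": {
        "street": "123 Main St",
        "city": "Ottawa",
        "province": "Ontario",
    },
}

# JSON needs braces, brackets, and quoted keys; YAML expresses the
# same structure with indentation alone
print(json.dumps(person, indent=2))
```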
YAML is commonly used in various applications, including
configuration for software tools, data exchange between
languages, and as the data format for certain applications like
Ansible and Kubernetes. pyATS expects testbed data, either in
file format or dynamic creation, to be YAML. When people
say “Infrastructure as Code,” they are often referring to some
form of YAML, depending on the context.
What Is a Testbed?
Testbeds are a core component of pyATS that describe and
abstract network topologies, devices, and links. Testbeds can be
expressed as YAML files but can also be objects in memory that
are dynamically created. A testbed in the context of pyATS
refers to the environment in which tests are executed. This
environment includes devices, servers, connections,
credentials, and other elements that are part of the network
topology. The testbed provides pyATS with all the necessary
information to connect to and interact with devices in the
network. There are many benefits to using a testbed, including:
Consistency: By defining the test environment in a structured
testbed file, you ensure consistency across test runs.
Flexibility: Testbed files can be easily shared, modified, or
swapped, allowing for flexibility in testing different
environments.
Scalability: pyATS can scale from testing a few devices in a
lab to testing a large, complex production network.
The topology module is designed to provide an intuitive and
standardized method for users to define, handle and query
testbed/device/interface/link description, metadata, and their
interconnections. There are two major functionalities of the
topology module:
1. Define and describe testbed metadata using YAML,
standardizing the format of the YAML file, and load it into
corresponding testbed objects.
2. Query testbed topology, metadata, and interconnect
information via testbed object attributes and properties.
As opposed to creating a module where the topology
information is stored internally and asking users to query that
information via API calls, the pyATS topology module approaches
the design from a completely different angle:
Using objects to represent real-world testbed devices
Using object attributes & properties to store testbed
information and meta-data
Using object relationships (references/pointers to other
objects) to represent topology interconnects
Using object references and Python garbage collection to clean
up testbed leftovers when objects are no longer referenced.
Figure 3-1 should give a good high-level pictorial view of how
topology objects are referenced and interconnected.
Figure 3-1 Testbed Topology Objects
The testbed object is the top container object, containing all
testbed devices and all subsequent information that is generic
to the testbed. Within a testbed, links and device names must be
unique. Table 3-1 shows the complete list of testbed object
attributes that are available.
Table 3-1 Testbed Attribute List
Building a Simple Testbed
Let’s create a simple network testbed file, called testbed.yaml,
with a single device. First, we will define devices: as the parent key
followed by our single device description as YAML. Start with
another key, nested and indented to make the YAML valid,
called csr1000v-1. Inside this device parent key, we will add the
necessary fields to allow pyATS to automatically connect and
interact with the device. We will need the following keys and
values in YAML:
alias: This is an optional field that you can use as an alias for
the hostname of the device.
type: Another optional field that can classify the device type,
such as router or switch.
os: The operating system field is required and is very
important as it is used by pyATS to correctly identify the parsing
library used for this device’s operating system. Some valid
choices include ios, iosxe, nxos, asa, junos, and others. For a full
list of supported operating systems please visit: Genie - Parsers
(devnetcloud.com)
platform: Another required key that helps pyATS select the
valid parsing library. It represents the hardware platform of the
device, such as c9300 or isr in the case of a Catalyst 9300 or
Cisco ISR.
credentials: A device can have multiple credentials including
a default set used by pyATS to authenticate and login to devices.
It is strongly recommended to use secret strings to encrypt
your password (at a minimum). Secret strings are covered in
Appendix B - “Secret Strings”.
connections: Connections indicate the various ways pyATS
can connect to a device such as CLI or REST. Within the
connection method there are sub keys that indicate the
protocol, IP address, port, as well as arguments such as
connection timeout.
Example 3-2 demonstrates a simple testbed with a single device.
Example 3-2 A Simple Testbed Example with a Single Device
---
devices:
  csr1000v-1:
    alias: 'DevNet_Sandbox_CSR1000v'
    type: 'router'
    os: 'iosxe'
    platform: isr
    credentials:
      default:
        username: developer
        password: C1sco12345
    connections:
      cli:
        protocol: ssh
        ip: sandbox-iosxe-latest-1.cisco.com
        port: 22
        arguments:
          connection_timeout: 360
Edge Cases
A few important optional additions that can be used in special
situations should be noted. If you are connecting to legacy
devices or get the following error:
Unable to negotiate with <device ip> port 22: no matching key exchange method
found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
You can add ssh_options to the CLI settings to handle SSH
options as demonstrated in Example 3-3.
Example 3-3 Adding ssh_options to Your CLI Settings
connections:
  cli:
    protocol: ssh
    ip: sandbox-iosxe-recomm-1.cisco.com
    port: 22
    ssh_options: -o KexAlgorithms=+diffie-hellman-group-exchange-sha1 -o HostKeyAlgorithms=+ssh-rsa
    arguments:
      connection_timeout: 360
Another very important consideration is the initial exec
commands and initial configuration commands pyATS executes
upon successful connection to a device. By default, pyATS will
adjust certain terminal settings when it first connects to a
device. Depending on your device terminal settings, when you
connect to a device using a CLI and execute a command, you
may sometimes see “press any key to continue”. For humans,
this pause provides an opportunity to analyze the output. From an
automation point of view, however, such prompts break parsers
because they alter the output data. To avoid this, Unicon (the
pyATS connection implementation) issues the following commands
when a connection is established:
no logging console
terminal width 511
possibly vty settings depending on implementation
All of these commands affect the terminal behavior, not your
device’s functionality. To disable default configuration in your
testbed, override the init exec and init config commands, as
demonstrated in Example 3-4.
Example 3-4 Adding Additional Arguments to Specify Not to Run
Any Initial exec or config Commands
connections:
  cli:
    protocol: ssh
    ip: sandbox-iosxe-recomm-1.cisco.com
    port: 22
    arguments:
      connection_timeout: 360
      init_exec_commands: []
      init_config_commands: []
Using the CLI is only one way of connecting to a device with a
testbed. Example 3-5 demonstrates another example, this time,
using RESTCONF.
Example 3-5 A Testbed Example Using RESTCONF
---
devices:
  csr1000v-1:
    alias: 'sandbox'
    type: 'router'
    os: 'iosxe'
    platform: csr1000v
    connections:
      rest:
        # Rest connector class
        class: rest.connector.Rest
        ip: sandbox-iosxe-latest-1.cisco.com
        port: 443
        credentials:
          rest:
            username: developer
            password: C1sco12345
External Sources of Truth
If you are thinking to yourself at this point, “But I already have
all of my devices in a source of truth,” and want to avoid the
manual effort of creating a testbed for your topology, there are
several approaches. If you are using Software-Defined Networking
(SDN) controllers or a NetBox or IPAM solution, you could use
that as your source of truth to create the testbed.yaml file. First,
you could convert your source of truth into a pyATS-ready
comma-separated values (CSV, XLS, etc.) file and use
the following steps to convert it to a valid YAML testbed file. The
pyats create testbed command automatically converts the input
and creates an equivalent YAML file. Follow these guidelines to
create a valid YAML file:
Separate the IP and port with either a space or a colon (:).
The password column is the default password used to log in to
the device.
If you leave the password blank, the system prompts you for
the password when you connect to the device.
To enter privileged EXEC mode with the enable command, add
a column with the header enable_password. The value can be
the same as or different from the default password.
Any additional columns that you define, such as platform,
alias or type, are added to the YAML file as key-value pairs.
The columns can be in any order, as long as you include the
required columns.
When creating a CSV file, separate fields with a comma (,). If
you need a text qualifier, use double quotes (").
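Following these guidelines, a device CSV might look like the one embedded in this short sketch. The hostnames, addresses, and credentials are made up for illustration, and Python's csv module is used only to show that each column lands as a key-value pair:

```python
import csv
import io

# A hypothetical my_devices.csv following the guidelines above: the IP
# and port are separated by a colon, and os/platform are extra columns
# that become key-value pairs in the generated testbed YAML
sample = """hostname,ip,username,password,protocol,os,platform
router1,10.10.20.1:22,admin,Secret123,ssh,iosxe,isr
switch1,10.10.20.2:22,admin,Secret123,ssh,nxos,n9k
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    # Split the combined ip:port column, as the converter would
    ip, port = row["ip"].split(":")
    print(f"{row['hostname']}: {ip} port {port} ({row['os']})")
```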
When you’re ready to create the YAML file, from your virtual
environment, run the following command:
(pyats) $ pyats create testbed file --path my_devices.csv --output testbed.yaml
Add the --encode-password option to hide the password in the
YAML file as a secret string. Note that this only obfuscates the
password—it does not make the password cryptographically
secure.
Note
One of the authors, John Capobianco, has
published a Cisco Digital Network Architecture
Center (DNAC) to testbed conversion tool you could
use if you have a DNAC as your source of truth.
automateyournetwork/dnac_pyats_testbed: Python
code that generates a pyATS Testbed from DNAC as
a source of truth (github.com)
Other sources of truth, like NetBox (netbox.dev), can also
be used in a similar fashion by using their APIs to extract
the necessary keys and values to create a testbed file.
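As a rough sketch of the source-of-truth approach, the code below converts a simplified, hypothetical NetBox-style device list into the dictionary structure a testbed YAML file would contain; a real integration would fetch the inventory from the NetBox REST API and serialize the result to YAML:

```python
# Hypothetical, simplified records as they might come back from a
# NetBox-style API; a real integration would call the REST API instead
inventory = [
    {"name": "edge-rtr-1", "primary_ip": "10.1.1.1", "platform": "iosxe"},
    {"name": "core-sw-1", "primary_ip": "10.1.1.2", "platform": "nxos"},
]

# Build the same structure a testbed.yaml file would contain
testbed = {"devices": {}}
for item in inventory:
    testbed["devices"][item["name"]] = {
        "os": item["platform"],
        "connections": {
            "cli": {"protocol": "ssh", "ip": item["primary_ip"], "port": 22}
        },
        # Credentials would normally come from a vault, or use secret
        # strings / %ASK{} rather than hard-coded values
        "credentials": {"default": {"username": "admin", "password": "%ASK{}"}},
    }

print(sorted(testbed["devices"]))
```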
In pyATS, everything is an object. Like testbeds, individual
devices are also objects with their own accessible properties.
Device objects represent any piece of physical and/or virtual
hardware that constitutes an important part of a testbed
topology:
Each device may belong to a testbed (added to a Testbed
object)
Each device may host arbitrary number of interfaces
(Interface objects)
Interface names must be unique within a device
Table 3-2 outlines the complete list of device object attributes
that are available.
Table 3-2 Device Attribute List
Interfaces are also objects. Interface objects represent any piece
of physical/virtual interface/port that connects to a link of some
sort (e.g., Ethernet, SVI, Loopback). It is important to understand
that:
each interface connects to a single link (Link object)
each interface should belong to a parent device (Device object)
within a parent device, each interface name needs to be
unique
Interfaces can be treated as objects, with attributes, listed in
Table 3-3.
Table 3-3 Interface Attribute List
Finally, device objects can be linked together via interface
objects using link objects. Link objects represent the connection
between two or more interfaces within a testbed topology. Note
that in the case of a link connected to more than two interfaces,
the link can also be interpreted as a Layer 2 switch:
Links may contain one or more interfaces (Interface object)
Link names within a testbed must be unique
Much like interfaces, the actual links themselves can be treated
as objects with attributes, as displayed in Table 3-4.
Table 3-4 Link Attribute List
Using these Python objects, an entire network topology could be
described using structured data in YAML as a single testbed.
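As a sketch of that idea, the following fragment (device names and addresses are hypothetical, and credentials are omitted for brevity) shows how a topology section ties two device interfaces together through a shared link name:

```yaml
devices:
  r1:
    os: iosxe
    connections:
      cli:
        protocol: ssh
        ip: 10.10.20.1
  r2:
    os: iosxe
    connections:
      cli:
        protocol: ssh
        ip: 10.10.20.2

# Interfaces reference a common link name; pyATS builds the
# Interface and Link objects from this section
topology:
  r1:
    interfaces:
      GigabitEthernet1:
        type: ethernet
        link: r1-r2-link
  r2:
    interfaces:
      GigabitEthernet1:
        type: ethernet
        link: r1-r2-link
```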
We briefly mentioned Unicon as the pyATS connection
implementation. We also mentioned some important keys in the
testbed such as os and platform. Let’s take a look at Unicon
briefly and device connection abstractions.
Device Connection Abstractions
pyATS abstracts the complexity of connecting to network
devices, making it easy and accessible for those new to network
automation. Let’s take a look at the simple steps required to
connect to a device from the Python interpreter. You can run
the commands in the following examples on real devices, if you
have them available. If you don’t have a real device to practice
with, pyATS offers a mock device that you can use with most of
the pyATS Library examples.
Download the zip file that contains the mock data and YAML file
here:
https://pubhub.devnetcloud.com/media/pyats-getting-started/docs/_downloads/04c1c0ffd3a875e85db16c7408c0f784/mock.zip
Extract the files to a location of your choice and keep the zip file
structure intact. This example uses the directory mock.
Activate your virtual environment (refer to Chapter 2,
“Installing and Upgrading pyATS,” for reference) and change to
the directory that contains the mock.yaml file. The mock
feature is location-sensitive, so make sure you are in that
directory:
First, let’s assume you have the following folder structure:
my_project/
│
├── mock/
│ └── mock.yaml
│
└── venv/
my_project/ is your main project directory.
mock/ contains your mock testbed file mock.yaml.
venv/ is your Python virtual environment directory.
If you haven’t already created a virtual environment in your
project, do so with:
$ python -m venv venv
Activate the virtual environment:
On Windows:
$ venv\Scripts\activate
On macOS and Linux:
$ source venv/bin/activate
Then, you can change directories into the mock folder:
(pyats) $ cd mock
Open the Python interpreter:
(pyats) $ python
Load the pyATS Library testbed API so that you can create the
testbed and device objects:
from genie.testbed import load
Create a testbed object (tb) based on your testbed YAML file.
Specify the absolute or relative path, in this case,
mock/mock.yaml:
tb = load('mock.yaml')
Result: The system creates a variable tb that points to the
testbed object. This command also creates tb.devices, which
contains the YAML device information in the form of key-value
pairs.
Create an object (dev) for the device that you want to connect
to:
dev = tb.devices['nx-osv-1']
Result: The pyATS Library finds the device named nx-osv-1 in
tb.devices and stores the information in the dev object.
Connect using the values stored in the device object:
dev.connect()
Result: The system connects to the device and displays the
connection details. Once you’re connected, you can run show
commands and parse the output. To exit the Python interpreter:
exit()
You can put all of these commands into a single Python script!
How does pyATS achieve this? Unicon is the pyATS connection
implementation that handles the SSH connection to the
device in the preceding example. Unicon is a library developed by
Cisco as part of the pyATS framework. Here’s a brief overview
of Unicon and its role within the pyATS framework:
Purpose: Unicon provides a unified connectivity interface to
network devices. It abstracts the underlying connection
mechanisms (like SSH, Telnet, etc.) and provides a consistent
interface for interacting with devices, regardless of the
connection method or device type.
Device Independence: One of the main features of Unicon is
its ability to work with a wide range of network devices,
regardless of the vendor or platform. This is achieved through
plugins that cater to specific device types.
State Machine: Unicon uses a state machine model to
understand and manage the different states a device can be in
(e.g., exec mode, config mode, etc.). This allows for intelligent
interactions with the device, ensuring that commands are
executed in the appropriate context.
Ease of Use: With Unicon, users can easily establish
connections, execute commands, and retrieve results without
having to deal with the intricacies of different connection
methods or device peculiarities.
Integration with pyATS: While Unicon can be used as a
standalone library, it is tightly integrated with the pyATS
framework. This means that when you’re using pyATS for
network testing or automation, Unicon handles the device
connectivity and interactions seamlessly in the background.
Let’s delve deeper into the technical aspects of Unicon:
Architecture:
Core: The core of Unicon provides the basic building blocks
for device connectivity, including the state machine, connection
mechanisms, and basic command execution.
Plugins: The extensibility of Unicon is achieved through
plugins. Each plugin is tailored for a specific device or platform,
encapsulating the nuances and peculiarities of that device. This
allows Unicon to support a wide range of devices without
bloating the core.
State Machine:
Unicon’s state machine is a representation of the different
modes or states a device can be in (e.g., exec mode, config mode,
shell mode).
Transitions define how to move from one state to another,
often involving sending specific commands or sequences.
The state machine ensures that commands are executed in the
correct context and provides mechanisms to recover from
errors or unexpected states.
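The state machine concept can be illustrated with a toy sketch. This is a conceptual example only, not Unicon's actual implementation: states are device modes, and transitions name the command that moves between them.

```python
from collections import deque

# Toy model of device modes and the commands that move between them;
# Unicon's real state machine is far richer (patterns, dialogs, recovery)
TRANSITIONS = {
    ("exec", "enable"): "enable",
    ("enable", "config"): "configure terminal",
    ("config", "enable"): "end",
    ("enable", "exec"): "disable",
}

def path_commands(start, goal):
    """Return the commands to walk from the start state to the goal state."""
    # Breadth-first search over the transition graph
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, cmds = queue.popleft()
        if state == goal:
            return cmds
        for (src, dst), cmd in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, cmds + [cmd]))
    raise ValueError(f"no path from {start} to {goal}")

print(path_commands("exec", "config"))
```

Unicon's real state machine layers prompt patterns, dialogs, and error recovery on top of this basic walk-the-graph idea.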
Connection Providers:
Unicon supports multiple connection methods, such as SSH,
Telnet, and console connections.
The connection provider abstracts the underlying connection
mechanism, ensuring that the user interacts with the device in
a consistent manner, regardless of the connection method.
Service Framework:
Services in Unicon are high-level operations that users might
want to perform on a device, such as executing a command,
transferring a file, or reloading the device.
Each service is implemented as a callable object, making it
easy to extend and customize.
Logging and Debugging:
Unicon provides extensive logging capabilities, capturing all
interactions with the device. This is invaluable for debugging
and understanding device behavior.
The logs can be configured to capture different levels of detail,
from high-level operations to the raw bytes sent and received.
Patterns and Dialogs:
Interacting with devices often involves recognizing specific
prompts or messages and responding appropriately. Unicon
uses regular expressions (patterns) to identify these.
Dialogs are sequences of expected patterns and responses.
They allow Unicon to handle complex interactions, such as
logging in, handling prompts, or navigating through device
menus.
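A minimal illustration of pattern-based prompt recognition follows; the regular expressions here are simplified examples, not Unicon's actual patterns:

```python
import re

# Simplified prompt patterns, keyed by the state they indicate
PROMPTS = {
    "exec": re.compile(r"^[\w\-]+>\s*$"),
    "enable": re.compile(r"^[\w\-]+#\s*$"),
    "config": re.compile(r"^[\w\-]+\(config[^)]*\)#\s*$"),
}

def classify(prompt):
    """Return which device state a prompt line indicates, if any."""
    for state, pattern in PROMPTS.items():
        if pattern.match(prompt):
            return state
    return None

print(classify("csr1000v-1#"))          # an enable-mode prompt
print(classify("csr1000v-1(config)#"))  # a configuration-mode prompt
```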
Exception Handling: Unicon is designed to handle exceptions
gracefully. If an unexpected event occurs (e.g., a timeout,
unrecognized prompt, or connection drop), Unicon can attempt
to recover the session or raise a meaningful exception to the
user.
Performance: Unicon is optimized for performance, ensuring
that interactions with devices are fast and efficient. This is
especially important in large-scale network testing scenarios.
Performance is a big reason why pyATS is preferred for
large-scale network topologies over other, much slower,
network automation frameworks.
Extensibility: One of the strengths of Unicon is its
extensibility. Users can easily add support for new devices,
customize existing behaviors, or add new services by extending
the core classes and leveraging the plugin architecture.
In essence, Unicon provides a robust and flexible framework
for device connectivity, abstracting the complexities and
ensuring that users can focus on their automation tasks rather
than the intricacies of device interactions.
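To make the pattern-and-dialog idea concrete, here is a minimal pure-Python sketch of how pattern matching can drive scripted responses. This is an illustration of the concept only, not Unicon's actual API; the prompts and replies are made up:

```python
import re

# A toy "dialog": ordered (pattern, response) pairs, loosely modeled on the
# idea behind Unicon dialogs. Illustration only; not Unicon's API.
dialog = [
    (re.compile(r"[Uu]sername:"), "admin"),
    (re.compile(r"[Pp]assword:"), "cisco123"),
    (re.compile(r"Are you sure\? \[yes/no\]"), "yes"),
]

def respond(prompt):
    """Return the scripted response for the first pattern matching the prompt."""
    for pattern, response in dialog:
        if pattern.search(prompt):
            return response
    return None  # unrecognized prompt; a real framework might raise or recover

print(respond("Username:"))               # -> admin
print(respond("Are you sure? [yes/no]"))  # -> yes
```

A real Unicon dialog adds timers, loop control, and state-machine awareness on top of this basic pattern-to-response mapping.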
Testbed Validation
Testbeds can easily be validated using either YAML lint or the
built-in pyATS testbed validation command. Software linting,
often simply referred to as "linting," is the process of running a
program (called a "linter") to analyze source code for potential
errors, bugs, stylistic issues, and suspicious constructs. The term
"lint" originally referred to unwanted fluff or fuzz on clothing,
and in the context of software, it refers to "unwanted" or
"suspicious" parts of the code. Here are some key points about
linting:
Static analysis: Linting is a form of static code analysis, which
means it examines the source code without executing it. This is
in contrast to dynamic analysis, which analyzes software by
executing it.
Code quality: Linters not only identify potential errors but
also enforce coding standards and styles. This helps maintain a
consistent codebase, especially in projects with multiple
contributors.
Customizability: Most linters allow users to configure which
rules to enforce, enabling teams to adopt their own coding
standards.
Integration: Linters can be integrated into the software
development workflow in various ways:
IDE/Editor integration: Many integrated development
environments (IDEs) and code editors have built-in support or
plugins for linting, providing real-time feedback as developers
write code.
Pre-commit hooks: Linters can be set up as pre-commit hooks
in version control systems, ensuring code is linted before it’s
committed.
Continuous Integration (CI): Linting can be a step in the CI
process, preventing code that doesn’t meet the linting criteria
from being merged.
Common linters: There are linters available for almost every
programming language. Some popular ones include:
ESLint for JavaScript
Pylint for Python
RuboCop for Ruby
golint for Go
TSLint (now deprecated in favor of ESLint) for TypeScript
Benefits:
Bug Detection: Linters can catch common programming
errors, such as undeclared variables, unused variables, or
mismatched parentheses.
Code Readability: By enforcing a consistent style, linters help
make code more readable for all team members.
Learning: Especially for beginners, linters can be educational,
pointing out best practices and potential pitfalls.
Limitations:
False Positives: Linters can sometimes flag code that is
technically correct but appears suspicious. It’s up to the
developer to determine whether the warning is relevant.
Not a Replacement for Testing: While linting can catch
certain types of errors, it’s not a substitute for thorough testing,
including unit tests, integration tests, and end-to-end tests.
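As a concrete example of linter customizability, yamllint (used in the next section) reads its rules from a .yamllint configuration file. The sketch below is illustrative; the rule values shown are not yamllint's defaults:

```yaml
# .yamllint -- example rule configuration (values are illustrative)
extends: default

rules:
  line-length:
    max: 120          # relax the default 80-character limit
  indentation:
    spaces: 2         # require two-space indentation
  trailing-spaces: enable
```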
yamllint is a Python package used to validate YAML files. To install yamllint, use pip:
$ pip install yamllint
Then you can yamllint your testbed.yaml file to check it for
errors prior to executing pyATS.
$ yamllint testbed.yaml
$
If yamllint returns no output, your testbed.yaml file is valid; otherwise, yamllint displays the warnings or errors found in the file. An alternative to yamllint is to use the built-in pyATS testbed validation command:
$ pyats validate testbed testbed.yaml
Example 3-6 shows an example of a pyATS testbed validation report, including a warning indicating that the device has no interface definitions.
Example 3-6 Testbed Validation Report
Loading testbed file: testbed.yaml
------------------------------------------------
Testbed Name:
testbed
Testbed Devices:
.
`-- csr1000v-1 [iosxe/csr1000v]
YAML Lint Messages
------------------
Warning Messages
----------------
- Device 'csr1000v-1' has no interface definitions
It is good practice to validate your testbeds after creation or
modification in order to prevent issues with pyATS as well as to
confirm you are committing valid YAML to your code base.
There are certain situations where you will not need to manually create a testbed.yaml file. Testbeds are objects that follow structured YAML syntax; however, they can also be built dynamically at runtime.
Dynamic Testbeds
Testbeds are objects that follow a particular syntax and do not necessarily need to be saved as a YAML file. The official pyATS documentation suggests that a YAML file is the easier and preferred method of defining testbeds; however, there could be situations where you already have a source of truth, with the information required to build a testbed already populating another system. Or you might simply be testing and want to build the testbed object dynamically in your code. Example 3-7 demonstrates pyATS manual testbed creation.
Example 3-7 pyATS Manual Testbed Creation
# Example
# -------
#
#   creating a simple testbed topology from scratch

# import testbed objects
from pyats.topology import Testbed, Device, Interface, Link

# create your testbed
testbed = Testbed('manuallyCreatedTestbed',
                  alias = 'iWishThisWasYaml',
                  passwords = {
                      'tacacs': 'lab',
                      'enable': 'lab',
                  },
                  servers = {
                      'tftp': {
                          'name': 'my-tftp-server',
                          'address': '10.1.1.1',
                      },
                  })

# create your devices
device = Device('tediousProcess',
                alias = 'gimmyYaml',
                connections = {
                    'a': {
                        'protocol': 'telnet',
                        'ip': '192.168.1.1',
                        'port': 80,
                    }
                })

# create your interfaces
interface_a = Interface('Ethernet1/1',
                        type = 'ethernet',
                        ipv4 = '1.1.1.1')
interface_b = Interface('Ethernet1/2',
                        type = 'ethernet',
                        ipv4 = '1.1.1.2')

# create your links
link = Link('ethernet-1')

# now let's hook up everything together
# define the relationship.
device.testbed = testbed
device.add_interface(interface_a)
device.add_interface(interface_b)
interface_a.link = link
interface_b.link = link
Here is another real-world example from a Django project.
Django, like pyATS, is a Python framework but instead of
focusing on network automation Django focuses on web
development. In this Django project all of the required network
device information is stored in a PostgreSQL database. The
database table is loaded into Python and then we assemble the
testbed dynamically at runtime. This approach also scales: the testbed could contain 500 devices from the Django PostgreSQL database! Example 3-8 demonstrates how to dynamically build a pyATS testbed at runtime from a PostgreSQL database:
Example 3-8 Dynamic testbed from Django PostgreSQL
from catalyst.models import Devices
from pyats.topology import Testbed, Device

def main(runtime):
    # Query the database for All Devices
    device_list = Devices.objects.all()

    # Create Testbed
    testbed = Testbed('dynamicallyCreatedTestbed')

    # Create Devices
    for device in device_list:
        testbed_device = Device(device.hostname,
                                alias = device.alias,
                                type = device.device_type,
                                os = device.os,
                                credentials = {
                                    'default': {
                                        'username': device.username,
                                        'password': device.password,
                                    }
                                },
                                connections = {
                                    'cli': {
                                        'protocol': device.protocol,
                                        'host': device.ip_address,
                                        'port': device.port,
                                        'ssh_options': device.ssh_options,
                                        'arguments': {
                                            'connection_timeout': 10,
                                        }
                                    }
                                })
        # define the relationship.
        testbed_device.testbed = testbed
Testbeds can represent more than just your topology, devices,
interfaces, and links; they can be extended to express intent.
Let’s take a look at how to extend and customize your testbeds.
Intent-based Networking with Extended Testbeds
Intent-based networking relies on a source of truth as an
absolute gold standard that describes the intended
configuration or state of an individual device or an entire
topology. Intent can be enforced through testing and
configuration management with pyATS by extending the base
testbed object to include customized keys and values. Create another file, called intent.yaml, and include the following lines at the top of the file to extend your original testbed.yaml file:
---
extends: testbed.yaml
devices:
  csr1000v-1:
Continue with a custom key that then describes your intent.
Example 3-9 demonstrates setting some global parameters as
well as some per-interface intentions.
Example 3-9 Extending a testbed to include intent
extends: testbed.yaml

defaults:
  domain_name: "lab.devnetsandbox.local"
  ntp_server: 192.168.100.100

devices:
  csr1000v-1:
    custom:
      interfaces:
        GigabitEthernet1:
          type: ethernet
          description: "Link to ISP"
          enabled: True
        GigabitEthernet2:
          type: ethernet
          description: "Unused"
          enabled: False
        GigabitEthernet3:
          type: ethernet
          description: "Unused"
          enabled: False
        Loopback100:
          type: ethernet
          description: "Primary Loopback"
          enabled: True
When you run pyATS and specify the --testbed-file parameter, or include a testbed file in a job, point to the intent.yaml file, which extends testbed.yaml, to ensure your intent is loaded.
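Conceptually, extends performs a recursive merge of the two YAML documents, with keys from the extending file layered on top of the base. A rough pure-Python sketch of that merge behavior (illustrative only, not pyATS's actual implementation):

```python
def deep_merge(base, overlay):
    """Recursively merge `overlay` into `base`; overlay keys win on conflict."""
    merged = dict(base)
    for key, value in overlay.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# testbed.yaml fragment (as a dict) plus an intent overlay
testbed = {"devices": {"csr1000v-1": {"os": "iosxe"}}}
intent = {"devices": {"csr1000v-1": {"custom": {
    "interfaces": {"GigabitEthernet1": {"enabled": True}}}}}}

merged = deep_merge(testbed, intent)
# both the base 'os' key and the overlaid 'custom' key survive the merge
print(merged["devices"]["csr1000v-1"])
```

The result keeps the base testbed's keys while layering in the custom intent keys, which is why the extended file can add intent without repeating the topology.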
Now that you are describing your intent, you can test the state or configuration of your network device or topology and validate that the running state of the device matches your intended configuration. Example 3-10 provides an example of a pyATS test, using the @aetest decorator (which will be covered in Chapter 4, "AETest Test Infrastructure"), that checks whether the interface description matches the intended description:
Example 3-10 AETest Testing the Actual Interface Description
Matches the Intended Description
@aetest.test
def test_interface_description_matches_intent(self):
    for actual_interface, actual_value in self.parsed_interfaces.info.items():
        actual_desc = actual_value.get('description')
        for intent_interface, intent_value in self.device.custom.interfaces.items():
            if actual_interface == intent_interface:
                self.intended_desc = intent_value['description']
                if actual_desc != self.intended_desc:
                    self.failed("The interface description does not match the intended description")
We can include code that enforces our intent as well using the
.configure() device method as demonstrated in Example 3-11.
Example 3-11 Enforcing intent
if actual_desc != self.intended_desc:
    self.update_interface_description()
    self.failed("The interface description does not match the intended description")

def update_interface_description(self):
    self.device.configure(f'''interface { self.interface }
    description { self.intended_desc }
    ''')
Intent-based networking using pyATS and extended testbeds is the foundation for a modern network automation CI/CD pipeline. Once all intent tests are passed and enforced, the
infrastructure as code artifacts are stored in a Git repository. All
future changes to the network are done using Git branches,
code reviews, testing, and pull requests that merge the code
into the code base. This process kicks off a series of automated
tasks like running the pyATS job to deploy and test the updated
intended network configuration. CI/CD pipelines will be covered
in Chapter 21, “CI/CD with pyATS.”
Summary
Testbeds are foundational to network automation with pyATS.
This chapter delved into the intricacies of network testing and
configuration, starting with an introduction to YAML. YAML,
which stands for "YAML Ain’t Markup Language," is a human-
readable data serialization format. It is often used for
configuration files and data exchange between languages with
different data structures.
Next, the chapter introduced the concept of a testbed. A testbed
is a controlled environment where network devices and
systems can be tested and validated before being deployed to
production. This environment ensures that new configurations
or software won’t adversely affect the existing network setup.
The discussion then shifted to device connection abstractions.
This section emphasized the importance of abstracting device
connections, allowing for a more streamlined and consistent
approach to connecting various devices, regardless of their
underlying differences.
Testbed validation is another crucial topic covered. It
underscores the importance of ensuring that the testbed
environment accurately represents the intended production
environment. This validation ensures that tests conducted in
the testbed will yield results relevant to the real-world scenario.
The chapter then explored the concept of dynamic testbeds.
Unlike static testbeds, dynamic testbeds can adapt, and change
based on the requirements of the tests being conducted. This
flexibility ensures that the test environment is always
optimized for the specific test scenario.
Lastly, the chapter delved into intent-based networking (IBN)
with extended testbeds. IBN is a form of network
administration that uses artificial intelligence and machine
learning to automate administrative tasks. When combined
with extended testbeds, IBN can lead to more efficient and
accurate testing scenarios, ensuring that the network’s intent
aligns with its configuration and performance.
Chapter 4. AETest Test Infrastructure
You may have heard about testing code. Code testing allows you
to verify your code produces the results you’re expecting. This
is important, as it helps minimize, not remove, bugs in your
code. Code testing also encapsulates the idea of regression
testing. In the simplest terms, regression testing ensures new
code updates do not introduce new bugs. Regression testing
becomes more important as the codebase grows. The last
concept you may hear about when it comes to testing code is
code coverage. Code coverage is the amount of code in your
codebase that is "covered" by a test. Many times, it's assumed that more code coverage equals fewer bugs; however, that's
simply not true. Tests are only as good as they are written. If
your tests are poorly written, then no amount of code coverage
can save you from bugs.
Two of the most popular Python testing frameworks are
unittest and pytest. unittest is included in the Python standard
library and does not require any additional installation. pytest,
on the other hand, is a separate library and requires
installation. unittest and pytest are different in their own ways,
but both have the same goal of allowing developers to write
tests and verify their code is running as expected. Now
substitute the word “code” for “network”—write tests and verify
your network is running as expected. Doesn’t that sound
amazing? This defines Automation Easy Testing (AEtest)—the
testing framework for the network.
In this chapter, we will cover the following topics:
Getting Started with AEtest
Testscript Structure
AEtest Object Model
Runtime Behavior
Test Parameters
Test Results
Running Testscripts
Processors
Testscript Flow Control
Reporting
Debugging
Getting Started with AEtest
The goal of AEtest is to standardize the definition and execution
of testcases against the network. In this section, we are going to
cover the basics: Ensuring the AEtest module is installed and
reviewing the design features and core concepts of the
framework.
Installation
The AEtest module is included as part of the default pyATS
installation. To ensure the module is installed, run the pip list
command within your Python environment. In the list of
installed packages, you should see pyats.aetest listed, along
with many other pyATS modules. If you don’t see the
pyats.aetest module listed, I recommend reinstalling pyATS as
described in Chapter 2, "Installing and Upgrading pyATS."
Design Features
AEtest drew its design from two popular Python testing tools,
unittest and pytest. If you’re familiar with either library, the
structure and design of AEtest may be familiar. With that said,
let’s review the design features of AEtest.
AEtest is built with a Pythonic object-oriented approach. This comes from the well-known Object-Oriented Programming (OOP) paradigm, where the design is centered around classes and objects rather than functions. From a network
perspective, think about all the individual components that
make up a network—interfaces, links, devices, etc. These are all
considered “objects” and can be implemented as such in Python
using an OOP approach. Moving on, another design feature is
using a block-based approach to test sections. We are going to
review each test section, but here’s a quick breakdown of the
approach:
Common Setup with Subsections
Testcases with setup/tests/cleanup
Common Cleanup with Subsections
Each block listed has a purpose that will be explained further in
the chapter. The next design feature is two-fold. AEtest was built
to be highly modular and extensible, which in turn, allows
testcase inheritance, dynamic testcase generation, a custom runner for testable objects, and a customizable reporter. The
last design feature is what allows the tool to scale and cover
multiple network use cases. AEtest provides enhanced looping
and testcase parametrization. Enhanced looping allows the
same test(s) to be reused with different parameters. This is huge
as looping cuts down on the need to write multiple tests that
only require slight variation. Looping and testcase
parametrization will be covered in further detail later in the
chapter. Going through these design features should help set expectations and provide a mental model for AEtest. Next, let's look at some of the core concepts of AEtest.
Core Concepts
The core concepts of AEtest are brief but define the boundaries of the framework. Here are the three core concepts:
Main sections must be subdivided
Sections must be explicitly declared
Import, inspect, and run
Subdividing the main sections enhances the readability of the
code. You can also quickly identify the section that failed in
results. Imported sections (that is, testcases) must be inherited
in the script for them to be included. Inheriting an imported
section explicitly tells AEtest to include the imported testcase.
Simply importing the testcase into the script does nothing. The
last core concept (import, inspect, and run) is interesting and
requires a little more explanation. When a testcase or testscript
is imported, Python discovers the test sections (classes), which
are instantiated, and then run by AEtest. This might seem
confusing, as you probably expect any test section that is
imported, or included in a testscript, to run in the order
provided, but that is not the case. The discovery process is
similar to how pytest and unittest discover testcases and will be covered in more detail later in the chapter. With the
design features and core concepts covered, let’s move on to the
structure of testscripts.
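Before moving on, the explicit-declaration concept can be sketched with plain classes. In a real testscript these would be aetest.Testcase subclasses; the module and class names below are hypothetical:

```python
# Pretend this class lives in a shared module (e.g. common_tests.py)
# and was imported into the testscript with:
#   from common_tests import PingTestcase
class PingTestcase:  # in a real script: class PingTestcase(aetest.Testcase)
    """A reusable testcase definition."""

# Merely importing PingTestcase does NOT add it to this script's run.
# Explicitly declaring a local class that inherits it is what tells
# AEtest to include the imported testcase:
class LocalPingTestcase(PingTestcase):
    """Locally declared, so the discovery process picks it up."""
```

The local subclass is the explicit declaration; the import alone is invisible to the discovery process.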
Testscript Structure
Testscripts are the foundation to the AEtest test infrastructure.
Testscripts are made up of three main “containers”: Common
Setup, Testcases, and Common Cleanup. Each container is a
Python class with smaller sections (subsections, setup, test,
cleanup) that are defined as methods within the container class.
Each method is decorated with a Python decorator that
identifies the section type. Python decorators modify the
behavior of functions. They can be complex to understand, but they work by passing a function into another function as an argument. The wrapping function returns a modified version, which ultimately modifies the original function's behavior. In AEtest, the different decorators help
identify the execution order and result rollup of each section
based on the decorator. We will dive into these smaller sections
later in this chapter, but let’s begin by reviewing the main
containers.
Common Setup
The Common Setup container is where pyATS connects to
testbed devices, applies base configuration for testing, and
other initialization actions. This container essentially sets the
stage for testing. Common Setup is not a required container in
your testscript, but is highly recommended. If Common Setup is
defined, it will always run first. This is due to the discovery
process performed by AEtest, which we will take a look at later
on. One key feature of Common Setup is that it’s also used to
validate any script inputs (arguments) provided to the
testscript. This allows your testscript to fail fast before going too
far into testing before realizing one of the script inputs is
incorrect. To better organize your code, Common Setup can be
broken down into multiple subsections. Each subsection should
perform a specific task. For example, one subsection for
connecting to testbed devices, another subsection for applying
base configurations, etc. The goal of the Common Setup
container is to house the code that prepares your testbed
devices for testing, whether that be connectivity, configuration,
or operational state.
Subsection
Subsections are smaller actionable sections that make up
Common Setup and Common Cleanup. Subsections can be seen
as independent, as results from a previous subsection do not
affect execution of the current subsection. The user can control
whether to skip, abort, or continue to the next subsection after
an unexpected result. The results of a subsection are rolled up
to the parent section (Common Setup/Common Cleanup).
Example 4-1 shows what a Common Setup section with two
subsections might look like in a testscript.
Example 4-1 Common Setup
from pyats import aetest

class MyCommonSetup(aetest.CommonSetup):
    """Common Setup"""

    @aetest.subsection
    def connect_to_devices(self):
        """Code to connect to testbed devices"""
        pass

    @aetest.subsection
    def apply_base_config(self):
        """Code to configure devices with base/initial config"""
        pass
Testcases
The testcase container is made up of smaller tests and is the
focal point of testscripts. Testcases are designed to be self-
contained, modular, and extensible, which allows network
engineers to build a library of testcases for their network
testing needs. Testcases can have their own setup and cleanup
sections, with an arbitrary number of test sections. Each
testcase has a unique UID, which defaults to the testcase name,
that is used for result reporting and other job artifacts.
Testcases are run as they are defined in the testscript.
Setup Section
The setup section within a testcase is optional. If defined, there
can only be one setup section within a testcase, and it is
automatically run before any other sections. If the setup section
fails, all test sections within the testcase would be “blocked”
from running. The purpose of the setup section is to
configure/enable specific features being tested in a particular
testcase. The setup section result is rolled up to the parent
testcase result.
Test Section
The test section is a basic building block of testcases. Test
sections define the tests ran against the network. Each test
should test for one specific, identifiable objective—don’t try to
stuff too much logic or checks within one test section! They are
run in the order in which they are defined in the testcase. All
test results are rolled up to the parent testcase result.
Cleanup Section
The cleanup section is an optional section, like the setup
section, within a testcase. It removes all configuration and/or
features enabled during the setup and test sections of the
testcase. Whether tests passed or failed, the goal of the cleanup
section is to return the testbed devices back to the same state
they were in before the testcase. This allows the testscript to
continue executing without any lingering issues from previous
testcase manipulation. The cleanup section result is rolled up to
the parent testcase result.
Now that we have discussed testcases, and the individual
sections, let’s take a look at an example. Example 4-2 shows
code scaffolding for a testcase with a setup section, two test
sections, and a cleanup section.
Example 4-2 Testcase
from pyats import aetest

class MyTestcase(aetest.Testcase):
    """Testcase"""

    @aetest.setup
    def testcase_setup(self):
        """Code to setup testbed devices for testing"""
        pass

    @aetest.test
    def test1(self):
        """Code for first test"""
        pass

    @aetest.test
    def test2(self):
        """Code for second test"""
        pass

    @aetest.cleanup
    def testcase_cleanup(self):
        """Code to cleanup config on testbed devices"""
        pass
Common Cleanup
Common Cleanup is much like the cleanup section defined for
testcases, but it applies to the entire testscript. It is not required
in a testscript, but is highly recommended. The Common
Cleanup is always the last section to run, after all testcases, and
removes any configuration and environment changes that
occurred during the testscript run. You can think of it as
reversing the actions that were done in Common Setup. Like
Common Setup, Common Cleanup can be broken down into
subsections, which define specific actions. The goal of the
Common Cleanup section is to reset the state of the testbed back
to what it was before the testscript run. The Common Cleanup result is a combined roll-up of the results of all of its subsections.
To wrap up, take a look at Figure 4-1, which helps visualize the
testscript structure with the different containers and their
corresponding sections.
Figure 4-1 AEtest Testscript Structure
Section Steps
Previously, we discussed how container classes (Common Setup,
Testcases, and Common Cleanup) are broken down into smaller
sections to help better organize code and the overall testing
workflow. However, we can go one step further. Steps allow you
to break down your individual test sections into more granular
actions. Steps are completely optional and should only be used
if a test is larger and it makes sense to break it down further
versus separating it out into smaller, individual tests.
Steps is a reserved parameter in the AEtest infrastructure and
must be included as a test function argument in order to be
used within the test. The Steps object is a Python context
manager and is intended to be used by the with statement.
Example 4-3 shows a simple example of a test section within a
testcase broken down into multiple steps.
Example 4-3 Steps in Test Section
from pyats import aetest

class Testcase(aetest.Testcase):
    """Testcase with steps"""

    @aetest.test
    def test(self, steps):
        """Code for test section and steps"""
        # steps.start() begins the step
        with steps.start("The first step"):
            print("This is step one!")
        with steps.start("The second step"):
            print("This is step two!")
There are a couple of key points to note from the example. The step begins with steps.start(), which contains a description
of the step within the parentheses. The Steps object is really
implemented as two internal classes. The Steps class is
considered the base container class and allows the creation,
reporting, and handling of multiple nested steps. The Step class
inherits the base Steps class and is meant to be used as a
context manager. We can access current step information with
the variable name we set after the as keyword. For example,
steps have an “index” attribute that can be accessed in the step
via step.index. Table 4-1 shows the complete list of attributes
and properties of the Steps and Step objects.
Table 4-1 Steps and Step Attributes
By inheriting the base Steps class, the Step object is able to nest
steps which provides more granularity during testing. Example
4-4 shows nested steps and the associated output, which
expresses how the nested steps show up in the printed results. You'll notice the nested step indices are separated from the parent step using ".", as in Step 1.a or Step 1.b.ii.
Example 4-4 Nested Steps
from pyats import aetest

class Testcase(aetest.Testcase):

    @aetest.test
    def test(self, steps):
        # demonstrating a step with multiple child steps
        with steps.start("test step 1") as step:
            with step.start("test step 1 substep a"):
                pass
            with step.start("test step 1 substep b") as substep:
                with substep.start("test step 1 substep b substep i"):
                    pass
                with substep.start("test step 1 substep b substep ii"):
                    pass
The results for each step roll up to the parent test section. Consider Example 4-4: if Step 1.b.ii were the only step to fail, the entire test section, which includes the other four steps, would have a "Failed" result.
That’s why it’s crucial to only include related test steps within a
single test section. If one fails, the entire section is considered a
failure. The roll-up nature of test results will be covered later in
the chapter. For reporting, a testscript creates a steps report at
the end of each test section that contains steps. During runtime,
the steps report can be accessed with the report() method (steps.report()). For additional detail, the details attribute can be
accessed (steps.details) which will return a list of StepDetail
objects. Each StepDetail object is a named tuple containing the
current step index, step name, and step result. The details
attribute can be useful if you plan to parse and analyze the step
results further. Steps are useful when needing to break down a
lengthy test into smaller, granular chunks.
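Since each StepDetail is a named tuple of step index, name, and result, post-processing the details list is straightforward. The sketch below builds stand-in StepDetail tuples rather than running a real testscript, so the field values are illustrative:

```python
from collections import namedtuple

# Stand-in for the StepDetail named tuple described above
StepDetail = namedtuple("StepDetail", ["index", "name", "result"])

details = [
    StepDetail("1", "The first step", "passed"),
    StepDetail("1.a", "A nested step", "passed"),
    StepDetail("2", "The second step", "failed"),
]

# Collect the names of any failed steps for further analysis
failed = [d.name for d in details if d.result == "failed"]
print(failed)  # -> ['The second step']
```

In a real testscript, the same list comprehension could run against steps.details to flag failing steps for a report or dashboard.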
AEtest Object Model
An object model describes the classes and objects that make up
a piece of software or system. In this section, we are going to go
through the object model of AEtest. This will get into the
implementation details of the classes that make up the testscript
and different testscript sections previously described. If you are
brand new to pyATS and Python, this section may be advanced,
but try to stick with it! There’s a lot of detail that explains how
testscripts and their individual sections are constructed and
why they behave the way they do.
TestScript Class
An AEtest testscript is a standard Python file. The file is considered a testscript because it imports the aetest module from pyats. Testscripts are made up
of three defined sections: Common Setup, Testcases, and
Common Cleanup. During execution, the aetest infrastructure
internally wraps the running testscript into a TestScript class
instance. The important piece of the TestScript instance is that
it stores script arguments as parameters in the instance to use
throughout testing, and all the major sections (Common Setup,
Testcases, Common Cleanup) point to the TestScript instance as
their parent during testing (that is, Testcase.parent).
Container Classes
There are three container classes in AEtest: CommonSetup,
Testcase, and CommonCleanup. Conveniently, each of these
containers was covered in detail at the beginning of this
chapter, but let’s now focus on the implementation details. All
the container classes are inherited from the TestContainer
class, which is a base class. Base classes are internal to pyATS
and will not be covered further in this book, as there is little
reason to access or manipulate base classes. The purpose of
container classes is simply to house other sections. Tests are not
defined directly in a container class. They must be included in a
test section, which is written inside of a container class. Table 4-
2 shows the attributes and properties of a container class.
Table 4-2 Container Class Attributes/Properties
To dive deeper into the technical details, container class instances are callable iterables. Let's take a second to break that
down. A callable allows you to run, or “call”, the code. In
Python, functions and classes can be called, hence they can be
referred to as callables. An iterable is an object that can be
iterated or looped over. Example 4-5 shows how a Common
Setup container instance can be looped over and directly called.
Example 4-5 Container Class
from pyats import aetest

# Define a container and two subsections
class MyCommonSetup(aetest.CommonSetup):

    @aetest.subsection
    def subsection_one(self):
        self.a = 1
        print("hello world")

    @aetest.subsection
    def subsection_two(self):
        assert self.a == 1

# Instantiate the class
common_setup = MyCommonSetup()

# Loop through to see what we get:
for i in common_setup:
    print(i)
Function Classes
Function classes are housed within container classes and are
what carry out the actual tests. Function classes include the
Subsection, SetupSection, TestSection, and CleanupSection
classes. These class names may look familiar to the different
section decorators we discussed previously in Testscript
Structure. Each function class is short-lived. They are
instantiated during runtime and only live as long as the section
runs. Table 4-3 shows the attributes and properties of function
classes.
Table 4-3 Function Class Attributes/Properties
Any class method that has a section decorator is instantiated
with its corresponding function class. For example, a class
method with the decorator @aetest.test instantiates the
TestSection class. This allows the AEtest infrastructure to
manage each section’s reporting context, enable result
tracking, and provide other features specific to test section
methods. Example 4-6 shows the internals of each function
class within a Testcase class instance.
Example 4-6 Function Class
from pyats import aetest

class MyCommonSetup(aetest.CommonSetup):

    # subsection corresponds to Subsection class
    @aetest.subsection
    def subsection_one(self):
        pass

class MyTestcase(aetest.Testcase):

    # setup corresponds to SetupSection class
    @aetest.setup
    def setup(self):
        pass

    # test corresponds to TestSection class
    @aetest.test
    def test_one(self):
        pass

    # cleanup corresponds to CleanupSection class
    @aetest.cleanup
    def cleanup(self):
        pass

# When container instances are iterated, the returned items are function
# class instances
tc = MyTestcase()
for obj in tc:
    print(type(obj))
    print(obj.function)

# Printed results:
# <class 'pyats.aetest.sections.SetupSection'>
# <bound method MyTestcase.setup of <class 'MyTestcase'>>
# <class 'pyats.aetest.sections.TestSection'>
# <bound method MyTestcase.test_one of <class 'MyTestcase'>>
# <class 'pyats.aetest.sections.CleanupSection'>
# <bound method MyTestcase.cleanup of <class 'MyTestcase'>>
Runtime Behavior
The AEtest module provides access to objects and attributes that
are only available during runtime via the runtime object. The
runtime object is available only while the testscript is
executing. Currently, uids and groups are the only two
accessible variables; however, this could change in future
releases.
The runtime object can be useful for querying and possibly
manipulating the execution flow of your testscript. For
example, you may want to ensure only certain testcases or
sections run during testing. This can be done by querying the
testcase UID and/or group. Groups are discussed later in the
chapter, but in short, they allow you to arbitrarily label
testcases so you can better organize which testcases run in a
testscript. Example 4-7 shows a testscript that will only run
testcases that belong to the “L3” group and not the “L2” group.
Example 4-7 AEtest Runtime
from pyats import aetest
from pyats.datastructures.logic import And, Not

class CommonSetup(aetest.CommonSetup):

    # Allow testcases in the "L3" group and not in the "L2" group
    @aetest.subsection
    def validate_l3_testcases(self):
        aetest.runtime.groups = And("L3", Not("L2"))

        # Print runtime groups
        print(aetest.runtime.groups)
This example is straightforward, but I do want to touch on
the logic expressions used to differentiate the group names. You
might notice that at the beginning of the example we imported
the keywords And and Not from pyats.datastructures.logic. This
is another hidden gem within pyATS. The logic module allows
you to easily express logic tests with English keywords. The
logic module also allows us to use callables that accept
arguments to perform additional logic before returning a value
that is used for truth testing.
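To make the behavior concrete, the following standalone sketch models And and Not as simple callables. This is an illustration of the concept only, not the implementation of the pyats.datastructures.logic classes, which also support regex matching and richer composition:

```python
# Simplified stand-ins for pyats.datastructures.logic And/Not
# (illustrative only; not the real pyATS classes).
class Not:
    def __init__(self, value):
        self.value = value

    def __call__(self, *items):
        # True when the wrapped value does not match any given item
        return self.value not in items

class And:
    def __init__(self, *conditions):
        self.conditions = conditions

    def __call__(self, *items):
        # True only when every condition matches the given items
        return all(
            cond(*items) if callable(cond) else cond in items
            for cond in self.conditions
        )

# A testcase in group "L3" matches; one in groups "L3" and "L2" does not
l3_only = And("L3", Not("L2"))
print(l3_only("L3"))        # True
print(l3_only("L3", "L2"))  # False
```

When aetest.runtime.groups is set to such an expression, AEtest evaluates it against each testcase's group labels to decide whether the testcase runs.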
Self
In Python, when a class is instantiated, it has the ability to
access attributes and methods of the class through the self
keyword. Self is a convention, not a rule, in Python; however,
there aren’t many instances where you will see any other
keyword used to represent an instance of a Python class. In
AEtest, container classes (Common Setup, Testcases, and
Common Cleanup) are all Python classes. This allows you to get
and set class-level attributes during testing. For example, let’s
say you want to use a value from one test section in another,
such as the MAC address table. You might have multiple tests
that need the results of the show mac address-table command.
Instead of running the command and collecting the results in
each individual test, you can run it once and save it as a class
attribute using self. Example 4-8 shows a testcase that executes
the show mac address-table command, collects the output, and
uses that output in a later test.
Example 4-8 Self Testcase
from pyats import aetest

class L2Testcase(aetest.Testcase):

    @aetest.setup
    def collect_l2_info(self, device):
        # Collect MAC address table entries
        self.mac_table = device.execute("show mac address-table")

    @aetest.test
    def confirm_mac_addresses(self):
        # Confirm the "important" MAC address is in the table
        important_mac = "0123.4567.0987"
        assert important_mac in self.mac_table
Obviously, this is not the best way to go about checking for a
particular MAC address in the MAC address table, but you can
clearly see how the MAC address table output is collected in the
setup section and used in a separate test section within the
testcase. Being able to get and set class attributes using self is
a powerful tool, allowing testcases to run more efficiently and
removing redundancy.
Parent
At the beginning of the chapter, we touched on the testscript
structure and the concept of parent-child object relationships.
Let’s dive a bit further into that. Besides the
TestScript class, all other classes in the AEtest object model have
a parent class. Figure 4-2 shows a graphical representation of
the parent-child relationships among the different AEtest class
objects.
Figure 4-2 Parent-Child Object Relationships
The parent attribute is accessible during testing by using
self.parent within the container/section. It’s not used often, but
it may be useful if you are trying to access a parent’s
parameters; for example, a Testcase class accessing the
parameters assigned to the TestScript class.
Section Ordering
Testscripts have a logical, reproducible order in which
container classes (Common Setup, Testcases, and Common
Cleanup) and sections within the container classes (setup, test,
cleanup) are discovered and executed. This ensures that every
testscript runs the same way, regardless of the execution
environment. Container classes execute in the following order:
CommonSetup always runs first
Testcases run in the order they appear in the script
CommonCleanup always runs last
Within a container class, there can be multiple child methods:
Setup, subsection, test, and cleanup. The child methods execute
in the following order:
Setup always runs first (if defined)
Subsection and test methods run as they appear in the script
If the parent class is inherited, the parent class’s subsections
and tests run first.
Cleanup always runs last
The execution order might seem apparent when looking at
examples in this book or the library’s documentation, but the
guaranteed ordering provides uniformity to testscript execution
and leaves nothing to chance.
Test Results
Now that we know about the structure of a testscript, all the
different containers and sections, the order they are executed,
and even the runtime behavior, let’s talk about test results. Test
results may seem straightforward (the test either passes or
fails), but that’s not all. Pass, fail, and error may work for
traditional software testing frameworks, but we are dealing
with network infrastructure. As network engineers, we all
know the engineering slogan “it depends.” To accommodate
potential unrelated network failures, poor design, and so on,
AEtest adds some additional result types, such as Skipped and
Errored, along with exception handling, to better describe and
understand test results. Along with the test results themselves,
understanding how results roll up and affect test reporting
is crucial.
Result Objects
Before we dive deeper into test results, let’s list the
different result objects with a short description of each:
Passed: Test was successful
Failed: Test was not successful
Aborted: Something started, but did not finish
Blocked: A test dependency was not met, and the test could
not start
Skipped: A test was not executed and omitted
Errored: A mistake or unexpected exception occurred. The
difference from Failed is that a Failed test ran but did not meet
expectations, whereas Errored indicates something went wrong
during testing and the test could not be completed.
Passx: Short for “pass with exception”. Essentially remarks a
Failed result with Passed based on an expected exception.
Result Behavior
By default, all test results are Passed. A testcase could be
empty, and as long as no exceptions are thrown, it will pass.
This behavior is standard among Python testing frameworks;
however, when exceptions are thrown, AEtest has the
capability to catch them and assign a result to the
corresponding section. AEtest can catch and handle the
following exceptions:
AssertionError: A built-in Python exception raised when an
assert statement fails. AEtest will catch this exception and
assign a Failed result to the test section.
Exception: The base Exception class in Python. All built-in
Python exceptions are derived from this class. AEtest will catch
any exception and assign an Errored result, indicating an
unhandled error to the developer. This should allow the
developer to quickly locate any unhandled errors during testing
and fix them or catch them properly.
To avoid having these exceptions caught by AEtest, you should
use try...except blocks to catch them and handle them
appropriately. For example, if you expect that a device or set of
devices will not parse a particular show command because it
produces no output, you should catch the
SchemaEmptyParserError exception and handle it by either
skipping or blocking the test from running for those devices.
Otherwise, AEtest will catch the exception for you and assign
an Errored result to the test section.
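The default catch-and-assign behavior described above can be modeled in plain Python. This is a simplified sketch of the idea, not AEtest’s actual internal dispatcher:

```python
# Simplified model of how AEtest maps exceptions to section results
# (illustrative only; the real dispatcher is internal to pyATS).
def run_section(section_func):
    try:
        section_func()
    except AssertionError:
        # A failed assert means the test ran but did not meet expectations
        return "failed"
    except Exception:
        # Any other unhandled exception indicates a broken test
        return "errored"
    # No exception raised: default result
    return "passed"

def good_test():
    assert 1 + 1 == 2

def failing_test():
    assert "10.0.0.1" in ["192.168.1.1"]

def broken_test():
    raise ValueError("unexpected parser output")

print(run_section(good_test))     # passed
print(run_section(failing_test))  # failed
print(run_section(broken_test))   # errored
```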
To be more granular and assign explicit results to a test section,
you simply call the result object and provide a reason for the
result. Example 4-9 shows three tests marked with Passed,
Skipped, and Errored.
Example 4-9 Test Result Assignment
from pyats import aetest

class ResultTestcase(aetest.Testcase):

    @aetest.test
    def subsection_that_passes(self):
        self.passed("This test passed!")

    @aetest.test
    def subsection_that_skips(self):
        self.skipped("This test skipped!")

    @aetest.test
    def subsection_that_is_errored(self):
        self.errored("This test errored!")
Along with a reason that is printed with the result, results can
also include a few more optional arguments:
Reason: Describes to the user why a test result occurred
(shown in Example 4-9)
Goto: List of sections to “go to” after this section. This is
essentially a one-way ticket to another test section in the
testscript.
From_exception: Accepts an exception object and will add the
traceback to the reason
Data: A dictionary of data relevant to the result. This data is
passed to and stored with the Reporter object for further
processing.
Once a result is determined for the current section, AEtest
moves on to the next test section. Any code in a test section
that appears after the result is determined will not be
executed.
Interaction Results
As a network engineer, you might still want or require some
control of the testing that occurs. “What if I need to move patch
cables during testing? I don’t want to sit in a testing lab all day
waiting around to pause testing, move one silly patch cable, and
click a button to continue testing.” Automation can’t solve all
our problems, but it can take them into consideration! AEtest
offers the WebInteraction class, which pauses test execution
and can notify a user (via email) that input is required on a
web page. The web page is a form where the user submits a
test result for the test(s) that required intervention. You can
even customize the email body and HTML web page using
Jinja2 templates to really make it your own!
Result Rollup
Result roll-up combines the results of many child sections
into one summary result. The simplest example is a testcase
with two test sections: if one of the two test sections fails, the
testcase fails. The roll-up concept is easy to understand but can
become complex when dealing with many testcases, multiple
test sections, and a few steps in each test section. Table 4-4
shows a lookup table for the summary result when combining
multiple results. This table can be used to check which result
“wins” over the other. For example, let’s say you have a testcase
with three test section results, in order: Passed, Skipped, and
Passx. Starting from the top, compare the Passed and Skipped
results; the table shows the summary result for the testcase
would be Passed. Next, compare the Passed and Passx results.
This is interesting because now the summary result changes to
Passx instead of Passed. That is because the summary result is
meant to inform the user of any negative results or exceptions
caught during testing, as these issues would need to be fixed. If
the summary result had been Passed, we would never have
seen the test section that passed with an exception (the Passx
result).
Table 4-4 Result Roll-up Table
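As a rough mental model of the roll-up logic, the summary result can be approximated by a severity ordering in which the more “negative” result wins. The ordering below is an assumption made for illustration; Table 4-4 remains the authoritative reference for every combination:

```python
# Simplified roll-up model: the most severe result wins.
# Assumption: this ordering approximates Table 4-4; consult the
# table for authoritative edge cases.
SEVERITY = ["errored", "aborted", "failed", "blocked", "passx", "passed", "skipped"]

def rollup(results):
    # Return the highest-severity (lowest index) result present
    return min(results, key=SEVERITY.index)

print(rollup(["passed", "skipped"]))           # passed
print(rollup(["passed", "skipped", "passx"]))  # passx
```

This reproduces the walkthrough above: Passed combined with Skipped stays Passed, and adding Passx rolls the summary up to Passx.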
Processors
Processors are functions that are executed before or after a
given section. Processors are optional in AEtest, but can be used
to perform helpful checks before and after testing, such as
collecting test environment information, taking snapshots,
validating section results, or executing debug commands and
collecting dump files. The possibilities are endless, as they are
simply Python functions. In the following sections, we will
cover the different processor types, including how to define and
use them in a testscript.
Processor Types
There are three types of processors:
Pre-Processors
Post-Processors
Exception-Processors
Pre-processors are executed before a given section and may be
used to take snapshots of the current environment or
determine whether a test should run. Post-processors are
executed after a section and may be used to validate the test
results or collect debug information and dump files. Exception-
processors are kicked off when an exception occurs. They may
be used to collect debug information, or to suppress exceptions
raised within a section and assign a proper result for the
section. Because these processors are just Python functions,
you can get creative with the logic and data collected for each
processor type.
Processor Definition and Arguments
All the processor types (pre/post/exception) can be applied to
test containers (Common Setup, Testcases, Common Cleanup)
and test sections (subsections, setup, test, cleanup). Each section
may have one or more processors, in the form of a list, that
execute in the order they appear. A processor can be applied to
a test container or section with the @aetest.processors
decorator. Example 4-10 shows pre, post, and exception
processors applied to a testcase with two tests.
Example 4-10 Testcase Processors
from pyats import aetest

# Print section uid
def print_uid(section):
    print("current section: ", section.uid)

# Print section result
def print_result(section):
    print("section result: ", section.result)

# Print the exception message and suppress the exception
def print_exception_message(section, exc_type, exc_value, exc_traceback):
    print("exception : ", exc_type, exc_value)
    return True

# Use the above functions as pre/post/exception processors
# pre-processor : print_uid
# post-processor : print_result
# exception-processor : print_exception_message
@aetest.processors(pre = [print_uid],
                   post = [print_result],
                   exception = [print_exception_message])
class Testcase(aetest.Testcase):

    @aetest.test
    def test(self):
        print("First test section...")

    @aetest.test
    def testException(self):
        raise Exception("Exception raised during testing")
Processors may have parameters propagated to them via a
datafile or parent containers/sections. Processor arguments
must have the same name as the parameters being passed in.
Some default parameters included are section, processor, and
steps. Exception processors have additional default parameters,
which include the exception type (exc_type), exception value
(exc_value), and exception traceback (exc_traceback). These
default parameters can be very powerful when determining
test section results or troubleshooting issues during testing.
Example 4-10 shows the different attributes of the section
parameter (uid and result).
Context Processors
Context processors are more advanced processors that act
similarly to Python context managers, in the sense that they
handle the pre-, post-, and exception-processing logic within a
single class instead of separate Python functions. When creating
a context processor, the pre-processor actions are defined in the
__enter__ method. Actions defined in the __exit__ method
handle the post- and exception-processor logic. A context
processor class has the results API available, so calling
self.failed() within the context processor class is equivalent to
processor.failed() and fails the processor.
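Because context processors mirror Python’s context-manager protocol, a plain context manager illustrates the mechanics. The class below is a generic sketch (the name SectionWrapper is hypothetical, not the pyATS context processor API):

```python
# Generic context-manager sketch of pre/post/exception handling
# (hypothetical class; not the actual pyATS context processor API).
class SectionWrapper:
    def __init__(self, name):
        self.name = name
        self.events = []

    def __enter__(self):
        # Pre-processor logic runs on entry
        self.events.append(f"pre: {self.name}")
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        if exc_type is not None:
            # Exception-processor logic; returning True suppresses it
            self.events.append(f"exception: {exc_type.__name__}")
            return True
        # Post-processor logic runs on a clean exit
        self.events.append(f"post: {self.name}")
        return False

wrapper = SectionWrapper("test_one")
with wrapper:
    raise RuntimeError("section blew up")
print(wrapper.events)  # ['pre: test_one', 'exception: RuntimeError']
```

Just as __exit__ can suppress the RuntimeError here, a context processor can suppress a section’s exception and assign a proper result in its place.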
Global Processors
Global processors are processors that run automatically before
and after each test container and section defined in a testscript.
They do not require the @aetest.processors decorator to be
applied to each container/section. To use global processors in
your testscript, you must define a Python dictionary in your
testscript called global_processors. In this dictionary, you
specify the following keys to represent the different processor
types: 'pre', 'post', and 'exception'. A list of processors can be
specified as the value for each processor type key. Example 4-11
shows how global processors are defined in a testscript.
Example 4-11 Global Processors
from pyats import aetest

# Print section uid
def print_uid(section):
    print("current section: ", section.uid)

# Print section result
def print_result(section):
    print("section result: ", section.result)

# Print the exception message and suppress the exception
def print_exception_message(section, exc_type, exc_value, exc_traceback):
    print("exception : ", exc_type, exc_value)
    return True

# Use the above functions to define global pre/post/exception processors
# global pre-processor : print_uid
# global post-processor : print_result
# global exception-processor : print_exception_message
global_processors = {
    "pre": [print_uid,],
    "post": [print_result,],
    "exception": [print_exception_message,],
}

class Testcase(aetest.Testcase):

    @aetest.test
    def test(self):
        print('running testcase test section')

<rest of testscript omitted for brevity>
Processor Results
Like test sections, processors have their own result and can be
marked as passed, failed, skipped, etc. The result will roll-up to
the parent object, the same as any other child result; however,
the processor result can directly affect the parent test section’s
result. Because processors have access to the section object via
the default parameters provided to processors, a processor can
alter a section’s result by calling the section results API. For
example, to fail a test section, simply call section.failed() in a
processor function. Once the processor is run, it will fail the
parent test section. For pre-processors, this will block the
execution of the test section and set the result as failed. For
post-processors, this will override the existing results of the test
section and mark it as failed.
Data-Driven Testing
AEtest testscripts and testcases are intended to be driven
dynamically by data. Dynamic data that alters and affects the
behavior of testscripts and testcases is called parameters. Test
parameters are meant to be dynamic in nature and can be
provided to testscripts in the form of input arguments or
generated during runtime. In this section, we will go over test
parameter relationships, their properties, calling parameters,
parametrization, and reserved parameters. Beyond test
parameters, we will also take a look at datafile inputs and
looping sections. If it isn’t already apparent, the focus in these
upcoming sections is how to dynamically affect the execution
and runtime behavior of your testscript. By utilizing these
concepts, your testscript organization and testing logic will
greatly mature.
Test Parameters
Parameters are variables used to access input data (arguments)
in Python functions and methods. In the context of AEtest, a
parameter will take the value of a testbed argument, which is
passed to a testscript, to instruct the testscript which testbed to
connect to for testing. If the testbed argument were not
available, a testscript would have to be hardcoded with the
testbed name, which eliminates the dynamic nature of testing
and makes it impossible to scale. This simple example, which
outlines one way to pass data to AEtest through script
arguments, allows you to understand the importance of test
parameters. In the following sections, we are going to take a
look at the relationships, properties, and ways to call different
test parameters during testing.
Parameter Relationships
Test parameters are relative to the test section. The test
parameters for a given test section are a combination of local
parameters in the section and any parent parameters. Going
back to the AEtest object model and how the container and
sections relate to one another, parameters are inherited the
same way. Figure 4-3 shows a visual of the parameter
relationship model and how parameters are inherited from
their parent container/sections.
Figure 4-3 Parameter Relationship Model
You can see how the list of overall parameters continues to
grow as you move from the TestScript parameters to the
Testcase parameters to the TestSection parameters, so that
parameters defined at the TestScript level are made available at
the TestSection level. Another key point is that test parameters
can be overwritten. For example, param_a = 1 at the TestScript
level was changed to param_a = 100 at the Testcase level and is
presented as such to the TestSection. This can be key if you’re
planning to implement a parameter that you expect will change
during testing. Initialize the parameter at the highest possible
level to make it available throughout testing, and alter it as
needed.
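This inheritance-with-override behavior resembles Python’s collections.ChainMap, which makes for a rough standalone model of Figure 4-3 (an analogy only, not how AEtest stores parameters internally):

```python
from collections import ChainMap

# Rough model of parameter inheritance (analogy only; AEtest uses
# its own parameter-map structure, not ChainMap).
testscript_params = {"param_a": 1, "param_b": 2}
testcase_params = {"param_a": 100}   # overrides the script level
testsection_params = {"param_c": 3}  # adds a local parameter

# Lookups search the section first, then the testcase, then the script
effective = ChainMap(testsection_params, testcase_params, testscript_params)
print(effective["param_a"])  # 100 (testcase override wins)
print(effective["param_b"])  # 2   (inherited from the testscript)
print(effective["param_c"])  # 3   (local to the section)
```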
Parameter Properties
Each top-level object (TestScript and Testcase) in AEtest has
a special parameters property that represents the different test
parameters for that particular object. The parameters property
is a Python dictionary that stores the test parameters as key-
value pairs. They can store default values for the test
parameters, which can be changed by accessing the test
parameter during runtime. Function classes such as Subsection,
SetupSection, TestSection, and CleanupSection have a
parameters property as well, but these class instances only exist
briefly during runtime, so we cannot statically set a dictionary
of default parameters for these sections. It’s recommended to
consolidate test parameters for these sections and include them
in the parent TestContainer parameters property.
Parameter Types
There isn’t an official list of parameter “types”, but it’s
important to understand the different ways in which a
parameter can be included in testing. Test parameters can be
added to a testscript as script arguments, function arguments,
or callables. Let’s take a look at each one.
Script arguments are any arguments passed directly to a
testscript before startup. This includes arguments passed by a
jobfile, command-line arguments, or even updating the
parameters property of a TestScript object within the testscript
code with a dictionary of parameters. Example 4-12 shows a
simple example of how testscript parameters can be updated
within code.
Example 4-12 TestScript Test Parameters
# The following parameters were already defined in the testscript
parameters = {
    "arg_a": 1,
    "arg_b": 2,
}

# The following inputs were passed as arguments to the testscript
script_arguments = {
    "arg_a": 100,
    "arg_c": 3,
}

# The TestScript parameters would be built as follows:
testscript.parameters = parameters
testscript.parameters.update(script_arguments)

# Result - you'll notice that arg_a was updated
testscript.parameters
# {"arg_a": 100,
#  "arg_b": 2,
#  "arg_c": 3}
Parameters can also be passed as function arguments. Input
parameters passed to the testscript as script arguments can be
explicitly passed to a function as an argument. This is due to the
parent-child object model in AEtest. During runtime, all
function arguments are filled with the value of the parameter
whose name matches the argument name. It is preferred to
explicitly pass each parameter as a function argument, as it
makes the code easier to understand
and allows you to call each function with different arguments
during testing or debugging. Example 4-13 shows how
parameters can be passed down to child containers/sections
and changed in a testscript.
Example 4-13 Parameters – Function Arguments
from pyats import aetest

# Script-level parameters
parameters = {
    "param_A": 1,
    "param_B": dict(),
}

class Testcase(aetest.Testcase):

    # "param_B" is passed to the setup section as a function argument
    @aetest.setup
    def setup(self, param_B):
        # param_B is a dictionary and can be changed in place.
        # Any changes persist throughout the testcase.
        param_B['new_key'] = "a key added during setup"

    # "param_A" and "param_B" are passed to the test section
    @aetest.test
    def test_one(self, param_A, param_B):
        print(param_A)
        # 1
        print(param_B)
        # {'new_key': 'a key added during setup'}
The last way a test parameter can be provided to a testscript is
via a callable. As mentioned earlier in the chapter, functions
and classes are considered callables. In the context of AEtest,
many times we are dealing with functions when talking about
callables. A callable parameter must evaluate to True to be
valid, which means the callable can’t return a boolean of False,
None, empty strings, or a numeric value of 0. Callables are
passed as function arguments and are “called” during runtime.
The return value of the callable is used as the actual parameter.
The one limitation to callables is that they cannot have
arguments of their own, as AEtest will not pass any arguments
to the callable.
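The call-at-runtime behavior can be sketched in plain Python. This is a simplified model of the concept, not AEtest’s internal parameter resolution:

```python
# Simplified model: callables in a parameters dict are invoked at
# runtime, and their return value becomes the actual parameter value.
def current_vlan_list():
    # Takes no arguments; per the text, AEtest passes none to the callable
    return ["VLAN10", "VLAN20"]

parameters = {
    "timeout": 30,               # plain value, used as-is
    "vlans": current_vlan_list,  # callable, resolved at runtime
}

resolved = {
    name: value() if callable(value) else value
    for name, value in parameters.items()
}
print(resolved)  # {'timeout': 30, 'vlans': ['VLAN10', 'VLAN20']}
```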
Parameter Parametrization
Parametrized parameters are similar to callables but allow
you to create “smarter” functions by introducing more dynamic
parameter values. Parametrized functions are declared using
the @aetest.parameters.parametrize decorator. Unlike normal
callables, parametrized functions can accept arguments. Along
with allowing
arguments, a special argument named section can be passed to
a parametrized function, which allows you to access the current
section object. This includes access to the current section’s
properties, such as uid and result. Having access to the
current section object allows you to dynamically change the
return value based on the parent section’s result, or in
combination with the test parameters available to the current
section.
Reserved Parameters
AEtest has reserved parameters that are generated during
runtime. They are not available when accessing the parameters
property, but they can be used to access objects internal to
AEtest. They are only accessible if their name is provided as a
keyword argument to a test section. Reserved parameters take
precedence over normal parameters if there is a normal
parameter with the same name. The purpose of reserved
parameters is to provide a mechanism for engineers to access
and dive deeper into the internals of AEtest within a testscript.
It’s highly recommended to only access the reserved
parameters if required, and to never modify a reserved
parameter as it could lead to unexpected behavior.
Datafile Input
Up to this point, testscripts are defined as static files with
multiple test containers/sections that can be altered by test
parameters. However, what if we want more flexibility and the
ability to dynamically update testscript without having to
manually change the code? Datafiles are YAML input files that
can be passed to AEtest that allow you to dynamically update
testscript parameter values. They are completely optional but
allow you to easily change testscript values without having to
modify the original testscript code. With datafiles being written
in YAML, they are easily readable, even by non-programmers,
which empowers users of pyATS to easily modify testscript
values and feel confident doing so.
Datafile inputs update the testscript’s module parameters
directly before runtime. Only container classes can
be updated via datafiles: CommonSetup, Testcases, and
CommonCleanup. However, due to the parent-child relationship
of these container classes to the individual test sections, it
allows the datafile values to be used in the individual test
sections. The common and testcase container classes defined in
the testscript must have matching names in the datafile. For
example, a testcase defined in the testscript as class
BGPTestcase that requires dynamic values from the datafile
must include a BGPTestcase: section in the datafile. If a section
is looped, which is discussed in the next section, only the
base class attributes are changed. If values must change on n-
iterations, you must pass those values as loop parameters. The
last key point to datafiles is that only one can be provided to a
testscript; however, you may extend another datafile. Much like
Jinja2 template inheritance, a base datafile can be extended to
create more modular datafiles for testing. The extensibility
reduces the amount of redundant datafiles that need to be
created and can help promote others to build their own
datafiles by simply extending a core datafile.
There is a defined datafile schema in the pyATS documentation,
but for brevity, Example 4-14 shows two datafiles, base.yaml
and datafile.yaml, that can be passed to a testscript (see
Example 4-15). It’s assumed that a pyATS testbed file
(testbed.yaml) exists in the same directory.
Example 4-14 base.yaml and datafile.yaml datafiles
---
# base.yaml

# testscript parameters
parameters:
  cloudflare_dns: 1.1.1.1
...

---
# datafile.yaml
extends: base.yaml

# testscript parameters
parameters:
  google_dns: 8.8.8.8
  # Adds to the existing Cloudflare DNS parameter from base.yaml

testcases:
  BGPTestcase:
    # testcase uid
    uid: routing_test_1
    # list of groups that testcase belongs to
    groups: [routing]
    # testcase parameters
    parameters:
      local_asn: 65000
      remote_asn: 65001
    # testcase class variable
    expected_routes: 5

  ExternalConnectivity:
    # testcase uid
    uid: ext_dns_test
    # list of groups that testcase belongs to
    groups: [routing]
    # testcase parameters
    parameters:
      alt_google_dns: 8.8.4.4
...
Example 4-15 Testscript with Datafile Input
import logging

from genie.utils import Dq
from pyats import aetest
from unicon.core.errors import ConnectionError

logger = logging.getLogger(__name__)
logger.setLevel("INFO")

class CommonSetup(aetest.CommonSetup):

    @aetest.subsection
    def connect_to_devices(self, testbed):
        """Connect to all testbed devices"""
        try:
            testbed.connect()
        except ConnectionError:
            self.failed(f"Could not connect to all devices in {testbed.name}")
        # Print log message confirming all devices are connected
        logger.info(f"Connected to all devices in {testbed.name}")

class BGPTestcase(aetest.Testcase):
    """Test BGP operational state"""

    @aetest.test
    def check_bgp_routes(self, testbed):
        """Check number of BGP routes equals the expected number from the
        datafile."""
        # Print all class variables (as a Python dictionary)
        print(f"All class variables: {vars(BGPTestcase)}")
        # Example Output
        # {'__module__': '__main__', 'check_bgp_routes': <function
        # BGPTestcase.check_bgp_routes at 0x10bf30430>, 'parameters':
        # {'local_asn': 65000, 'remote_asn': 65001},
        # '__doc__': None, 'source': <pyats.aetest.base object>,
        # '__uid__': 'routing_test_1', 'groups': ['routing'],
        # 'expected_routes': 5,
        # 'uid': <property object at 0x108195b30>}

        # Print available test parameters (provided by datafile) -
        # TestScript-level and Testcase-level parameters
        print(f"Available testcase parameters: {self.parameters}")
        # Example Output
        # ParameterMap({'local_asn': 65000, 'remote_asn': 65001},
        # {'cloudflare_dns': '1.1.1.1', 'google_dns': '8.8.8.8',
        # 'testbed': <Testbed object at 0x108cfa490>})

        # Parse 'show ip route bgp' command output
        r1_bgp_routes = testbed.devices["cat8k-rt1"].parse("show ip route bgp")
        r2_bgp_routes = testbed.devices["cat8k-rt2"].parse("show ip route bgp")
        # Capture the number of BGP routes in the routing table using the Dq
        # library
        self.r1_route_count = (
            len(Dq(r1_bgp_routes).contains("routes").get_values("routes")))
        self.r2_route_count = (
            len(Dq(r2_bgp_routes).contains("routes").get_values("routes")))
        # Confirm number of BGP routes equals the expected number of routes
        # provided as a class variable in the datafile
        if self.r1_route_count == self.expected_routes:
            self.passed("There were the correct number of BGP routes on "
                        "router 1.")
        else:
            self.failed(f"Router 1 does not have the expected number of BGP "
                        f"routes ({self.expected_routes}). Instead, there are "
                        f"{self.r1_route_count} routes.")
        if self.r2_route_count == self.expected_routes:
            self.passed("There were the correct number of BGP routes on "
                        "router 2.")
        else:
            self.failed(f"Router 2 does not have the expected number of BGP "
                        f"routes ({self.expected_routes}). Instead, there are "
                        f"{self.r2_route_count} routes.")

class ExternalConnectivity(aetest.Testcase):
    """Test external connectivity by pinging external DNS servers (Google,
    Cloudflare) using the pyATS device Ping API.

    There are no pass/fail conditions in this testcase; it demonstrates
    the use of datafile input parameters. All tests should pass.
    """

    @aetest.test
    def ping_cloudflare_dns(self, testbed):
        """Ping Cloudflare DNS servers"""
        # Print all TestScript-level parameters - you'll notice BGPTestcase
        # parameters are not included, as they are Testcase-level parameters.
        # You'll also notice the addition of the 'alt_google_dns'
        # Testcase-level parameter
        print(self.parameters)
        # Example Output
        # ParameterMap({'alt_google_dns': '8.8.4.4'}, {'cloudflare_dns':
        # '1.1.1.1', 'google_dns': '8.8.8.8',
        # 'testbed': <Testbed object at 0x108cfa490>})

        # Use 'cloudflare_dns' TestScript-level parameter to ping Cloudflare
        # DNS (found in base.yaml)
        testbed.devices["cat8k-rt1"].api.ping(self.parameters["cloudflare_dns"])

    @aetest.test
    def ping_google_dns(self, testbed):
        """Ping Google DNS servers"""
        # Use 'google_dns' TestScript-level parameter and 'alt_google_dns'
        # Testcase parameter to ping Google DNS servers (both found in the
        # datafiles)
        testbed.devices["cat8k-rt1"].api.ping(self.parameters["google_dns"])
        testbed.devices["cat8k-rt1"].api.ping(self.parameters["alt_google_dns"])

class CommonCleanup(aetest.CommonCleanup):

    @aetest.subsection
    def disconnect_from_devices(self, testbed):
        """Disconnect from all devices"""
        testbed.disconnect()
        logger.info(f"Disconnected from all devices in {testbed.name}")

if __name__ == "__main__":
    from pyats.topology.loader import load

    # Load testbed object from testbed file
    tb = load("../testbed.yaml")

    # Run with standalone execution
    aetest.main(datafile="datafile.yaml", testbed=tb)
The datafile in Example 4-14 shows a few different values that
can be changed, including the testcase UID, groups, test
parameters, and class-level variables. The class variables are
accessible via the self keyword within the respective testcase.
For example, in the BGPTestcase class definitions, you can
access the expected_routes class variable via
self.expected_routes. It’s important to verify the number of
expected routes being received from BGP, because receiving too
few or too many routes can lead to abnormal routing in your
network. For a service provider, the impact can be even more
detrimental, potentially causing outages across multiple
customers.
Datafiles are extremely powerful and allow pyATS testscripts to
be more modular and dynamic in nature without altering any
testscript code. The only requirement is the ability to read and
update a YAML-based file. Handing the keys of the testing
framework to the engineer testing their changes increases the
adaptability of, and confidence in, test-driven network
automation.
Looping Sections
AEtest provides the ability to loop over test sections with
different parameters for each loop iteration. This is another
feature of AEtest, along with datafiles, that allows the testing
infrastructure to be dynamic. Test section code can be reused
without having to edit the code. Only certain test sections can
be looped: subsections within CommonSetup/CommonCleanup,
Testcases, and test sections within Testcases.
Defining Loops
Sections that are decorated with the @aetest.loop decorator are
marked for looping. The looping parameters are provided as
decorator arguments. During runtime, if a test section is
marked for looping, an instance of the test section is created for
each loop iteration. As a convenience, you may also use the
following decorators on the subsection and test sections,
respectively: @aetest.subsection.loop and @aetest.test.loop.
These decorators essentially combine the two decorators you
normally would have to mark each section—@aetest.
{subsection | test} and @aetest.loop.
Loop Parameters
Looping over a test section is only useful if different test
parameters are provided. These parameters are passed in as
arguments to the @aetest.loop decorator. The test parameters
are propagated to the test section as local parameters. There are
two methods of providing loop parameters:
1. Providing a list of parameters and another list of parameter
values (uses args and argvs).
2. Providing each parameter as a keyword argument and a list
of the parameter values as the value to the argument.
Neither method is recommended over the other; both produce the
same results, and the specific use case determines which one to
use. Example 4-16 shows
both methods being used on two different test sections.
Example 4-16 Looping Parameters
from pyats import aetest
class Testcase(aetest.Testcase):
    # Method 1 - args and argvs - the positional values map to the argument
    # names
@aetest.test.loop(args=('a', 'b', 'c'),
argvs=((1, 2, 3),
(4, 5, 6)))
def test_one(self, a, b, c):
print("a=%s, b=%s, c=%s" % (a, b, c))
# Method 2 – keyword args – each argument in
independently
@aetest.test.loop(a=[1,4],
b=[2,5],
c=[3,6])
def test_two(self, a, b, c):
print("a=%s, b=%s, c=%s" % (a, b, c))
# OUTPUT GENERATED IF TESTCASE IS EXECUTED:
# testcase output:
# a=1, b=2, c=3
# a=4, b=5, c=6
# a=1, b=2, c=3
# a=4, b=5, c=6
#
# SECTIONS/TESTCASES
# ----------------------------------------------
# .
# `-- Testcase
# |-- test_one[a=1,b=2,c=3]
# |-- test_one[a=4,b=5,c=6]
# |-- test_two[a=1,b=2,c=3]
# `-- test_two[a=4,b=5,c=6]
Along with test parameters, you may also pass in alternative
UIDs to identify each looped section. When using loop
parameters, the number of iterations depends on a couple
different factors. If alternative UIDs are provided, the number
of iterations is equal to the number of UIDs provided. If there
are more loop parameter values than UIDs, the extra values are
discarded. If there aren’t any alternative UIDs provided, the
number of iterations is equal to the number of loop parameter
values.
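As a plain-Python analogy (independent of pyATS itself), this pairing-and-discard behavior resembles the built-in zip(), which stops at the shorter input; the UID and value names below are made up for illustration:

```python
# Hypothetical loop inputs: two alternative UIDs, three parameter values
uids = ["uid_first", "uid_second"]
values = [10, 20, 30]

# Pairing UIDs with values mirrors how the iteration count is derived:
# with 2 UIDs and 3 values, only 2 iterations are created and the
# extra value (30) is discarded
iterations = list(zip(uids, values))
print(iterations)  # [('uid_first', 10), ('uid_second', 20)]
```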
Loop parameters can also be a callable, iterable, or a generator.
If the argument value is a callable, the return value from the
callable is used as the loop argument value. If the argument
value is an iterable or generator, only one element is used at a
time for each loop iteration until the iterable or generator is
exhausted. Example 4-17 shows a callable (function) and
generator being used as loop parameter values.
Example 4-17 Loop Parameters: Callable and Generator
from pyats import aetest
# callable function
def my_function():
value = [1, 2, 3]
print("returning %s" % value)
return value
# generator
def my_generator():
for i in [4, 5, 6]:
print("generating %s" % i)
yield i
class Testcase(aetest.Testcase):
    # creating a test section with parameter "a" and a callable as its value
    # note that the function object is passed, not its return value
@aetest.test.loop(a=my_function)
def test_one(self, a):
print("a = %s" % a)
    # creating a test section with parameter "b" and a generator as its value
    # note that the generator is the result of calling the function, not
    # the function itself.
@aetest.test.loop(b=my_generator())
def test_two(self, b):
print("b = %s" % b)
# OUTPUT GENERATED IF TESTCASE IS EXECUTED:
# returning [1, 2, 3]
# a = 1
# a = 2
# a = 3
# generating 4
# b = 4
# generating 5
# b = 5
# generating 6
# b = 6
You might notice that the callable is run and its return value
captured before the looped sections are created, while the
generator is only queried when the next section needs to be
created. This distinction matters: because the generator is
queried before each test iteration, it can dynamically generate
loop iterations based on the current test environment instead of
providing one fixed return value before iteration begins.
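The timing difference can be reproduced in plain Python without pyATS: a callable is invoked once up front and its entire return value is captured, while a generator yields one value per iteration and can react to state that changes between iterations. The consuming loop below is a simplified stand-in for the looping framework:

```python
events = []

def my_callable():
    # invoked once; the whole list exists before any iteration starts
    events.append("callable ran")
    return [1, 2, 3]

def my_generator():
    # queried lazily; each value is produced just before it is needed
    for i in [4, 5, 6]:
        events.append(f"generating {i}")
        yield i

# simulate how a looping framework might consume both
values = my_callable()          # entire return value captured immediately
for v in values:
    events.append(f"a = {v}")

for v in my_generator():        # one element pulled per iteration
    events.append(f"b = {v}")

print(events[0])   # 'callable ran' happens before any 'a = ...' entry
print(events[4])   # 'generating 4' interleaves with 'b = 4'
```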
Dynamic Looping
Up to this point, we’ve discussed how to statically mark
different test sections for looping and provide loop parameters
in a testscript, but what if we wanted to loop a section based on
a runtime variable? For example, what if we only want to mark
a test for looping based on a certain condition or calculated
value that can only be determined during runtime? Dynamic
looping offers the ability to mark a specific test section for
looping using the loop.mark() function. Example 4-18 shows
how you can mark a test section for looping in the setup section
of a testcase.
Example 4-18 Dynamic Looping
from pyats import aetest
class Testcase(aetest.Testcase):
@aetest.setup
def setup(self):
# mark the next test for looping
        # provide it with two unique test uids.
        # (self.simple_test is the next test method)
        aetest.loop.mark(self.simple_test, uids=['test_one', 'test_two'])
    # note: the simple_test section is not directly marked for looping;
    # instead, during runtime, its testcase's setup section marks it for
    # looping dynamically.
@aetest.test
def simple_test(self, section):
        # print the current section uid
        # by using the internal parameter "section"
        print("current section: %s" % section.uid)
# OUTPUT GENERATED IF TESTCASE IS EXECUTED:
# current section: test_one
# current section: test_two
#
# SECTIONS/TESTCASES
# ----------------------------------------------
# .
# `-- Testcase
# |-- setup
# |-- test_one
# `-- test_two
The loop.mark() function is identical to the @aetest.loop
decorator, with the exception that the first argument must be
the target test section/class. For example, to mark a BGP testcase
that uses different ASNs for each loop iteration, you would use
the following syntax in a preceding class or section:
loop.mark(BGPTestcase, asn=[65000, 65001, 65002]).
Running Testscripts
Now it's time for what you’ve been waiting for... running a
testscript! Testscripts can be run using one of two execution
methods: Standalone or Easypy execution. Before reviewing
each execution method, let’s dive into the AEtest Standard
Arguments and how arguments are parsed and propagated
from the command-line.
Testing Arguments
Test arguments provide a way to supplement and influence the
execution of your testscript. AEtest has a set of Standard
Arguments along with having the ability to accept arguments
from the command-line when running a testscript. In the
following sections, you’ll see the Standard Arguments provided
by AEtest and how we can use the Python argparse standard
library module to parse command-line arguments and
propagate them to individual test sections.
Standard Arguments
AEtest has a number of standard arguments, referred to as
Standard Arguments, used to influence/change testscript
execution. Standard Arguments can be provided as command-
line arguments or keyword arguments to aetest.main() in
Standalone execution or easypy.run() in Easypy execution.
Table 4-5 shows the available AEtest Standard Arguments.
Table 4-5 AEtest Standard Arguments
You may remember the datafile argument from the previous
Datafile Input section. Datafiles are provided to a testscript as a
standard argument. If you recall, datafiles provide dynamic test
parameters and updates to test sections, which influences the
behavior and execution of testscripts. That is the overall goal of
Standard Arguments, to influence the execution of testscripts.
Argument Propagation
AEtest parses and propagates all command-line arguments
using the Python standard library argparse module. The
argparse module makes it easy to write command-line
interfaces in Python. Using the argparse module, AEtest parses
the argument values stored in sys.argv, which is a list of
command-line arguments passed to a Python script. You may
wonder how arguments are parsed when more than just Standard
Arguments are passed in. Here’s the process of parsing
command-line arguments:
All Standard Arguments are parsed and removed from the
sys.argv list.
All unknown arguments (arguments that aren’t part of the
Standard Arguments) remain in sys.argv and can be parsed with
the argparse module.
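The split described above can be demonstrated with the standard library alone: parse_known_args() consumes the flags it knows about and hands back everything else, which is the same mechanism that lets known arguments be stripped from sys.argv while the rest remain for a custom parser. The flag names below are hypothetical:

```python
import argparse

# parser that only knows about one hypothetical "standard" argument
parser = argparse.ArgumentParser()
parser.add_argument("--loglevel")

# simulated command line: one known flag, one unknown script argument
argv = ["--loglevel", "INFO", "--vlan", "100"]

# parse_known_args() returns (known namespace, leftover argument list)
args, remaining = parser.parse_known_args(argv)

print(args.loglevel)   # 'INFO'
print(remaining)       # ['--vlan', '100'] - left for a custom parser
```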
Argument propagation allows users to pass in additional
arguments to the testscript via the command line, but a custom
argument parser must be created to use those arguments in test
sections. The custom argument parser can simply use
argparse.ArgumentParser in a Python script to parse known
arguments passed to the script. We will look at examples in
each execution method section.
Execution Environments
AEtest can execute testscripts using one of two methods:
Standalone execution or Easypy execution. Standalone
execution is meant for testing scripts and rapid development,
while Easypy execution is meant for production scripts where
proper reporting and log archiving are required. In the following
sections, we will go over the different execution methods and
how they are run.
Standalone Execution
Standalone execution is meant to be used during script
development and allows the user to have full control of the
execution environment, including logging and reporting. All
logging is redirected to standard output (stdout) and standard
error (stderr). Reporting is handled by the Standalone Reporter,
which tracks results and prints a summary at the end of testing
to standard output (stdout). Many of the examples in this
chapter have shown results from standalone execution. No
TaskLog, result report, or archives are generated during
standalone execution.
Testscripts are run standalone when one of the following two
methods is used to execute the script:
Directly calling aetest.main() within a user script
Indirectly calling aetest.main() by invoking Python’s
__main__ mechanism
Example 4-19 shows a testscript executed as standalone and
Example 4-20 shows the accompanying results printed to
standard output (stdout).
Example 4-19 Standalone Execution
import logging
from pyats import aetest
class CommonSetup(aetest.CommonSetup):
# Subsection 1
@aetest.subsection
def subsection_one(self):
pass
# Subsection 2
@aetest.subsection
def subsection_two(self):
pass
class Testcase(aetest.Testcase):
# Test 1
@aetest.test
def test_one(self):
pass
# Test 2
@aetest.test
def test_two(self):
pass
# Test 3
@aetest.test
def test_three(self):
pass
# add the following as the absolute last block in the testscript
if __name__ == '__main__':
# control the environment
# eg, change some log levels for debugging
    logging.getLogger(__name__).setLevel(logging.DEBUG)
    logging.getLogger('pyats.aetest').setLevel(logging.DEBUG)
    # the aetest.main() api starts the testscript execution
    # defaults to aetest.main(testable = '__main__')
    aetest.main()
Example 4-20 Standalone Execution Results
+------------------------------------------------------------------------------+
|                             Starting common setup                            |
+------------------------------------------------------------------------------+
+------------------------------------------------------------------------------+
|                      Starting subsection subsection_one                      |
+------------------------------------------------------------------------------+
The result of subsection subsection_one is => PASSED
+------------------------------------------------------------------------------+
|                      Starting subsection subsection_two                      |
+------------------------------------------------------------------------------+
The result of subsection subsection_two is => PASSED
The result of common setup is => PASSED
+------------------------------------------------------------------------------+
|                          Starting testcase Testcase                          |
+------------------------------------------------------------------------------+
+------------------------------------------------------------------------------+
|                          Starting section test_one                           |
+------------------------------------------------------------------------------+
The result of section test_one is => PASSED
+------------------------------------------------------------------------------+
|                          Starting section test_two                           |
+------------------------------------------------------------------------------+
The result of section test_two is => PASSED
+------------------------------------------------------------------------------+
|                         Starting section test_three                          |
+------------------------------------------------------------------------------+
The result of section test_three is => PASSED
The result of testcase Testcase is => PASSED
+------------------------------------------------------------------------------+
|                               Detailed Results                               |
+------------------------------------------------------------------------------+
SECTIONS/TESTCASES
--------------------------------------------------------------------------------
.
|-- common_setup                                                          PASSED
|   |-- subsection_one                                                    PASSED
|   `-- subsection_two                                                    PASSED
`-- Testcase                                                              PASSED
    |-- test_one                                                          PASSED
    |-- test_two                                                          PASSED
    `-- test_three                                                        PASSED
+------------------------------------------------------------------------------+
|                                    Summary                                   |
+------------------------------------------------------------------------------+
 Number of ABORTED                                                            0
 Number of BLOCKED                                                            0
 Number of ERRORED                                                            0
 Number of FAILED                                                             0
 Number of PASSED                                                             2
 Number of PASSX                                                              0
 Number of SKIPPED                                                            0
--------------------------------------------------------------------------------
The aetest.main() function provides the entry point and is
what starts the script execution. Standard Arguments can be
passed to aetest.main() as keyword arguments. Any other
unknown keyword arguments are propagated as script
arguments. If any unknown keyword arguments are passed as
command-line arguments, you’ll need to create a custom
argument parser. This might sound like a lot, but you may use
the argparse module to create an ArgumentParser object, add
arguments, parse the arguments, and add them to
aetest.main() as keyword arguments. Example 4-21 shows how
to pass two command-line arguments, testbed and vlan, as
keyword arguments to aetest.main(), which in turn makes them
testscript parameters.
Example 4-21 Standalone Execution – Input Arguments
from pyats import aetest
class Testcase(aetest.Testcase):
    # defining a test that prints out the current parameters
    # in order to demonstrate argument passing to testscripts
    @aetest.test
    def test(self):
        print('Parameters = ', self.parameters)
# do the parsing within the __main__ block,
# and pass the parsed arguments to aetest.main()
if __name__ == '__main__':
# local imports under __main__ section
    # this is done here because we don't want to import these modules
    # when the script isn't run under standalone execution
import sys
import argparse
from pyats import topology
    # creating our own parser to parse script arguments
    parser = argparse.ArgumentParser(description = 'standalone parser')
    parser.add_argument('--testbed', dest = 'testbed',
                        type = topology.loader.load)
    parser.add_argument('--vlan', dest = 'vlan', type = int)
    # do the parsing
    # always use parse_known_args, as aetest needs to parse the
    # remainder arguments that this parser does not understand
    args, sys.argv[1:] = parser.parse_known_args(sys.argv[1:])
    # and pass all arguments to aetest.main() as keyword arguments
    aetest.main(testbed = args.testbed, vlan = args.vlan)
# Let's run this script with the following command:
# python example_script.py --testbed /path/to/my/testbed.yaml --vlan 100
# output of the script:
#
# +------------------------------------------------------------------------------+
# |                          Starting testcase Testcase                          |
# +------------------------------------------------------------------------------+
# +------------------------------------------------------------------------------+
# |                            Starting section test                             |
# +------------------------------------------------------------------------------+
# Parameters = {'testbed': <Testbed object at 0x...>, 'vlan': 100}
# The result of section test is => PASSED
# The result of testcase Testcase is => PASSED
Standalone execution provides the user ultimate control and is
great when going through the trial-and-error process of writing
code. However, what do we do if we want to run our testscripts
in production and require proper logging and reporting?
Easypy execution provides the answer.
Easypy Execution
Easypy execution is used when testscripts are executed with the
Easypy runtime environment. With this execution method, the
Easypy runtime environment controls the environment and
provides the following features:
Run multiple testscripts together in a job file
Logging configuration is done by Easypy
TaskLog, result reporting and archives are generated
Reporter is used for reporting and result tracking, generating
a YAML result file, results details file, and a summary XML file
Example 4-22 shows an Easypy job file running two testscripts.
Each testscript run within a job file is called a task.
Example 4-22 Easypy Execution
from pyats.easypy import run
# job file needs to have a main() definition,
# which is the primary entry point for starting testscript execution
def main():
# run testscript 1
run(testscript='/path/to/your/script1.py')
# run testscript 2
run(testscript='/path/to/your/script2.py')
To run the Easypy job, run pyats run job jobfile-name.py
--testbed-file /path/to/testbed.yaml from the terminal. The
--testbed-file option loads the testbed as a testbed object, which
is propagated to the testscript as a script argument named
testbed.
If no --testbed-file is passed to pyats run job, the testbed
argument is set to None.
In addition to the --testbed-file option, all AEtest Standard
Arguments are accepted as keyword arguments and propagated
to the testscript as script arguments. Any unknown keyword
arguments provided to easypy.run() are also propagated to the
testscript as script arguments. Example 4-23 shows how
unknown keyword arguments are propagated to the testscript
as script arguments.
Example 4-23 Easypy Execution - Script Arguments
from pyats.easypy import run
def main():
run(
        testscript="standalone_exec_input_args.py",
pyats_is_awesome=True,
aetest_is_legendary=True
)
# Run the easypy job:
# pyats run job easypy_script_args.py --testbed-file /path/to/testbed.yaml
#
# +------------------------------------------------------------------------------+
# |                          Starting testcase Testcase                          |
# +------------------------------------------------------------------------------+
# +------------------------------------------------------------------------------+
# |                            Starting section test                             |
# +------------------------------------------------------------------------------+
# Parameters = {'testbed': <Testbed object at 0x...>,
#               'pyats_is_awesome': True,
#               'aetest_is_legendary': True}
# The result of section test is => PASSED
# The result of testcase Testcase is => PASSED
Along with having the ability to run multiple testscripts in a
single job, another major benefit of Easypy is the
standardization of logging, reporting, and archiving. After a job
file is run, a zipped archive folder is created in the user home
directory under .pyats/archive/YY-MM (~/.pyats/archive/YY-
MM). You can prevent archives from being created by specifying
the --no-archive option. Table 4-6 shows a list of files that are
generated by Easypy job files.
Table 4-6 Easypy Job Files
As you can see from Table 4-6, there are many different files
generated and archived from an Easypy job. The archived files
can be used for additional regression and sanity testing.
Testable
A testable in AEtest is any object that can be loaded into a
TestScript class instance by the aetest.loader module and
executed without any errors. The following are acceptable as
testables:
Any path to a Python file ending with .py
Any module name that is part of the current PYTHONPATH
Any non-built-in module objects (instances of
types.ModuleType)
Testables are not the same as testscripts. Testscripts run tests
and generate results. Testables can be meaningless modules to
AEtest, such as the urllib module. It is a valid testable but
produces zero test results.
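A quick standard-library check illustrates the last two bullets: any imported module is an instance of types.ModuleType, and a module name on the current PYTHONPATH resolves to that same object, so a module like urllib qualifies structurally as a testable even though it contains nothing for AEtest to run:

```python
import importlib
import types

# a testable can be a module object...
import urllib
print(isinstance(urllib, types.ModuleType))   # True

# ...or a module name resolvable on the current PYTHONPATH,
# which loads the very same module object
mod = importlib.import_module("urllib")
print(mod is urllib)                          # True
```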
Testscript Flow Control
AEtest provides many mechanisms to control the execution
flow of testscripts. Different mechanisms include skipping
testcases, jumping ahead in the testscript, grouping testcases, or
only executing testcases by UID.
Skip Conditions
AEtest comes with built-in preprocessors that can be used to
skip test sections, sometimes based on a condition. The
following decorators and functions can be used to skip a test
section:
@aetest.skip(reason = ’message’): Unconditionally skip the
decorated section. reason should describe why that section is
being skipped.
aetest.skip.affix(testcase, reason): Same as the skip
decorator but can be used on the fly to skip other testcases
depending on one testcase result.
@aetest.skipIf(condition, reason = ’message’): Skip the
decorated test section if condition is True.
aetest.skipIf.affix(testcase, condition, reason): Can be used
on the fly to assign the skipIf decorator to testcases; condition
can be a callable or a boolean.
@aetest.skipUnless(condition, reason = ’message’): Skip the
decorated test section unless condition is True.
aetest.skipUnless.affix(testcase, condition, reason): Can be
used on the fly to assign the skipUnless decorator to testcases.
Example 4-24 shows some of these decorators and functions
used to skip testcases.
Example 4-24 AEtest Skip Conditions
from pyats import aetest
# Custom library used for testing
class mylibrary:
__version__ = 0.1
# skip testcase intentionally
@aetest.skip('because we had to')
class Testcase(aetest.Testcase):
pass
class TestcaseTwo(aetest.Testcase):
    # skip test section if library version is less than some number
    @aetest.skipIf(mylibrary.__version__ < 1,
                   'not supported in this library version')
    @aetest.test
    def test_one(self):
        pass
    # skip unless library version > some number
    @aetest.skipUnless(mylibrary.__version__ > 3,
                       'not supported in this library version')
    @aetest.test
    def test_two(self):
        pass
    @aetest.test
    def test_three(self):
        aetest.skip.affix(section = TestcaseTwo.test_four,
                          reason = "message")
        aetest.skipIf.affix(section = TestcaseTwo.test_five,
                            condition = True,
                            reason = "message")
        aetest.skipUnless.affix(section = TestcaseTwo.test_six,
                                condition = False,
                                reason = "message")
        aetest.skip.affix(section = TestcaseThree,
                          reason = "message")
    @aetest.test
    def test_four(self):
        # will be skipped because of test_three
        pass
    @aetest.test
    def test_five(self):
        # will be skipped because of test_three
        pass
    @aetest.test
    def test_six(self):
        # will be skipped because of test_three
        pass
class TestcaseThree(aetest.Testcase):
    # will be skipped because of TestcaseTwo.test_three
    pass
Running Specific Testcases
You might want to run only specific testcases. To do that, you
can specify testcase UID(s) as a Standard Argument when
running the script or by setting a runtime variable
(runtime.uids) dynamically during execution. The uids
argument accepts a callable (function) that returns a truthy
value. The list of test section UIDs present in the testscript are
passed as arguments to the callable. If the callable returns True,
the respective test section is run. Logic testing can also be used
to evaluate test section UIDs. The currently running section UIDs
are also accessible via the runtime.uids variable. Because runtime
variables are only accessible during runtime, these UIDs must be
set dynamically in the testscript rather than passed in as a
Standard Argument. Example 4-25 shows how a callable can be
used to determine whether a UID should be run.
Example 4-25 Running Specific Testcases
from pyats.easypy import run
# function determining whether we should run testcases
# currently executing uids is always a list of:
#     [ <container uid>, <section uid>]
# eg, ['common_setup', 'subsection_one']
# thus varargs (using *) is required for the function definition
def run_only_testcase_one(*uids):
# check that we are running TestcaseOne
return "TestcaseOne" in uids
# run only TestcaseOne and its contents (using a callable): run a
# section only when the list of executing uids contains TestcaseOne
def main():
run("example_script.py", uids=run_only_testc
Testcase Grouping
Testcase grouping allows you to tag testcases that are similar in
nature and may be run together by adding them to a group. By
default, testcases do not belong to any groups. You may add
testcases to groups by adding them in the testscript itself by
assigning a list of groups to a group variable within the testcase.
Testcases can also be grouped by specifying groups in a datafile
input that is provided as a Standard Argument. Example 4-26
shows an example of assigning a group named “traffic” to
Testcase One and a group named “sanity” to Testcase Two.
Example 4-26 Testcase Grouping
from pyats import aetest
class TestcaseOne(aetest.Testcase):
"""Testcase One"""
groups = ["traffic"]
<TestcaseOne tests...>
class TestcaseTwo(aetest.Testcase):
"""Testcase Two"""
groups = ["sanity"]
<TestcaseTwo tests...>
Once the testcases are grouped, you can specify which testcase
groups run using Standard Arguments (--group) or using the
runtime.groups variable dynamically in your testscript. Just
like how you specify which testcases to run using their UID, you
pass a callable to the groups argument to determine whether
the group(s) should run. The callable accepts each testcase’s
group values as arguments and returns True if the testcase
group(s) should run. Logic testing can also be used to evaluate
whether a group value should run. Groups can also be
evaluated at runtime using the runtime.groups variable within
the testscript. The runtime.groups variable is dynamically set
by performing logic testing. Example 4-27 shows how to filter
certain testcase groups using a callable in Standard Arguments
and also dynamically using the runtime variable.
Example 4-27 Testcase Group Filtering
from pyats.easypy import run
# import the logic objects
from pyats.datastructures.logic import And, Not
# create a function that tests for testcase group membership
# this api tests that a testcase belongs to sanity and not traffic
# note that varargs (using *) is required, as the number of groups per
# testcase is unknown.
def non_traffic_sanities(*groups):
    # Runs testcases in "sanity" group and not in "traffic" group
    return "sanity" in groups and "traffic" not in groups

# Runs the testscript as two tasks using different group-filtering methods
def main(runtime):
    ### Using function testing to evaluate testcase groups
    # Only runs Testcase Two
    run(testscript="example_script.py", runtime=runtime,
        groups=non_traffic_sanities)
    ### Using logic testing to evaluate testcase groups
    # Only runs Testcase Two
    run("example_script.py", groups=And("sanity", Not("traffic")))
Must Pass Testcases
If there are testcases that must pass during testing, AEtest
allows you to set a class attribute called must_pass to True. If a
must pass testcase fails during testing, the testscript will
immediately jump to the Common Cleanup section, using the
goto statement, and block any remaining testcases. The goto
statement was touched on earlier in the chapter, but to recap, it
allows you to jump to another section within a testscript. The
goto target must be further along in the testscript; you can’t go
back to a previously executed section. The available targets include
the testcase’s cleanup section (cleanup), the next testcase
(next_tc), the Common Cleanup section (common_cleanup), or
exiting the testscript completely (exit). Example 4-28 shows
how to use the goto statement directly and Example 4-29 shows
how to set a testcase as a must pass and what happens if that
testcase fails. You’ll notice the following testcase is blocked and
the testscript jumps to the Common Cleanup section using the
goto statement under the hood.
Example 4-28 Goto Statement
from pyats import aetest
class CommonSetup(aetest.CommonSetup):
@aetest.subsection
def subsection(self):
        # goto with a message
        self.errored('setup error, abandoning script', goto = ['common_cleanup'])
# ----------------------------------------------
class TestcaseOne(aetest.Testcase):
@aetest.setup
def setup(self):
        # setup failed, go to cleanup of testcase
        self.failed('test failed', goto = ['cleanup'])
# ----------------------------------------------
class TestcaseTwo(aetest.Testcase):
# test failed, move onto next testcase
@aetest.test
def test(self):
self.failed(goto = ['next_tc'])
# ----------------------------------------------
class TestcaseThree(aetest.Testcase):
@aetest.setup
def setup(self):
        # setup failed, move onto cleanup of this testcase, then
        # jump to common_cleanup directly.
        self.failed(goto=['cleanup', 'common_cleanup'])
Example 4-29 Must Pass Testcase
from pyats import aetest
class TestcaseOne(aetest.Testcase):
must_pass = True
@aetest.test
def test(self):
self.failed('boom!')
class TestcaseTwo(aetest.Testcase):
pass
class CommonCleanup(aetest.CommonCleanup):
@aetest.subsection
def subsection(self):
pass
# output result
#
# SECTIONS/TESTCASES
# ----------------------------------------------
# .
# |-- TestcaseOne
# | `-- test
# |-- TestcaseTwo
# `-- common_cleanup
# `-- subsection
Testcase Randomization
By default, AEtest runs the Common Setup section first, then each
testcase in the order they are defined, and wraps up with the
Common Cleanup section. Testcase execution can be
randomized by setting the random standard argument to True
(random=True). Common Setup and Common Cleanup are not
randomized and will always be executed first and last
respectively. Example 4-30 shows a basic example of
randomizing testcases.
Example 4-30 Testcase Randomization
from pyats import aetest
# define a couple testcases
class TestcaseOne(aetest.Testcase):
pass
class TestcaseTwo(aetest.Testcase):
pass
class TestcaseThree(aetest.Testcase):
pass
if __name__ == "__main__":
aetest.main(random = True)
# output result
#
# SECTIONS/TESTCASES
# ----------------------------------------------
# .
# |-- TestcaseTwo
# |-- TestcaseOne
# `-- TestcaseThree
Maximum Failures
Let’s say you are testing many network features and have a
long-running testscript. By default, AEtest will run each testcase
sequentially and record the respective result. However, if
testcases begin to fail, wouldn’t you want the ability to stop
testing and figure out what’s going on without waiting for the
testscript to finish executing? AEtest provides a method to set a
maximum threshold for testcase failures during a testscript
run. The max_failures standard argument specifies the number
of testcase failures allowed before aborting the remaining
testcases and jumping to the Common Cleanup section of the
testscript. The goto statement is used once again to jump to the
Common Cleanup section. Example 4-31 shows how when one
testcase fails, the testscript blocks execution of the other
testcases and jumps to the Common Cleanup section before
exiting.
Example 4-31 Maximum Failures
from pyats import aetest

class TestcaseOne(aetest.Testcase):
    @aetest.test
    def test(self):
        self.failed()

class TestcaseTwo(aetest.Testcase):
    @aetest.test
    def test(self):
        self.failed()

class TestcaseThree(aetest.Testcase):
    pass

class CommonCleanup(aetest.CommonCleanup):
    pass

# set max failure to 1 and run the testscript
if __name__ == "__main__":
    aetest.main(max_failures = 1)
# output result
#
# Max failure reached: aborting script execution
#
# SECTIONS/TESTCASES
# ----------------------------------------------
# .
# |-- TestcaseOne
# |-- TestcaseTwo
# |-- TestcaseThree
# `-- common_cleanup
Custom Testcase Discovery
Customizing testcase discovery is an advanced topic but should
be covered at a high level. Testcases are discovered using the
ScriptDiscovery class in the discover module of the AEtest
package. You can customize the discovery process at the
following levels: script discovery, testcase discovery, and
common discovery. The ScriptDiscovery class finds the testcases
within a testscript, the TestDiscovery class finds the test sections
(setup, test, cleanup) within a testcase, and the
CommonDiscovery class finds subsections within the common
sections (CommonSetup and CommonCleanup). To override the
discovery process at a given level, you can create a new class
that inherits from the respective default discovery class. The
new discovery class must implement specific methods to enable
the custom discovery logic. The runtime.discoverer properties
can then be configured in the testscript to use the new discovery
classes instead of the defaults. Along with custom discovery, you
can also customize the ordering of sections. If you would like to
customize the discovery or ordering of sections, it’s
recommended that you reference the pyATS documentation for
more details.
Reporting
AEtest provides reporting of testscript results including which
tests ran during testing and their associated results. The format
and level of details in the report depend on the execution mode
used for testing (Standalone or Easypy execution). In the
following sections, you’ll see the different report options
available and dive into the reporting details of each execution
mode.
Standalone Reporter
The Standalone Reporter is used when testscripts are run
directly from the command line using standalone execution
(via aetest.main()). Testcase, section, and step results are
printed to standard output (stdout) in a tree-like format. All
examples in this chapter, and most examples in this book, use
the Standalone Reporter to showcase testscript results. Example
4-32 shows testscript results presented by the Standalone
Reporter.
Example 4-32 Standalone Reporter
+-----------------------------------------------
| Detailed Results
+-----------------------------------------------
SECTIONS/TESTCASES
------------------------------------------------
.
|-- common_setup
| |-- sample_subsection_1
| `-- sample_subsection_2
|-- tc_one
| |-- prepare_testcase
| |-- simple_test_1
| |-- simple_test_2
| `-- clean_testcase
|-- TestcaseWithSteps
| |-- setup
| | |-- Step 1: this is a description of the
| | `-- Step 2: another step
| |-- step_continue_on_failure_and_assertions
| | |-- Step 1: assertion errors -> Failed
| | `-- Step 2: allowed to continue executin
| |-- steps_errors_exits_immediately
| | `-- Step 1: exceptions causes all steps
| `-- steps_with_child_steps
| |-- Step 1: test step one
| |-- Step 1.1: substep one
| |-- Step 1.1.1: subsubstep one
| |-- Step 1.1.1.1: subsubsubstep one
| |-- Step 1.1.1.1.1: running out of inden
| |-- Step 1.1.1.1.1.1: definitely gone to
| |-- Step 1.2: substep two
| |-- Step 2: test step two
| |-- Step 2.1: function step one
| |-- Step 2.2: function step two
| `-- Step 2.3: function step three
`-- common_cleanup
`-- clean_everything
AEtest Reporter
The AEtest reporter, known as the Reporter, is used when
testscripts are executed via the Easypy execution mode. The
Reporter creates a package of test result artifacts. It contains
information such as the section hierarchy, section results, and
even the amount of time each section took during testing. The
main files in the package are results.json and results.yaml,
which contain hierarchical information about the job. The top
level is TestSuite, which contains high-level information about
the entire job. Under TestSuite is the Task level. The Task level
represents each testscript that is executed in the job. If you
remember, in an Easypy job, multiple testscripts can be
executed. Each testscript executed is called a task. As you can
imagine, below each Task are the different container classes—
Common Setup, Testcase, and Common Cleanup. Following the
AEtest section hierarchy, each container class has child sections
including SetupSection, TestSection, CleanupSection, and
Subsection. Optionally, these child sections can contain steps
represented as Step. Each level of the report has information
relevant to that section. Example 4-33 shows the different fields
for each level represented in the report.
Example 4-33 AEtest Report Structure
+-----------+-----------------+----------------------------------------+
| Section   | Field           | Description                            |
+===========+=================+========================================+
| TestSuite | type            | Identifier that this object is a       |
|           |                 | TestSuite                              |
|           | id              | Unique ID for this TestSuite           |
|           | name            | Name from jobfile                      |
|           | starttime       | Timestamp when execution started       |
|           | stoptime        | Timestamp when execution stopped       |
|           | runtime         | Duration of execution                  |
|           | cli             | Command that launched this run         |
|           | jobfile         | Location of the jobfile                |
|           | jobfile_hash    | SHA256 hash of the jobfile             |
|           | pyatspath       | Python environment path                |
|           | pyatsversion    | Version of pyATS in use                |
|           | host            | Name of host machine                   |
|           | submitter       | User that started the run              |
|           | archivefile     | Path to generated archive file         |
|           | summary         | Combined summary of all results        |
|           | details         | Details about this TestSuite           |
|           | extra           | Map of extra information               |
|           | tasks           | List of child Tasks                    |
+-----------+-----------------+----------------------------------------+
| Task      | type            | Identifier that this object is a Task  |
|           | id              | Unique ID for this Task                |
|           | name            | Name of Task                           |
|           | starttime       | Timestamp when execution started       |
|           | stoptime        | Timestamp when execution stopped       |
|           | runtime         | Duration of execution                  |
|           | description     | Description of the testscript          |
|           | logfile         | Path to logfile for this Task          |
|           | testscript      | Path to testscript                     |
|           | testscript_hash | SHA256 hash of the testscript          |
|           | datafile        | Path to the datafile                   |
|           | datafile_hash   | SHA256 hash of the datafile            |
|           | parameters      | Any parameters for this Task           |
|           | summary         | Summary of results                     |
|           | details         | Details about this Task                |
|           | extra           | Map of extra information               |
|           | sections        | List of child sections                 |
+-----------+-----------------+----------------------------------------+
| Section   | type            | Specific type of section being         |
|           |                 | represented                            |
|           | id              | Unique ID for this Section             |
|           | name            | Name of this Section                   |
|           | starttime       | Timestamp when execution started       |
|           | stoptime        | Timestamp when execution stopped       |
|           | runtime         | Duration of execution                  |
|           | description     | Description of this Section            |
|           | xref            | XReference to the source of this       |
|           |                 | Section                                |
|           | source_hash     | SHA256 hash of the source code of      |
|           |                 | this Section                           |
|           | data_hash       | SHA256 hash of the section data        |
|           | logs            | Path to logfile contents for           |
|           |                 | this Section, with the starting        |
|           |                 | byte and size                          |
|           | parameters      | Any parameters for this Section        |
|           | processors      | Lists of processors run on this        |
|           |                 | section, both pre and post             |
|           | result          | The test result for this Section       |
|           | details         | Details about this Section             |
|           | extra           | Map of extra information               |
|           | sections        | Any child sections of this section     |
|           |                 | (Testcases have sections, which may    |
|           |                 | have Steps)                            |
+-----------+-----------------+----------------------------------------+
All levels below the TestSuite level are considered Sections, as
the information gathered from each level is about the same.
The type field identifies the section type. Any unique
differences between sections are saved under the
extra key. Example 4-34 shows an abbreviated results.yaml file
with the different levels of the report—TestSuite, Task,
and the Common Setup section of the first task. The section
types are highlighted to help identify the different levels.
Example 4-34 results.yaml
version: '2'
report:
type: TestSuite
id: example_job.2019Sep19_19:56:06.569499
name: example_job
starttime: 2019-09-19 19:56:07.603283
stoptime: 2019-09-19 19:56:19.951458
runtime: 12.35
cli: pyats run job job/example_job.py --testbe
--no-mail
jobfile: /Users/user/examples/comprehensive/jo
jobfile_hash: 2a452a8683f4f5e5c146d62c78a9a525
pyatspath: /Users/user/env
pyatsversion: '19.11'
host: HOSTNAME
submitter: user
archivefile: /Users/user/env/users/user/archiv
09/example_job.2019Sep19_19:56:06.569499.zip
summary:
passed: 13
passx: 0
failed: 1
errored: 12
aborted: 0
blocked: 4
skipped: 0
total: 30
success_rate: 43.33
extra:
testbed: example_testbed
tasks:
- type: Task
id: Task-1
name: base_example
starttime: 2019-09-19 19:56:08.432390
stoptime: 2019-09-19 19:56:08.617640
runtime: 0.19
description: |+
base_example.py
This is a comprehensive example base scr
AEtest
infrastructure features, what they are f
impacts
their testing, etc.
logfile: TaskLog.Task-1
testscript: /Users/user/examples/comprehen
testscript_hash:
2938f2d2efbf9be144a9fe68667dd1c12753b84017a56e7d
parameters:
labels: {}
links: []
parameter_A: jobfile value A
routers: []
testbed: <pyats.topology.testbed.Testbed
tgns: []
summary:
passed: 3
passx: 0
failed: 0
errored: 3
aborted: 0
blocked: 0
skipped: 0
total: 6
success_rate: 50.0
sections:
- type: CommonSetup
id: common_setup
name: common_setup
starttime: 2019-09-19 19:56:08.434411
stoptime: 2019-09-19 19:56:08.458939
runtime: 0.02
description: |+
Common Setup Section
This is the docstring for your c
document
the number of common setup subse
block of
comments, it gives a generic fee
built and run.
xref:
file: /Users/user/examples/comprehen
line: 191
source_hash:
c366a269e45838deb9bed54d28fef648b921c4f19a1753fc
logs:
begin: 0
file: TaskLog.Task-1
size: 4317
parameters:
labels: {}
links: []
parameter_A: jobfile value A
parameter_B: value B
routers: []
testbed: <pyats.topology.testbed.Tes
tgns: []
result:
value: passed
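The summary counters in the report are straightforward to consume programmatically. As a quick, generic sketch (not pyATS code; in practice you would load results.yaml with a YAML or JSON library), here is the summary block from Example 4-34 as a Python dictionary and the success-rate arithmetic behind it:

```python
# The TestSuite summary block from Example 4-34, loaded as a dict
summary = {
    "passed": 13, "passx": 0, "failed": 1, "errored": 12,
    "aborted": 0, "blocked": 4, "skipped": 0, "total": 30,
}

# success_rate counts passing results (passed + passx) against the total
success_rate = round(
    (summary["passed"] + summary["passx"]) / summary["total"] * 100, 2
)
print(success_rate)  # 43.33 -- matches success_rate in Example 4-34
```

The same arithmetic holds for the Task-level summary in the example: 3 passed out of 6 total yields the 50.0 shown there.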
Along with the results.yaml and results.json files, the Reporter
also generates XML files named ResultsDetails.xml and
ResultsSummary.xml for the aggregated results. The AEtest
Reporter also provides the ability to subscribe to live result
updates. The Reporter uses a Unix socket client-server model to
collect information about each section during the job run. The
Reporter client can subscribe to the Reporter server for live
updates on each section. The subscribe functionality only
works as an async function, so you should be familiar with the
Python asyncio library
(https://docs.python.org/3/library/asyncio.html) before
testing this feature. The asyncio library is part of the Python
standard library; it is used to write concurrent code using the
async/await syntax and is well-suited for I/O-bound tasks. The
client subscribes to the server and runs a callback each time
event data is received. Table 4-7 shows the different values that
can be extracted from event data.
Table 4-7 Reporter Event Data
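The subscription model boils down to an async callback loop. The following generic asyncio sketch (this is not the pyATS Reporter API; all names here are illustrative) shows the pattern of awaiting a stream of events and invoking a callback for each one:

```python
import asyncio

async def event_stream():
    # Stand-in for a reporter server emitting section events over a socket.
    for event in ("section_start", "section_result", "section_stop"):
        await asyncio.sleep(0)  # yield control, as real socket I/O would
        yield event

async def subscribe(callback):
    # The client awaits each event and runs the callback with its data.
    async for event in event_stream():
        callback(event)

received = []
asyncio.run(subscribe(received.append))
print(received)  # ['section_start', 'section_result', 'section_stop']
```

The real Reporter client works over a Unix socket rather than an in-process generator, but the consumption pattern is the same: an awaited loop that hands each event's data to your callback.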
The last interesting piece of information the Reporter collects
and adds to the report package is git information. Git
information, including the repo, file, branch, and commit hash,
is added to the report. This can be helpful for regression
testing. Let’s say the testsuite, or part of it, broke and you want
to quickly figure out when it last worked. With the git
information captured by the Reporter, including the commit
hash, you can quickly identify the last commit at which the test
worked.
The reporting features in the AEtest test infrastructure range
from a quick review of test results with the Standalone
Reporter to the complete reporting package produced by the
AEtest Reporter. The data points, metrics, and other rich data
that can be extracted from the AEtest Reporter package provide
endless options for further data analysis and visualization of
the Easypy job results. It’s really up to you how you want to
utilize the captured results!
Debugging
As with all code, you will find yourself debugging your AEtest
testscripts. Python has a debugger module as part of the
standard library called Python Debugger (pdb). The pdb
debugger (https://docs.python.org/3/library/pdb.html) is used to
set breakpoints, step through source code line by line, and
provide other debugging functions in your Python code. The
one caveat to using pdb involves multiprocessing. When
multiprocessing is used, child processes are forked, which
breaks the functionality of pdb. Since AEtest uses
multiprocessing, most notably with Easypy execution, AEtest
builds pdb debugging functionality into the framework.
When running AEtest testscripts, you can pass pdb=True as a
Standard Argument, and whenever an error, failure, or
exception occurs, the testing engine pauses and starts an
interactive post_mortem debugging session. The post_mortem
functionality is built natively into pdb. Pdb can also be enabled
with a command-line flag (--pdb) when running a job via the
pyats run job command.
Another pdb debugging feature in AEtest is “pause on phrase”,
which is the ability to pause test execution based on any log
messages generated by the script. The log messages include
Python logs, CLI output from devices, and any other logs
captured by the root logger (logging.root). The following actions
are supported when a script is paused on phrase:
Email: Creates a pause file and emails the user. The script may
continue to run once the pause file is deleted or when the
timeout limit has been reached.
Pdb: Pauses and opens a pdb debugger
Code: Pauses and opens a Python interactive shell
To enable this feature, you must pass the pause_on Standard
Argument to the script run with a value that provides a path to
a YAML pause file that follows a specific schema to define the
actions to take when the script is paused. Example 4-35 shows a
YAML pause file.
Example 4-35 YAML Pause File
timeout: 600                          # pause a maximum of 10 minutes
patterns:
    - pattern: '.*pass.*'             # pause on any message with "pass"
                                      # .*pass.* is a regular expression
    - pattern: '.*state: down.*'      # pause when a state is down
      section: '^common_setup\..*$'   # enable for common_setup only
    - pattern: '.*should pause.*'     # pause when "should pause" appears
      section: '^TestcaseTwo\.setup$' # pause only in TestcaseTwo setup
The default action is to email the user. When a log message
matches a pattern defined in the YAML pause file, the user is
notified via email with the log phrase captured and instructions
on how to remove the pause and continue testing. To change
the action to one of the other two supported actions (pdb or
code), you simply need to specify an “action:” key with one of
those values in the YAML pause file.
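For example, to drop into the pdb debugger instead of sending an email when a pattern matches, a pause file entry might look like the following (a sketch following the schema described above; consult the pyATS documentation for the authoritative schema):

```yaml
patterns:
    - pattern: '.*state: down.*'
      action: pdb    # open a pdb debugger instead of emailing the user
```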
Summary
This chapter covered the different components that make up
the AEtest test infrastructure. The AEtest test infrastructure is
the core to pyATS and provides the foundation for all testing.
We reviewed the structure of a testscript, which went through
the different sections including common setup, testcases, and
common cleanup. The AEtest object model reviewed the Python
classes that are the base classes of the different sections from
the testscript structure. The base classes include the TestScript
classes, container classes, function classes, TestItem classes, and
TestContainer classes. After understanding the testscript
structure and the base classes of the AEtest object model, we
dove into the behavior of test results. Section test results can be
determined automatically using assertions or manually using
the different result APIs available (passed, failed, errored,
skipped, blocked, aborted, passx). AEtest allows functions or
methods to run before or after test sections using pre-processors,
post-processors, or exception-processors. Pre- and post-
processors can be helpful for checking the environment before a
test executes, validating section results, and even taking and
comparing snapshots before and after a test section. Exception
processors can take post-execution snapshots of the test
environment if an exception is raised in a test section or
execute debug commands and collect dump files when an
exception occurs.
Once we reviewed the intricacies of the AEtest testscript
structure, object model, and how the results are determined, we
dove into the extensibility of the test infrastructure including
datafiles and test parameters. Datafiles are YAML files that can
provide dynamic test parameters to a testscript, making them
more robust and reusable. Datafiles can be a gamechanger and
should be used when possible to avoid static test parameters.
After seeing how we can make testscripts more extensible and
dynamic, we reviewed how to run testscripts using standalone
execution or through Easypy. Standalone execution is
recommended for development purposes with all logging
outputs being sent to standard output (stdout). The Easypy
execution environment is recommended for “official” test
execution, running testscripts as tasks in a jobfile. The jobfile
produces logs and archives, which is best suited for sanity and
regression testing where reporting and archiving are required.
AEtest provides many ways to control the flow of testscript
execution, including running specific testcases (using UID or a
group of testcases), declaring testcases as must pass,
randomizing testcase execution, declaring a maximum number
of failures per testscript execution, and even customizing
testcase discovery. We then wrapped up the chapter by
reviewing the reporting mechanisms and how to debug
testscripts. AEtest reporting mechanisms include the standalone
reporter, which is used with standalone execution, and the
AEtest reporter, which is used with the Easypy execution
environment and produces a results.json file that includes all
the details of the test execution.
It’s recommended to continue referencing the information from
this chapter as you go through the rest of the book, as many
topics and features are built on the topics discussed in this
chapter.
Chapter 5. pyATS Parsers
In software engineering and computer science, parsing is the
mechanism of translating unstructured data into a structured,
script-readable form. Parsers are the root of automation;
without them, automation could not understand the device.
There are multiple ways to parse device output, with different
packages, each with its own style. There are also multiple ways
to communicate with a device (CLI, XML, REST, YANG, etc.),
each providing a different structure for the same information!
Imagine being able to translate unstructured CLI output into
structured JSON with a simple command! This is where the true
power of pyATS lies: in its parsers and models. The pyATS
Metaparser’s role is to unify those packages into one location
and one structure: a unified collection of parsers that works
across multiple parser packages and multiple communication
protocols and still returns a common structure. Metaparser
allows one script to work across multiple operating systems,
communication protocols, and parsing packages.
This chapter covers the following topics:
Vendor agnostic automation
pyATS learn
pyATS parse
Parsing at the CLI
Parsing with Python
Dictionary query
Differentials
Vendor Agnostic Automation
Cisco’s pyATS framework stands out as a robust network
automation and validation tool, not just for Cisco devices but
for a wide array of network equipment from various vendors.
This vendor-agnostic capability is largely attributed to its
integration with the Genie parsing libraries. Genie, as a part of
the pyATS ecosystem, provides a comprehensive set of parsers
that can interpret and transform raw command outputs from
different network devices into structured data formats, such as
JSON. The beauty of Genie lies in its extensive library that
supports multiple vendors, ensuring that network professionals
are not confined to a single brand or platform. By leveraging
Genie’s parsing libraries, pyATS offers a unified and consistent
approach to network automation, irrespective of the underlying
hardware or software vendor. This flexibility underscores the
framework’s commitment to providing scalable and adaptable
solutions in an ever-evolving networking landscape. Imagine a
parsing infrastructure that:
Promotes more easily maintainable platform/type/version-
agnostic testing scripts by deferring operational data parsing to
back-end libraries
Harmonizes parsing output among various interface
categories, such as CLI, XML, and YANG
Enforces only enough structure to give the script writer a
consistent look and feel across interface categories. The parser
helps the script writer create scripts that are consistent in terms
of both style and formatting.
Is future-proof, allowing a multitude of existing and yet-to-be-
imagined parsing implementations to coexist in the backend
Enables an elastic parsing ecosystem that is simple enough for
the novice but feature-rich enough for the power user
Leverages the strength of the modern Python 3 language
while still allowing bridging/reuse of Cisco’s vast store of legacy
Tcl-based parsers
While this is a Cisco Press book, and pyATS is provided by Cisco,
it is rare to find a homogeneous network made up of only Cisco
devices. This does not mean that you cannot use pyATS for your
network automation needs as it provides support for many non-
Cisco devices and even agnostic support for REST APIs from any
platform. By selecting an appropriate command output parser
for the supported operating system of choice you can easily
extend network automation to many vendors outside of Cisco. A
quick look at the available parser library will demonstrate this
vendor agnostic approach to pyATS (see Figure 5-1).
Figure 5-1 Operating Systems Supported by pyATS Parsers
pyATS learn
Provided your operating system is supported, pyATS offers
platform-agnostic learn models that run one or more show
commands whose output is combined and structured into JSON.
Regardless of operating system, a standardized structured
output is returned to the user. There are thirty-two available
learn models.
These high-level models provide abstractions for the commands
they run on various platforms to collect and structure the JSON
output. By clicking on any model (such as BGP, OSPF, or
interfaces), you can drill down into the details of the model,
configuration, and operation as illustrated in Figure 5-2.
Figure 5-2 Model Details
The model link takes you to a PDF explaining how the model
was built, including references to the related YANG models, the
structure hierarchy, as well as the model’s configuration and
operations structure (including the show commands used) as
illustrated in Figure 5-3 through Figure 5-6.
Figure 5-3 Interface Model Details – YANG References
Figure 5-4 Interface Model Details – Structure Hierarchy
Figure 5-5 Interface Model Details – Config Structure
Figure 5-6 Interface Model Details – Show Commands
The configuration link from the model details takes you to the
GitHub repository that contains the source code for the model
(see Figure 5-7). If you need to look at the actual model code,
you can review by operating system in this GitHub repository.
The GitHub repository can be found here:
https://pubhub.devnetcloud.com/media/genie-feature-
browser/docs/_models/interface.pdf.
Figure 5-7 Interface Model Configuration – Git Repository
Drilling down into the IOS XE interface.py file you can see the
actual code used, and even contribute to enhance the code, as
it’s open source, to transform the learn interface model into
structured JSON as illustrated in Figure 5-8.
Figure 5-8 Interface Model Configuration – IOS XE interface.py
Finally, the operation details take you to a different GitHub
repository, this time for the Genie operations model. Figure 5-9
illustrates the corresponding IOS XE Genie operations model for
interfaces. The link to this GitHub repository can be found
here:
https://github.com/CiscoTestAutomation/genielibs/blob/master/p
kgs/conf-pkg/src/genie/libs/conf/interface/iosxe/interface.py
Figure 5-9 Interface Model Configuration – Genie Ops for IOS XE
interface.py
In the realm of network automation and validation, pyATS has
emerged as a beacon of adaptability and efficiency. Its ’learn’
feature, which is platform agnostic, epitomizes the modern
approach to network operations. Rather than being tethered to
specific vendors or architectures, pyATS embraces a holistic
model, ensuring that engineers and network professionals can
seamlessly gather and analyze data across diverse network
environments. This platform-neutral stance not only future-
proofs network operations but also fosters an inclusive
ecosystem where innovation isn’t stifled by proprietary
constraints. As networks continue to evolve and diversify, tools
like pyATS, with their agnostic models, will be pivotal in
ensuring that automation and validation remain consistent,
efficient, and universally applicable. Figure 5-10 summarizes
the pyATS learn model parsing.
Figure 5-10 pyATS Learn Process
pyATS Parsers
The learn models are abstractions that do not require any
specific knowledge of underlying platform commands, but what
if you know the command you want to transform into
structured data? pyATS parsers provide this exact functionality.
There are thousands of parsers available: over 4,500 as of the
writing of this book!
Using the Genie documentation, you can filter by both operating
system and command to find the appropriate parser. This will
return exact matches and suggested matches as illustrated in
Figure 5-11.
Figure 5-11 pyATS parser filter/pyATS parser filter applied
You can click the results to see the structure of the JSON,
including which fields are mandatory or optional, that the
parser will return to the user (see Figure 5-12).
Figure 5-12 pyATS parser Details
Should you want to view the source code on GitHub you can
click the View Source button in the right margin as illustrated
in Figure 5-13. Figure 5-14 shows an example of the results.
Figure 5-13 pyATS parser View Source
Figure 5-14 pyATS parser Source
The parsed output from either learn models or command
parsers is foundational to network automation with pyATS. The
structured JSON means that, unlike unstructured raw command-
line output, Python can interact with it, transform it, and
perform tests against it.
automation tool is significantly amplified by its robust parsing
capabilities. These parsers, integral to the framework,
transform the often verbose and unstructured output from
network devices into a coherent, structured format, making
data interpretation and subsequent automation tasks more
efficient and error-free. Instead of manually sifting through
lines of device outputs, engineers can leverage pyATS parsers to
quickly extract the necessary information, streamlining their
workflows. As the complexity of networks grows and the
demand for swift, accurate automation escalates, the role of
pyATS parsers becomes even more critical. They stand as a
testament to the tool’s commitment to simplifying and
enhancing the network automation journey for professionals
across the industry.
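To make this concrete, here is a small, generic Python sketch of what working with parsed output looks like. The dictionary literal below imitates a fragment of structured parser output (the keys are illustrative, not taken from a specific pyATS parser schema); once the output is a dictionary, testing against it is plain Python:

```python
# Imitation of structured parser output (keys are illustrative)
parsed = {
    "interfaces": {
        "GigabitEthernet1": {"oper_status": "up", "ipv4": "10.10.20.48/24"},
        "GigabitEthernet3": {"oper_status": "down", "ipv4": None},
    }
}

# With structured data, a "test" is a simple dictionary lookup --
# no regex scraping of raw CLI output is required.
down_interfaces = [
    name
    for name, data in parsed["interfaces"].items()
    if data["oper_status"] != "up"
]
print(down_interfaces)  # ['GigabitEthernet3']
```

Contrast this three-line comprehension with hand-writing a regular expression against the raw show interfaces output in Example 5-2; the structure does the hard work.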
Parsing at the CLI
After installing pyATS and creating a valid testbed you can
immediately start using parsers from the command-line
interface (CLI). Using an integrated development environment
(IDE) that has an integrated Linux terminal, such as Visual
Studio Code (VS Code), network engineers can rapidly adopt
pyATS without writing a single line of code! Contrast this
approach, using pyATS parsers and models from the CLI,
against your current practices of gathering information from
the network. Typically network engineers:
Launch an SSH connection tool such as PuTTY
Setup session logging
Input the connection information or find a saved session
Connect and authenticate
Execute commands
Scrape screen or open output file
Analyze raw device output
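With pyATS, the whole workflow above collapses into one-time setup followed by single commands. As an illustration (the testbed filename and device name below are placeholders for your own):

```
# one-time environment setup
pip install "pyats[full]"

# then a single command per task, for example:
pyats learn interface --testbed-file testbed.yaml --devices csr1000v-1
pyats parse "show version" --testbed-file testbed.yaml --devices csr1000v-1
```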
Figure 5-15 illustrates how you can use the built-in terminal in
VS Code to create a Python virtual environment and then install
pyATS. Figure 5-16 demonstrates how you can then use the
pyATS CLI to learn about a device’s interfaces.
Figure 5-15 VS Code Terminal setup pyATS environment
Figure 5-16 VS Code Terminal with pyATS learn command
example
Contrast this with a simple one-time pyATS environment setup,
a valid testbed file, and one-line commands in your terminal
that automatically transform the state of the device into
structured data. This works at scale: you could learn all
devices inside a testbed or pass the name of a single device
you want to filter on. As you might notice in Figure 5-16,
the learn command creates three output files:
connection_<hostname>.txt
<model>_<operating system>_<hostname>_console.txt
<model>_<operating system>_<hostname>_ops.txt
The connection file contains information about the initial
connection. This is a good file to investigate if your command
fails or you have any connection issues. It contains important
information, such as the commands pyATS performed upon
initial connection. This testbed is configured for the default
initial connection commands, which you can see in the
connection log. These commands can be suppressed in the
testbed if you do not want to alter the device settings on initial
connection. Example 5-1 shows a successful connection file that
pyATS saves automatically.
Example 5-1 Example of a Successful Connection File
2023-08-20 14:03:06,027: %UNICON-INFO: +++ csr10
./connection_csr1000v-1.txt +++
2023-08-20 14:03:06,027: %UNICON-INFO: +++ Unico
(unicon.plugins.iosxe.csr1000v) +++
Welcome to the DevNet Always-On IOS XE Sandbox!
2023-08-20 14:03:06,681: %UNICON-INFO: +++ conne
131.226.217.149 -p 22 -o KexAlgorithms=+diffie-h
HostKeyAlgorithms=+ssh-rsa, id: 139886857701408
2023-08-20 14:03:06,681: %UNICON-INFO: connectio
(
[email protected]) Password:
csr1000v-1#
2023-08-20 14:03:07,074: %UNICON-INFO: +++ initi
2023-08-20 14:03:07,140: %UNICON-INFO: +++ csr10
command 'term length 0' +++
term length 0
csr1000v-1#
2023-08-20 14:03:07,512: %UNICON-INFO: +++ csr10
command 'term width 0' +++
term width 0
csr1000v-1#
2023-08-20 14:03:07,787: %UNICON-INFO: +++ csr10
command 'show version' +++
show version
Cisco IOS XE Software, Version 16.09.03
Cisco IOS Software [Fuji], Virtual XE Software (
j
Version 16.9.3, RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupp
Copyright (c) 1986-2019 by Cisco Systems, Inc.
Compiled Wed 20-Mar-19 07:56 by mcpre
The console output is self-descriptive: it is the raw console
output of the session (see Example 5-2). This is similar to PuTTY
session logging.
Example 5-2 Example of Console Output
+++ csr1000v-1 with via 'cli': executing command
show vrf
Name Default RD
AAA <not set>
CISCO <not set>
CISCO2 <not set>
default1 <not set>
csr1000v-1#
+++ csr1000v-1 with via 'cli': executing command
show interfaces
GigabitEthernet1 is up, line protocol is up
Hardware is CSR vNIC, address is 0050.56bf.937
Description: MANAGEMENT INTERFACE - DON'T TOUC
Internet address is 10.10.20.48/24
csr1000v-1#
+++ csr1000v-1 with via 'cli': executing command
show interfaces accounting
GigabitEthernet1 MANAGEMENT INTERFACE - DON'T TO
Protocol Pkts In Chars In
Other 80 4800
IP 680109 102362922
ARP 80 4800
GigabitEthernet2 Wizkid wuz here
Protocol Pkts In Chars In
Other 0 0
IP 98 8362
ARP 0 0
Interface GigabitEthernet3 is disabled
csr1000v-1#
+++ csr1000v-1 with via 'cli': executing command
show ip interface
GigabitEthernet1 is up, line protocol is up
Internet address is 10.10.20.48/24
Broadcast address is 255.255.255.255
csr1000v-1#
+++ csr1000v-1 with via 'cli': executing command
show ipv6 interface
csr1000v-1#
Could not learn <class
'genie.libs.parser.iosxe.show_interface.ShowIpv6
Show Command: show ipv6 interface
Parser Output is empty
+===============================================
================================================
| Commands for learning feature 'Interface'
|
+===============================================
================================================
| - Parsed commands
|
|-----------------------------------------------
------------------------------------------------
| cmd: <class 'genie.libs.parser.iosxe.show_vr
{'vrf':''}
| cmd: <class 'genie.libs.parser.iosxe.show_in
arguments: {'interface':''}
| cmd: <class
'genie.libs.parser.iosxe.show_interface.ShowInte
{'interface':''}
| cmd: <class 'genie.libs.parser.iosxe.show_in
arguments: {'interface':''}
|===============================================
================================================
| - Commands with empty output
|
|-----------------------------------------------
------------------------------------------------
| cmd: <class 'genie.libs.parser.iosxe.show_in
arguments: {'interface':''}
|===============================================
|
================================================
As you can see, pyATS models capture the output of several
commands related, in this example, to interfaces. This alone is
already more efficient and valuable than a single session
running these commands and capturing them manually.
Imagine gathering this information from an entire topology; it’s
rare to need data from a single device in the network. Next,
Example 5-3 shows the structured data that pyATS assembles
from the raw output in Example 5-2.
Example 5-3 Example of ops output
{
"_exclude": [
"in_discards",
"in_octets",
"in_pkts",
"last_clear",
"out_octets",
"out_pkts",
"in_rate",
"out_rate",
"in_errors",
"in_crc_errors",
"in_rate_pkts",
"out_rate_pkts",
"in_broadcast_pkts",
"out_broadcast_pkts",
"in_multicast_pkts",
"out_multicast_pkts",
"in_unicast_pkts",
"out_unicast_pkts",
"last_change",
"mac_address",
"phys_address",
"((t|T)unnel.*)",
"(Null.*)",
"chars_out",
"chars_in",
"pkts_out",
"pkts_in",
"mgmt0"
],
"attributes": null,
"commands": null,
"connections": null,
"context_manager": {},
"info": {
"GigabitEthernet1": {
"accounting": {
"arp": {
"chars_in": 4800,
"chars_out": 1860,
"pkts_in": 80,
"pkts_out": 31
},
"ip": {
"chars_in": 102362922,
"chars_out": 135048968,
"pkts_in": 680109,
"pkts_out": 587982
},
"other": {
"chars_in": 4800,
"chars_out": 1860,
"pkts_in": 80,
"pkts_out": 31
}
},
"auto_negotiate": true,
"bandwidth": 1000000,
"counters": {
"in_broadcast_pkts": 0,
"in_crc_errors": 0,
"in_errors": 0,
"in_mac_pause_frames": 0,
"in_multicast_pkts": 0,
"in_octets": 102366433,
"in_pkts": 680174,
"last_clear": "never",
"out_errors": 0,
"out_mac_pause_frames": 0,
"out_octets": 135051811,
"out_pkts": 588020,
"rate": {
"in_rate": 1000,
"in_rate_pkts": 1,
"load_interval": 300,
"out_rate": 1000,
"out_rate_pkts": 1
}
},
"delay": 10,
"description": "MANAGEMENT INTERFACE - DON
"duplex_mode": "full",
"enabled": true,
"encapsulation": {
"encapsulation": "arpa"
},
"flow_control": {
"receive": false,
"send": false
},
"ipv4": {
"10.10.20.48/24": {
"ip": "10.10.20.48",
"prefix_length": "24",
"secondary": false
}
},
"mac_address": "0050.56bf.9379",
"mtu": 1500,
"oper_status": "up",
"phys_address": "0050.56bf.9379",
"port_channel": {
"port_channel_member": false
},
"port_speed": "1000mbps",
"switchport_enable": false,
"type": "CSR vNIC"
},
"Loopback500": {...
},
"Nve1": {...
},
"VirtualPortGroup0": {...
},
"raw_data": false
}
Inside of VS Code, this .txt file’s natural JSON structure allows
for interactivity such as collapsing or expanding individual
interfaces. The information most engineers want from the
ops.txt file is nested inside the .info parent key. This is
important when you move on to using these models
Pythonically.
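When you do move into Python, everything under .info is an ordinary nested dictionary. As a minimal sketch, the fragment below is hand-copied from the ops output in Example 5-3 rather than produced by a live learn run:

```python
# A small fragment of the learned Interface model's .info dictionary,
# copied from the ops output shown in Example 5-3.
interface_info = {
    "GigabitEthernet1": {
        "oper_status": "up",
        "ipv4": {
            "10.10.20.48/24": {
                "ip": "10.10.20.48",
                "prefix_length": "24",
                "secondary": False,
            }
        },
        "mtu": 1500,
    }
}

# Once you have the .info dictionary, it is just a nested Python dict:
gi1 = interface_info["GigabitEthernet1"]
print(gi1["oper_status"])            # -> up
for prefix, details in gi1["ipv4"].items():
    print(prefix, details["ip"])     # -> 10.10.20.48/24 10.10.20.48
```

On a live device, the same dictionary would come from the learned model itself, and the access pattern would be identical.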
For help with pyATS learn, you can issue the following
command at the CLI:
$ pyats learn --help
Example 5-4 shows the results of issuing this command.
Example 5-4 pyATS learn console help
(pyats_parsing) johncapobianco@Desktop:~/test_dr
help
Usage:
pyats learn [commands] [options]
Example
-------
pyats learn ospf --testbed-file /path/to/testb
pyats learn ospf --testbed-file /path/to/testb
features_snapshots/ --devices "nxos-osv-1"
pyats learn ospf config interface bgp platform
/path/to/testbed.yaml --output features_snapshot
Description:
Learn device feature and parse into Python dat
List of available features: https://pubhub
feature-browser/docs/#/models
Learn Options:
ops List of Feature to learn
can instead be
provided to learn all fe
--testbed-file TESTBED_FILE
specify testbed_file yam
--devices [DEVICES ...]
"DEVICE_1 DEVICE_2", spa
will learn on all
devices (Optional)
--output OUTPUT Which directory to store
current directory
(Optional)
--single-process Learn one device at the
(Optional)
--via [VIA ...] List of connection to us
--archive-dir Directory to store a .zi
--learn-hostname Learn the device hostnam
--learn-os Learn the device OS duri
General Options:
-h, --help Show help
-v, --verbose Give more output, additi
-q, --quiet Give less output, additi
to WARNING, ERROR,
and CRITICAL logging lev
If you want to parse a specific command against your testbed
instead of learning from the pyATS models at the CLI, simply
run the parse command followed by the supported show
command and your testbed file, as demonstrated in Figure 5-17.
Figure 5-17 VS Code Terminal with pyATS parse command
example
Unlike learn, pyATS parse immediately returns the JSON
representation of the parsed show command output. No files
are created; pyATS simply prints the JSON to the screen.
Example 5-5 shows the console help for the pyats parse command.
Example 5-5 pyATS parse console help
(pyats_parsing) johncapobianco@Desktop:~/test_dr
help
Usage:
pyats parse [commands] [options]
Example
-------
pyats parse "show interfaces" --testbed-file /
uut
pyats parse "show interfaces" --testbed-file /
uut --output my_parsed_output/
pyats parse "show interfaces" "show version" -
/path/to/testbed.yaml --devices helper
Description:
Parse CLI commands into Pythonic datastructure
Parse Options:
COMMANDS Show command(s) to parse
parse all commands
--testbed-file TESTBED_FILE
specify testbed_file yam
--devices [DEVICES ...]
Devices to issue command
--output OUTPUT Directory to store outpu
prints parsed JSON
output to screen. (Optio
--via [VIA ...] List of connection to us
--fuzzy Enable fuzzy matching fo
--raw Store device output with
--timeout TIMEOUT Devices execution timeou
--developer Parser coloured develope
--archive-dir Directory to store a .zi
--learn-hostname Learn the device hostnam
--learn-os Learn the device OS duri
--rest run rest commands
General Options:
-h, --help Show help
-v, --verbose Give more output, additi
-q, --quiet Give less output, additi
to WARNING, ERROR,
and CRITICAL logging lev
As you can see from the help, you can add the --output flag and
specify a directory in which to store the parsed output.
The traditional methods of network operations often involve
manual configurations, tedious verifications, and error-prone
troubleshooting. These methods not only consume a significant
amount of time but also introduce potential risks to network
stability.
With pyATS’s parse feature, network engineers can effortlessly
convert raw CLI outputs into structured data. This structured
data can then be easily analyzed, compared, and integrated into
other systems or tools. An evolution is taking place: a move away
from manually sifting through pages of CLI output to find that
one piece of information. With pyATS, it’s all about efficiency
and precision.
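To make that concrete, here is a minimal, hedged illustration. The dictionary below is hand-built to mimic the general shape of a parsed "show ip interface brief" output (the key names are assumptions, not a live capture); once the output is structured, a one-line lookup replaces the manual sifting:

```python
# Hand-built stand-in for parsed "show ip interface brief" output;
# the key names mirror typical parser schemas but are assumptions here.
parsed = {
    "interface": {
        "GigabitEthernet1": {"ip_address": "10.10.20.48",
                             "status": "up", "protocol": "up"},
        "GigabitEthernet2": {"ip_address": "unassigned",
                             "status": "administratively down",
                             "protocol": "down"},
    }
}

# Finding every interface that is not up takes one comprehension,
# not a visual scan of CLI output:
down = [name for name, data in parsed["interface"].items()
        if data["status"] != "up"]
print(down)  # -> ['GigabitEthernet2']
```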
The learn feature, on the other hand, takes automation a step
further. Instead of just parsing outputs, it understands the
network’s state and can provide insights into various
operational aspects. Whether it’s understanding interface
states, routing tables, or device health, the learn functionality
offers a comprehensive view without the manual labor.
In essence, pyATS’s parse and learn capabilities represent a
paradigm shift in how we approach network operations. By
automating these processes, we not only save time and reduce
errors but also free up network engineers to focus on more
strategic tasks, driving innovation and ensuring network
resilience.
Parsing with Python
Both the learn and parse features are available not only at the
terminal CLI; they are also available as Python functions we can
use to easily transform raw output to JSON inside a script or
pyATS job. It is essential to recognize that these capabilities are
not confined to the CLI alone. In fact, pyATS offers a rich Python
library that allows for the integration of learn and parse
directly into Python scripts and applications. This means that
network professionals can seamlessly incorporate these
functionalities into more complex automation workflows,
custom applications, or even integrate them with other Python-
based tools. By leveraging pyATS within Python, engineers can
achieve a higher degree of flexibility, programmability, and
scalability in their network operations. Example 5-6 shows how
you can drop into an interactive pyATS Python shell directly
from BASH.
Example 5-6 pyATS shell
bash$ pyats shell --testbed-file testbed.yaml
>>> testbed.devices['csr1000v-1'].connect()
>>> output = testbed.devices['csr1000v-1'].parse('show version')
>>> print(output)
Parsers can also be included inside pyATS jobs (much like in
Example 5-6) and are the basis for test-driven automation; keys
and values make up the source of data tested against. Example
5-7 demonstrates how a parser can be incorporated into a test
with the .parse() function.
Example 5-7 pyATS script sample showing parsers
@aetest.test
def capture_show_interfaces(self):
    self.parsed_interfaces = self.device.parse("show interfaces")
    with open(f'{self.device.alias}_Show_Interfaces.json', 'w') as f:
        f.write(json.dumps(self.parsed_interfaces, indent=4,
                sort_keys=True))
The transformative potential of pyATS is not just limited to its
standalone capabilities but is profoundly amplified when
integrated into Python scripts. By harnessing the learn and
parse functionalities within Python, network engineers can
craft tailored solutions, automate intricate workflows, and
ensure consistent network insights. Furthermore, with pyATS
jobs, there’s an added layer of automation and scheduling,
allowing for periodic network checks, validations, and
reporting. This synergy between Python and pyATS represents a
new frontier in network operations, where precision meets
automation, and proactive network management becomes the
norm rather than the exception. Now that you have captured
network state as JSON there are even more powerful
capabilities we can scaffold on top of.
A concrete example of how this normalization of data can
benefit the network engineer might be an audit of all IP
addresses in use across an enterprise’s routers. This could take
hours or days to manually document and analyze. Using pyATS
learn modules and a simple testbed containing the target routers,
the process is not only faster but easier; by following up the
structured JSON with a built-in Python tool such as the
ipaddress module
(https://docs.python.org/3/howto/ipaddress.html), the entire
audit could be automated.
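As a sketch of that audit, the addresses below are a hand-built stand-in for what pyATS learn would return; on a real network you would harvest them from each device's learned Interface model. The standard-library ipaddress module then groups every address by the network it belongs to:

```python
import ipaddress

# Stand-in for addresses harvested from pyATS learn output across routers;
# in practice these would come from each device's Interface model .info key.
harvested = ["10.10.20.48/24", "192.168.1.1/32", "10.10.20.49/24"]

interfaces = [ipaddress.ip_interface(addr) for addr in harvested]

# Group addresses by the network they belong to - the heart of the audit.
networks = {}
for iface in interfaces:
    networks.setdefault(iface.network, []).append(iface.ip)

for network, ips in sorted(networks.items(), key=lambda kv: str(kv[0])):
    print(network, "->", [str(ip) for ip in ips])
```

From here, spotting overlapping subnets or duplicate addresses across the enterprise becomes a dictionary check rather than a documentation exercise.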
Dictionary Query
In the vast realm of programming, dictionaries stand out as one
of the most versatile and powerful data structures, especially in
languages like Python. At their core, dictionaries are collections
of key-value pairs, offering a unique way to store and organize
data. However, as these structures grow in complexity and size,
efficiently retrieving specific information becomes a challenge.
This is where dictionary querying comes into play. Dictionary
querying allows developers to sift through these intricate
structures, pinpointing the exact data they need with precision
and speed. Filtering can be done based on specific criteria,
searching for nested keys, or trying to extract a subset of the
dictionary. Querying techniques can simplify these tasks and
enhance the efficiency of data retrieval. Dictionary query (Dq)
is a Python library for querying Python dictionaries with a very
intuitive syntax, as demonstrated in Example 5-8.
Example 5-8 Dq Examples with pyATS Parsed Data
# Find all the bgp neighbors which are Establish
>>> output = device.parse('show bgp neighbors')
>>> output.q.contains('Established').get_values(
# effectively, device.parse() API returns a modi
# enabling you to make quick accesses to the Dq
# explicit. This is equivalent to:
>>> from genie.utils import Dq
>>> output = device.parse('show bgp neighbors')
>>> Dq(output).contains('Established').get_value
['10.2.2.2']
Find all the routes which are connected.
output = device.parse('show ip route')
# Find all the routes which are Connected
output.q.contains('connected').get_values('route
['10.0.1.0/24', '10.1.1.1/32', '10.11.11.11/32',
Find all the ospf routes.
# Find all the routes for Ospf
output.q.contains('ospf').get_values('routes')
['10.0.2.0/24', '10.2.2.2/32']
Typically, to do the same queries with plain Python, you would
need for loops, if statements, and so on. Dq simplifies the whole
process! Dq also supports regex (regular expressions), as
demonstrated in Example 5-9.
Example 5-9 Dq RegEx Example
# Check if the module in line card #4 contains a
# and its value is ok or active
output.q.contains('lc').contains('4').contains_key_value('status',
'ok|active', value_regex=True)
{'lc': {'4': {'NX-OSv Ethernet Module': {'status': 'ok'}}}}
Dq supports the following chained actions:
contains
not_contains
get_values
contains_key_value
not_contains_key_value
value_operator
sum_value_operator
count
raw
reconstruct
query_validator
str_to_dq_query
Timeout
TempResult
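To appreciate what those chained actions replace, here is the kind of hand-rolled traversal you would otherwise write. This is a hedged, plain-Python sketch: the data shape loosely imitates a parsed routing table rather than any exact parser schema, and the helper function is invented for illustration:

```python
# Hand-built stand-in loosely shaped like parsed "show ip route" output.
parsed = {
    "vrf": {"default": {"address_family": {"ipv4": {"routes": {
        "10.0.1.0/24": {"route": "10.0.1.0/24",
                        "source_protocol": "connected"},
        "10.0.2.0/24": {"route": "10.0.2.0/24",
                        "source_protocol": "ospf"},
        "10.2.2.2/32": {"route": "10.2.2.2/32",
                        "source_protocol": "ospf"},
    }}}}}
}

def find_routes(data, protocol):
    """Recursively collect 'route' values wherever protocol appears."""
    found = []
    if isinstance(data, dict):
        if data.get("source_protocol") == protocol and "route" in data:
            found.append(data["route"])
        for value in data.values():
            found.extend(find_routes(value, protocol))
    return found

# The loops and ifs that Dq's .contains().get_values() chain hides:
print(find_routes(parsed, "ospf"))
```

A single `output.q.contains('ospf').get_values('routes')` collapses all of this recursion into one readable line.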
In the evolving landscape of network automation, the ability to
swiftly and accurately extract relevant data is paramount. This
is where Dq (Dictionary Query) shines. Within the framework
of pyATS, Dq is a powerful tool for querying the structured JSON
collected by the parsers and models. By allowing engineers to
delve deep into structured data returned from network devices,
Dq streamlines the process of data extraction and
manipulation. Instead of wading through vast amounts of
information, engineers can pinpoint the exact data they need,
whether it’s a specific interface status, routing information, or
device health metrics. This precision, combined with the
automation capabilities of pyATS, ensures that network
operations are not only more efficient but also more accurate.
In essence, Dq bridges the gap between raw data and actionable
insights, propelling network automation to new heights.
Differentials
Differentials, or Diff, is an extremely powerful pyATS library
that can be used to compare parsed JSON data and return
Linux-like +/- differentials against our dictionaries. In the
dynamic world of network automation, staying ahead means
not just capturing data, but understanding its evolution. Enter
the Diff library from pyATS—a tool that’s revolutionizing the
way we perceive changes in our network environments. No
longer are network engineers confined to manually comparing
vast datasets or configurations. With Diff, they can effortlessly
spot differences, track alterations, and monitor transitions in
their network data. This isn’t just about identifying changes; it’s
about understanding the story behind them. Whether it’s
tracking configuration drifts, validating network changes post-
deployment, or ensuring compliance, the Diff library improves
visibility into network changes.
Example 5-10 demonstrates some basic, abstract examples of
Diff.
Example 5-10 Differential Abstract Examples
from genie.utils.diff import Diff
a = {'a':5, 'b':7, 'c':{'ca':8, 'cb':9}}
b = {'a':5, 'f':7, 'c':{'ca':8, 'cb':9}}
dd = Diff(a,b)
dd.findDiff()
print(dd)
+f: 7
-b: 7
# It also supports an exclude key, for the keys
from genie.utils.diff import Diff
a = {'a':1, 'b':2, 'c':{'ca':9}}
c = {'a':2, 'c':3, 'd':7, 'c':{'ca':{'d':9}}}
dd = Diff(a, c, exclude=['d'])
dd.findDiff()
print(dd)
-b: 2
+a: 2
-a: 1
c:
+ ca:
+ d: 9
- ca: 9
# You can also see only which ones were added
dd = Diff(a, c, mode='add')
dd.findDiff()
print(dd)
+d: 7
# Or Removed
dd = Diff(a, c, mode='remove')
dd.findDiff()
print(dd)
-b: 2
# Or modified, which means it existed, but the value changed
dd = Diff(a, c, mode='modified')
dd.findDiff()
print(dd)
+a: 2
-a: 1
c:
+ ca:
+ d: 9
- ca: 9
# If you need a string representation of added items, you can do
a = {'a': 1, 'w': 5, 'p': {'q': {'a': 6}}}
c = {'b': 2, 'c': {'d': {'e': {'f': 2, 'g': 5}}}
dd = Diff(a, c)
dd.findDiff()
print(dd.diff_string('+'))
b 2
c
d
e
f 2
g 5
# Similarly, you can get a string for the removed items
dd = Diff(a, c)
dd.findDiff()
print(dd.diff_string('-'))
a 1
p
q
a 6
w 5
# To print unchanged entries in a list or tuple, use the verbose
# option like so
a = { 'key': {'value': [1, 2, 3, 4]}}
b = { 'key': {'value': [1, 3, 3, 4]}}
dd = Diff(a, b, verbose=True)
dd.findDiff()
print(dd)
key:
value:
index[0]: 1
- index[1]: 2
+ index[1]: 3
index[2]: 3
index[3]: 4
Example 5-11 is an end-to-end differential example in Python
where we first capture the network state, make an arbitrary
change (in this case, adding a loopback), then re-capture the new
state of the device, and finally perform a Diff.
Example 5-11 Differential Network Automation Example
import time
import difflib
import logging
from pyats import aetest
from genie.utils.diff import Diff

## Get Logger
log = logging.getLogger(__name__)

## AE TEST SETUP
class common_setup(aetest.CommonSetup):
    """Common Setup Section"""

    # Connect to testbed
    @aetest.subsection
    def connect_to_devices(self, testbed):
        testbed.connect()

    # Mark test case for loops in case there are multiple devices
    @aetest.subsection
    def loop_mark(self, testbed):
        aetest.loop.mark(Chat_With_Catalyst, device_name=testbed.devices)

class Chat_With_Catalyst(aetest.Testcase):
    """A sample differential"""

    # set individual device as current iteration's device
    @aetest.test
    def setup(self, testbed, device_name):
        self.device = testbed.devices[device_name]

    @aetest.test
    def capture_show_run(self):
        self.show_run = self.device.execute("show run")

    @aetest.test
    def capture_show_ip_interface(self):
        self.show_interfaces = self.device.parse("show ip interface")

    @aetest.test
    def capture_show_ip_interface_brief(self):
        self.show_ip_interface_brief = self.device.parse("show ip interface
brief")

    @aetest.test
    def capture_show_ip_route(self):
        self.show_ip_route = self.device.parse("show ip route")

    @aetest.test
    def make_change(self):
        self.device.configure('''interface loopback 100
                                 description "Test Loopback"
                                 ip address 192.168.100.100 255.255.255.255
                                 no shut''')

    @aetest.test
    def recapture_show_run(self):
        self.new_show_run = self.device.execute("show run")

    @aetest.test
    def recapture_show_ip_interface(self):
        self.new_show_interfaces = self.device.parse("show ip interface")

    @aetest.test
    def recapture_show_ip_interface_brief(self):
        self.new_show_ip_interface_brief = self.device.parse("show ip interface
brief")

    @aetest.test
    def recapture_show_ip_route(self):
        time.sleep(10)
        self.new_show_ip_route = self.device.parse("show ip route")

    @aetest.test
    def perform_show_run_diff(self):
        pre_change = self.show_run
        post_change = self.new_show_run
        diff = difflib.ndiff(pre_change.splitlines(), post_change.splitlines())
        show_run_diff_output = '\n'.join(line for line in diff if
            line.startswith('-') or line.startswith('+'))
        with open('Show_Run_Diff.txt', 'w') as f:
            f.write(show_run_diff_output)

    @aetest.test
    def perform_show_interface_diff(self):
        interface_pre_change = self.show_interfaces
        interface_post_change = self.new_show_interfaces
        interface_diff = Diff(interface_pre_change, interface_post_change)
        interface_diff.findDiff()
        with open('Show_Interfaces_Diff.txt', 'w') as f:
            f.write(str(interface_diff))

    @aetest.test
    def perform_show_ip_interface_brief_diff(self):
        ip_interface_brief_pre_change = self.show_ip_interface_brief
        ip_interface_brief_post_change = self.new_show_ip_interface_brief
        ip_interface_brief_diff = Diff(ip_interface_brief_pre_change,
            ip_interface_brief_post_change)
        ip_interface_brief_diff.findDiff()
        with open('Show_IP_Interface_Brief_Diff.txt', 'w') as f:
            f.write(str(ip_interface_brief_diff))

    @aetest.test
    def perform_show_ip_route_diff(self):
        ip_route_pre_change = self.show_ip_route
        ip_route_post_change = self.new_show_ip_route
        ip_route_diff = Diff(ip_route_pre_change, ip_route_post_change)
        ip_route_diff.findDiff()
        with open('Show_IP_Route_Diff.txt', 'w') as f:
            f.write(str(ip_route_diff))

class common_cleanup(aetest.CommonCleanup):
    """Common Cleanup Section"""

    @aetest.subsection
    def disconnect_from_devices(self, testbed):
        testbed.disconnect()
As demonstrated in Example 5-11, Diff can be used for change
validation, and point-in-time snapshots can be compared with
Diff. There is unlimited potential when scaffolding tools and
libraries like Dq and Diff on top of pyATS learn models and
parsed output.
Summary
In this chapter, we embarked on an enlightening journey
through the expansive area of network automation. We began
by underscoring the key role of vendor-agnostic automation,
highlighting its promise of flexibility and adaptability in a
diverse networking landscape. Our exploration then led us to
the dynamic capabilities of pyATS, with the ’learn’ feature
offering an automated lens into the network’s state and the
’parse’ functionality transforming raw CLI outputs into
structured, actionable data. This versatility was further
showcased as we discussed direct parsing from the command-
line interface and its seamless integration within Python
scripts. The chapter also introduced the concept of Dictionary
Query (Dq), a tool that simplifies the extraction of specific
information from intricate data structures. We concluded our
exploration by unveiling the transformative potential of the Diff
library in pyATS, emphasizing its ability to track and
understand nuanced changes in network data. Together, these
insights paint a vivid picture of a future where network
automation is not just about management but improved
comprehension.
Chapter 6. Test-Driven Development
In the realm of network engineering and operations, the
deficiency of a unified methodology for creating, examining,
and preserving solutions is quite palpable. Each enterprise
tends to navigate its unique path concerning change
management and infrastructure solution development. This
fragmented approach is most noticeable when organizations
venture into the automation domain, which predominantly
hinges on standardized configurations and templates.
Traditionally, networks are conceived, erected, and then
scrutinized through manual processes. This scrutiny typically
comes at the tail end of the delivery cycle, set against the
backdrop of large and intricate topologies. The toolkit available
to network engineers for this purpose has been rather
primitive, mostly confined to outdated tools like syslog and
SNMP. Moreover, the prevailing ethos has been more reactive
than proactive when it comes to incident response.
However, amidst this scenario, there’s a silver lining. The
domain of software development presents a methodical
approach known as test-driven development (TDD) that could be
a game changer for network engineers. When synergized with
Cisco’s pyATS, TDD lays down a structured pathway, inclusive of
guidelines, scientific methodologies, and the necessary
instrumentation for architecting robust, modern automated
solutions.
This venture into a more standardized approach becomes even
more pertinent in the context of larger networks where an
entire industry thrives on monitoring and management tooling.
Herein, the proof-of-concept (POC) or proof-of-value (POV)
scenarios have been instrumental. Cisco has tailored its own
Center of Proof of Concept (CPOC) for this endeavor, fostering
extended engagements with clientele. Notably, entities with
substantial stakes, like banks, often opt for pre-stage
environments owing to the hefty costs associated with potential
failures. These POCs are envisioned to have well-defined
outcomes, and given the high stakes, scripts are meticulously
crafted to run tests. However, the caveat has been the high-level
skill requirement and the extensive timeframe, stretching over
weeks or even months, necessitated for planning and logistics.
The tools of the trade have traditionally been Regex, BASH, and
similar technologies.
Yet, the landscape is shifting with the advent of pyATS, which
has emerged as a catalyst in simplifying this intricate process.
The utilization of pyATS substantially diminishes the hurdles,
rendering the process less complex, more cost-effective, and
quicker. Moreover, it opens the doors to a wider audience,
democratizing the once highly specialized work. This transition
not only accelerates the pace at which network solutions are
tested and deployed but also elevates the accessibility and
efficiency of network management practices, paving the way
for a more proactive and standardized approach to network
engineering in contemporary enterprise environments.
This chapter covers the following topics:
Introduction to TDD
Applying TDD to network automation
Introduction to pyATS
The pyATS framework
Introduction to Test-Driven Development
Test-driven development (TDD), at its core, is a process that
converts requirements into test cases. Kent Beck, creator of the
“extreme programming” approach to software development, is
credited with “rediscovering” the test-driven development
technique. One of the seventeen original signatories on the
revolutionary “Manifesto for Agile Software Development”
(agilemanifesto.org), Beck stressed the importance of simplicity
over complexity. It is important to understand the twelve
principles behind the Agile Manifesto, in which you can see
where the test-driven approach emerged:
Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.
Welcome changing requirements, even late in
development. Agile processes harness change for
the customer’s competitive advantage.
Deliver working software frequently, from a
couple of weeks to a couple of months, with a
preference to the shorter timescale.
Business people and developers must work
together daily throughout the project.
Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.
The most efficient and effective method of
conveying information to and within a development
team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development.
The sponsors, developers, and users should be able
to maintain a constant pace indefinitely.
Continuous attention to technical excellence
and good design enhances agility.
Simplicity--the art of maximizing the amount
of work not done--is essential.
The best architectures, requirements, and designs
emerge from self-organizing teams.
At regular intervals, the team reflects on how
to become more effective, then tunes and adjusts
its behavior accordingly.
Along with other fundamental principles that define Agile,
“simplicity—the art of maximizing the amount of work not done
—is essential” is foundational to the test-driven development
approach to network automation. Another important figure in
test-driven development is “Uncle Bob”; Robert C. Martin
(cleancoder.com). Robert is a programmer, speaker, and teacher
who contributed the concise set of rules that govern TDD. There
are only three rules to follow, and the rules are things you are
not permitted to do as a TDD developer1:
1. You are not allowed to write any production code unless it is
to make a failing unit test pass.
2. You are not allowed to write any more of a unit test than is
sufficient to fail (and compilation failures are failures)
3. You are not allowed to write any more production code than
is sufficient to pass the one failing unit test.
These rules directly relate to the Agile Manifesto’s principle of
simplicity and will guide your journey into the world of test-
driven automation with pyATS. Along with the rules there is an
equally simple three-step approach to development:
1. Write a failing test
2. Make the test pass
3. Refactor
First, you will create a new test. Run all tests, including the new
test, which should fail. This might seem counterintuitive, but
we want to be as scientific as possible, meaning we need to
validate the failed state of the test first. Next, write the simplest
code possible that will make the new test pass. All tests should
now pass. Refactor as needed, improving the code and
repeating the testing cycle after each refactor to ensure
refactoring quality. Next, add a new test and repeat the cycle
until all requirements and use cases are covered by tests.
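The cycle is easiest to see in miniature. The example below uses plain Python unittest, and the function under test, normalize_hostname, is invented purely for illustration. It shows the state after step 2: the test was written first, failed while the function did not exist, and now passes with the simplest possible implementation:

```python
import unittest

# Hypothetical function under test, invented purely for illustration.
def normalize_hostname(name):
    # Step 2: the simplest code that makes the failing test pass;
    # refactoring (step 3) happens only after the test is green.
    return name.strip().lower()

class TestNormalizeHostname(unittest.TestCase):
    # Step 1: this test was written first and failed before
    # normalize_hostname existed.
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_hostname("  CSR1000v-1  "),
                         "csr1000v-1")

# Run the suite programmatically so the cycle can be repeated in a script.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeHostname)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each new requirement would add another small test to the class, and each refactor would re-run the whole suite.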
Some best practices to keep in mind as you adopt this new
approach:
Keep the unit tests small: For example, do not write a test to
check whether an interface is healthy based on all the counters;
write an interface CRC error test, a half-duplex test, and tests
for individual counters
The general test structure:
Setup
Execution
Validation
Cleanup
Limit, or eliminate, dependencies between tests
Complex is fine; complicated is not
Avoid “all-knowing” tests (see: keep the unit tests small)
Applying Test-Driven Development to Network
Automation
Test-driven development aligns perfectly with network
automation with Cisco pyATS. Network engineers already
gather and use business requirements to drive their network
designs and implementations; TDD simply takes it a step further
by deriving unit tests from these business requirements. There is
often a direct mapping of these requirements to an individual
test case. Easily applied in practice, both network state, as
determined by the output of show commands, and the
configuration itself can be tested with pyATS. Intent, as
expressed in a source of truth, can also be tested for and
enforced with configuration automation using pyATS.
Ultimately, Continuous Integration and Continuous Delivery
(CI/CD) pipelines can be established that incorporate our test
cases in a fully hands-off automation solution with tools such as
pyATS and Cisco XPRESSO. XPRESSO, covered in depth in
Chapter 20, “XPRESSO,” is a web-based pyATS GUI dashboard
that allows for scheduling and orchestrating pyATS jobs.
Test-Driven Development (TDD) indeed seamlessly dovetails
with network automation, especially when augmented by tools
like Cisco’s pyATS. The practice propels the traditional network
engineering approach—driven by business requirements—a
notch higher by deriving unit tests from these very
requirements. Oftentimes, there’s a straightforward
correspondence between these requirements and individual
test cases. In practical application, pyATS facilitates testing both
the network state, as assessed by the output of show commands,
and the configuration itself. Furthermore, the tool is adept at
evaluating and enforcing intent as delineated in a source of
truth, through configuration automation. This sets the stage for
Continuous Integration and Continuous Delivery (CI/CD)
pipelines, making it feasible to embed our test cases within a
fully hands-off automation solution, further enriched with tools
like Cisco XPRESSO.
Delving into a specific example, let’s unravel a TDD workflow
utilizing pyATS to validate that the count of Cisco Discovery
Protocol (CDP) neighbors is indeed four.
1. Requirement Gathering: Business Requirement: Ensure that
the network device has exactly four CDP neighbors.
2. Test Case Derivation: Derive a test case to confirm that the
count of CDP neighbors is four.
3. Initial Test Execution: Run the test using pyATS against the
current network configuration. At this juncture, the test is likely
to fail since the requisite configurations have not been
implemented yet.
4. Network Configuration: Configure the network device to
meet the business requirement of having four CDP neighbors.
5. Test Execution: Execute the test again using pyATS. If the
configuration is accurate, the test should pass. If not, it will fail,
indicating that further configuration adjustments are necessary.
6. Refinement (if necessary): If the test fails, refine the
network configuration and re-run the test until it passes,
thereby confirming that the network configuration aligns with
the business requirement.
7. Integration: Integrate this test into a CI/CD pipeline using
tools like Cisco XPRESSO for continuous monitoring and
validation.
8. Automation: Automate the entire process from configuration
to testing, ensuring that any future modifications adhere to the
business requirement of having exactly four CDP neighbors.
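Step 2 can be made concrete with a short sketch. The parsed structure below is a synthetic, simplified stand-in; the real shape of device.parse("show cdp neighbors") output depends on the platform's parser schema, and the device object is assumed to come from a connected testbed.

```python
# Hedged sketch of the CDP neighbor-count test from step 2.
# "sample" imitates (in simplified form) parsed "show cdp neighbors"
# output; on a live device it would come from:
#   parsed = device.parse("show cdp neighbors")

def count_cdp_neighbors(parsed_output):
    """Count neighbor entries in a parsed 'show cdp neighbors' dict."""
    return len(parsed_output.get("cdp", {}).get("index", {}))

sample = {"cdp": {"index": {
    1: {"device_id": "sw1"}, 2: {"device_id": "sw2"},
    3: {"device_id": "sw3"}, 4: {"device_id": "sw4"},
}}}

# the business requirement: exactly four CDP neighbors
assert count_cdp_neighbors(sample) == 4
```

In the TDD workflow, this assertion fails on first run (step 3) and passes only once the configuration from step 4 is in place.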
This workflow exemplifies how TDD, fortified with pyATS,
provides a structured, iterative, and automated methodology
for validating network configurations against business
requirements. Through continuous testing and automation,
network engineers can assure that the network’s state
perpetually aligns with organizational objectives, thereby
fostering a more robust, reliable, and efficient network
infrastructure.
Test-Driven Development (TDD) principles find a particularly nuanced application in brownfield network environments, where integration with existing architectures is a common scenario. Unlike greenfield settings, where network engineers have the liberty to start afresh, brownfield scenarios demand a meticulous integration approach to ensure that existing business and operational requisites are not compromised.
Adherence to Three Laws in Network Engineering Context:
The three laws of TDD, originally crafted for software
development, find a broader spectrum of application in
network engineering, extending beyond unit tests to system
tests. The essence of these laws remains intact—to ensure that
every piece of code or configuration is validated through tests.
Brownfield Environments: In brownfield settings, a growing
catalog of unit and system tests becomes imperative. These
tests, mirroring the existing active business and operational
requisites, form a baseline that must always be met. While
integrating new configurations or systems, it’s crucial that these
baseline tests pass, affirming that the integration has not
disrupted the existing setup. This scenario does present edge cases and exceptions to the first law of TDD (writing a failing test first), as there will be tests that should, and will, pass due to pre-existing configurations.
Pseudocode Example: Consider a brownfield scenario where the task is to ensure a specific VLAN configuration on a switch while ensuring that existing configurations are not disrupted. Here is a simplified TDD workflow in pseudocode:
Existing Tests: Run existing unit and system tests to ensure the current setup is sound.

def test_existing_setup():
    # existing baseline tests
    pass

New Test Case: Write a new test case to check the desired VLAN configuration.

def test_vlan_configuration():
    vlan_config = pyats_parse('show vlan')
    assert 'VLAN10' in vlan_config

Configure: Run the new test; it may fail if the VLAN isn't configured yet. Apply the required configuration.

def configure_vlan():
    pyats_configure('vlan 10')

Re-run Tests: Re-run all tests (existing and new) to ensure both the new and existing configurations are correct.

pyats run job test_existing_setup
pyats run job test_vlan_configuration
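To ground the pseudocode, the check itself might look like the following in real Python. The testbed file, device name, and the exact parsed structure are assumptions; consult the parser schema for your platform.

```python
# Hedged sketch mapping the VLAN pseudocode to Python. The "vlans"
# dict below mirrors the general shape of parsed "show vlan" output
# (keyed by VLAN ID); on a live device it would come from:
#   from pyats.topology import loader
#   testbed = loader.load("testbed.yaml")    # hypothetical file
#   device = testbed.devices["switch1"]      # hypothetical name
#   device.connect()
#   parsed = device.parse("show vlan")

def vlan_present(parsed_output, vlan_id, expected_name=None):
    """Return True if vlan_id exists, optionally matching its name."""
    vlan = parsed_output.get("vlans", {}).get(vlan_id)
    if vlan is None:
        return False
    return expected_name is None or vlan.get("name") == expected_name

sample = {"vlans": {"10": {"name": "USERS"}, "20": {"name": "VOICE"}}}
assert vlan_present(sample, "10")      # passes once VLAN 10 is configured
assert not vlan_present(sample, "30")  # an absent VLAN fails the check
```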
This TDD workflow exemplifies how both existing and new
requirements can be validated in a brownfield environment.
While the pseudocode is simplified, it gives a gist of how tests
for configuration and state can be structured and run to ensure
the robustness of network configurations amidst evolving
requirements. This practical approach, coupled with tools like
pyATS, fosters a disciplined testing culture, bridging the
traditional network engineering practices with modern,
automated, and test-driven methodologies.
Introduction to pyATS
Now that NetDevOps, test-driven development, and the Agile
methodology have been introduced, Cisco’s Python Automation
Test Solution (pyATS), will be explained. pyATS is made up of both a framework and libraries that can be used for automation, testing, network assurance, and much more. pyATS treats everything as an object, including network state and configurations, which are made available as structured data that can be acted upon programmatically, whether consuming output or delivering commands. In a way, pyATS acts as an API toolkit that abstracts the complexity of the underlying parsing and connectivity layers of network automation.
pyATS was originally developed for internal Cisco engineering
and was made available to the public for general use in 2017.
Cisco runs millions of internal tests monthly with pyATS, which is the de facto standard testing framework within Cisco. Holistically, pyATS can be considered an automation ecosystem made up of both the core framework and the software development kit (SDK), or libraries, that extend the core functionality. Multi-platform and multi-vendor, pyATS has become an integral part of CI/CD, connectivity, configuration, regression, scale, and overall solution testing for thousands of developers, both inside Cisco and around the world. As illustrated in Figure 6-1, the main components of pyATS include the core framework, or toolbox; the SDK and libraries, formerly known as Genie; and upper-layer business logic integrations. pyATS is an all-purpose, generic framework, while the libraries extend its capabilities and specialize in network device automation and validation.
Figure 6-1 Layers of pyATS
The pyATS Framework
The pyATS framework can be broken down into the main
components and supporting components. The main
components include:
AEtest test infrastructure
Easypy runtime environment
Testbed and topology information
Testbed and device cleaning
The supporting components of pyATS include:
Asynchronous library
Data structures
TCL integration
Logging
Result objects
Reporter
Utilities
Robot framework support
Manifest
AEtest
Automation Easy Testing (AEtest) is the pyATS standard test engineering automation harness: a simple foundation into which test cases translate for pyATS testing jobs. AEtest is implemented as aetest and was designed to be fully object-oriented. Those familiar with Python's unittest and pytest, which both inspired the architectural design of AEtest, should be able to adopt it quickly, as working with AEtest is intended to be a straightforward, Pythonic experience.
AEtest is included with the full pyATS installation (pip install pyats[full]) and is also available as a standalone package, which you can install with pip as follows: pip install pyats.aetest
Part of the simplicity of easy testing is the block-based approach to test section breakdowns:
Common Setup with subsections
Testcases with setup/tests/cleanup
Common Cleanup with subsections
Import aetest from pyATS and create the common setup where
you can establish connectivity with your testbed as
demonstrated in Example 6-1.
Example 6-1 Using AETest to connect to a device
from pyats import aetest

class CommonSetup(aetest.CommonSetup):
    @aetest.subsection
    def connect_to_device(self, testbed):
        # connect to testbed devices
        for device in testbed:
            device.connect()
Once connectivity has been established tests can be performed
against network state as demonstrated in Example 6-2.
Example 6-2 Parsing “show interfaces” and printing the structured output

class SimpleTestcase(aetest.Testcase):
    @aetest.test
    def print_interface(self, testbed):
        # parse and print interface state for each device
        for device in testbed:
            interface = device.parse("show interfaces")
            print(interface)
Finally, we tear down and gracefully disconnect from the
devices in the testbed as demonstrated in Example 6-3.
Example 6-3 Disconnecting gracefully from the device
class CommonCleanup(aetest.CommonCleanup):
    @aetest.subsection
    def disconnect_from_devices(self, testbed):
        # disconnect_all
        for device in testbed:
            device.disconnect()
To allow it to run as its own Python executable we can also
include aetest.main() as demonstrated in Example 6-4.
Example 6-4 aetest.main() allowing the script to run as a standalone executable

# for running as its own executable
if __name__ == '__main__':
    aetest.main()
Running pyATS testscripts using standard execution (running aetest directly with aetest.main()) is recommended only during script development; it allows for a quick turnaround when testing code.
We could modify the print statement to test network state as
demonstrated in Example 6-5.
Example 6-5 Testing interfaces for CRC errors
class SimpleTestcase(aetest.Testcase):
    @aetest.test
    def test_for_input_crc_per_interface(self, testbed):
        # test each device interface for input CRC errors
        self.failed_interface = {}
        for interface, value in self.interfaces.items():
            print(f"Testing interface { interface }")
            if value['counters']['in_crc_errors'] > 0:
                print(f"{ interface } failed crc error test")
                self.failed_interface = interface
            else:
                print(f"{ interface } passed crc error test")

    @aetest.test
    def pass_or_fail(self):
        if self.failed_interface:
            self.failed()
        else:
            self.passed()
Adhering to this modular structure is as easy as keeping all testscripts broken into the common setup, testcase(s), and common cleanup sections. Common setup is where script inputs are validated, connectivity to the targeted testbed and devices is established, the topology is brought up, device configurations are loaded, and any dynamic looping is set up. This section always runs first, before any testcases. Example 6-6 demonstrates the start of a pyATS script: the Common Setup section.
Example 6-6 A typical Common Setup example
from pyats import aetest

# define a common setup section by inheriting from aetest
class ScriptCommonSetup(aetest.CommonSetup):
    @aetest.subsection
    def check_script_arguments(self):
        pass

    @aetest.subsection
    def connect_to_devices(self):
        pass

    @aetest.subsection
    def configure_interfaces(self):
        pass
Testcases can now be defined and executed, knowing that the environment is connected and the inputs are validated. Each testcase is defined by inheriting from the aetest.Testcase class and defining one or more test sections inside. Testcases run in the order they are defined in the test script. Each testcase is associated with a unique ID, which defaults to the class name but can be changed by setting the testcase.uid attribute; the unique ID is used for reporting purposes. Testcases are independent, and the code of a testcase should be self-contained so that it can run in isolation alongside any number of other testcases. Each testcase's result is a roll-up of the results of all its child sections and counts as 1 in the summary table of results.
Finally, CommonCleanup is the last section to run within each testscript. Environments and network connections should be returned to the same state as before the script ran. Removing all CommonSetup changes, removing any lingering changes, and gracefully disconnecting from network devices are all part of the CommonCleanup phase and ensure a non-disruptive approach to testing. CommonCleanup should also act as a catch-all, regardless of whether individual testcases clean up after themselves; use this section as a safety net to ensure the testbed returns to a healthy state after the scripts are completed. For more details, refer to Chapter 4, “AEtest Test Infrastructure.”
Easypy
Easypy provides a standardized runtime environment for
testscript execution in pyATS. It offers a simple, straight-
forward way for users to aggregate testscripts together into
jobs, integrates various pyATS modules together into a
collectively managed ecosystem, and archives all resulting
information for post-mortem debugging:
Jobs: Aggregation of multiple testscripts into one job.
TaskLog: Stores all runtime log outputs to the TaskLog.
E-mail Notification: Emails the user result information upon finishing.
Multiprocessing Integration: Executes each jobfile task in a child process and configures the environment to allow for hands-off forking.
Clean: Cleans and brings up the current testbed with new images and fresh configurations.
Plugins: Plugin-based design, allowing custom user injections to alter and/or enhance the current runtime environment.
In pyATS, a job is the aggregation of multiple testscripts executed together within the same runtime environment. The concept of Easypy revolves heavily around the execution of such jobs. Each job corresponds to a jobfile: a standard Python file containing the instructions of which testscripts to run, and how to run them.
During runtime, these testscripts are launched as tasks and executed through a test harness (e.g., aetest). Each task is always encapsulated in its own child process. Tasks in the same jobfile always share the same system resources, such as testbeds, devices, and their connections.
All logs, files, and artifacts generated by tasks during the jobfile
runtime are stored in a runinfo folder. After execution has
completed, they are archived into a zip folder. The zip folder is
stored in the following directory:
./users/<userid>/archive/YY-MM/
where YY-MM represents the current year and month in double
digits, providing some level of division/classification between
jobs. The following files are stored in the zip folder:
<job-name>.py: A copy of the jobfile that ran.
<job-name>.report: Copy of the email notification sent to the
submitter
TaskLog.<task-id>: TaskLog, one per jobfile task, where all messages generated in a task are stored.
JobLog.<job-name>: overall pyats.easypy module log
Testbed.static.yaml: Contents of the --testbed-file, if specified
by the user.
Testbed.clean.yaml: Contents of the --clean_file, if specified
by the user.
Env.txt: A dump of environment variables and cli args of this
Easypy run
Reporter.log: Reporter server log file, contains a trace of XML-
RPC call sequences.
Results.json: JSON result summary file generated by Reporter.
xunit.xml: Files containing xUnit-style result reports and information required by Jenkins. These files are only generated if the --xunit argument is provided to Easypy.
ResultsSummary.xml: XML result summary file generated by
Reporter
ResultsDetails.xml: XML result details file generated by
Reporter
CleanResultsDetails.yaml: YAML clean result details file
generated by Kleenex
Kleenex.<device-name>.log: Job-scope clean details for this
device.
Kleenex_<task-id>.<device-name>.log: Task-scope clean details
for this device.
Testbed and Topology
The topology module is designed to provide an intuitive and
standardized method for users to define, handle and query
testbed/device/interface/link description, metadata, and their
interconnections.
There are two major functionalities of the topology module:
1. Defining and describing testbed metadata using YAML, standardizing the format of the YAML file, and loading it into corresponding testbed objects.
2. Querying testbed topology, metadata, and interconnect information via testbed object attributes and properties.
YAML (short for “YAML Ain’t Markup Language,” originally “Yet Another Markup Language”) is a data serialization format designed to be both human readable and machine readable. YAML is indentation and whitespace sensitive. Its syntax maps directly to most of the common data structures in Python, such as dict, list, str, and more.
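For instance, a minimal testbed file might look like the following. The device name, OS, credentials, and address are placeholders, not values from this book's examples:

```yaml
# hypothetical minimal testbed.yaml; all values are placeholders
devices:
  router1:
    os: iosxe
    type: router
    credentials:
      default:
        username: admin
        password: cisco123
    connections:
      cli:
        protocol: ssh
        ip: 192.0.2.1
```

Loading such a file with the topology loader (for example, pyats.topology.loader.load('testbed.yaml')) returns a Testbed object whose devices attribute exposes the corresponding Device objects.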
As opposed to creating a module where the topology information is stored internally and asking users to query that information via API calls, the pyATS topology module approaches the design from a completely different angle:
Using objects to represent real-world testbed devices
Using object attributes & properties to store testbed
information and meta-data
Using object relationships (references/pointers to other
objects) to represent topology interconnects
Using object references & python garbage collection to clean
up testbed left-overs when objects are no longer referenced.
Testbed and Device Cleaning with Kleenex
Device cleaning (a.k.a. Clean) defines the process of preparing
physical testbed devices by loading them with appropriate
images (recovering from bad images), removing unnecessary
configurations and returning devices to their default initial
state by applying basic configurations such as
console/management IP addresses, etc.
Kleenex offers the base infrastructure required by all clean
implementations:
Integration with Easypy - Runtime Environment and Testbed
Objects
Structured input format & information grouping through a
Clean File
Automatic asynchronous device cleaning
Runtime, exception & logging handling
All of the necessary guidelines and information required for users to develop their platform-specific clean methods.
Kleenex Clean standardizes how users implement platform-specific clean methods, providing the necessary entry points and subprocess management.
Asynchronous Library (Parallel Call)
Asynchronous (async) execution defines the ability to run
programs and functions in a non-blocking manner. In pyATS, it’s
recommended to execute in parallel using multiprocessing. The
proper use of multiprocessing can greatly improve the
performance of a program and is only bounded by the physical
number of CPUs and I/O limits. Multiprocessing is
recommended with pyATS and test-driven automation for a
variety of reasons:
Separate memory space: no race conditions (except with
external systems)
Very simple, straightforward code
No Global Interpreter Lock, takes full advantage of multiple
CPU/cores
Child processes are easily interruptible/killable
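As a minimal, pyATS-independent illustration of this multiprocessing approach, the following standard-library sketch checks several hypothetical devices in parallel, one child process each:

```python
from multiprocessing import Pool

def check_device(hostname):
    """Stand-in for collecting state from one device.

    In real automation this would connect and parse output; here it
    simply returns a result so the parallel pattern is visible.
    """
    return (hostname, "ok")

if __name__ == "__main__":
    devices = ["router1", "router2", "switch1"]  # hypothetical names
    # each device is handled in its own child process: separate memory
    # space, no GIL contention, and easily killable children
    with Pool(processes=3) as pool:
        results = dict(pool.map(check_device, devices))
    print(results)
```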
pyATS also provides an API from the async module, known as Parallel Call, or pcall, to further abstract and simplify parallel calls. pcall supports calling procedures, functions, and methods in parallel using multiprocessing fork, without having to write boilerplate code to handle the overhead of process creation, waiting, and termination. Any target can be called in parallel, provided the return value of the called target is a pickleable object.
Consider pcall a shortcut library for multiprocessing, intended to satisfy most users' need for parallel processing. For more custom and advanced use cases, however, stick with direct usage of multiprocessing.
The pcall API allows users to call any function/method (a.k.a.
target) in parallel. It comes with the following built-in features:
Builds arguments for each child process/instance
Creates, handles, and gracefully terminates all processes
Returns target results in their called order
Re-raises child process errors in the parent process
Data Structures
New data structures have been introduced and maintained as part of the pyATS infrastructure. These data structures are used within the pyATS source code and may prove useful in users' day-to-day coding:
Attribute Dictionaries:
AttrDict: Exactly the same as the Python native dict, except that in most cases you can use a dictionary key as if it were an object attribute.
NestedAttrDict: A special subclass of AttrDict that recognizes when its key values are other dictionaries and auto-converts them into further NestedAttrDict instances.
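The AttrDict idea can be sketched in a few lines; this is an illustrative toy, not the actual pyATS implementation:

```python
class AttrDict(dict):
    """Toy dict whose keys can also be read and written as attributes."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name) from None

    def __setattr__(self, name, value):
        self[name] = value

d = AttrDict(hostname="router1")
d.os = "iosxe"              # attribute write becomes a dictionary key
print(d.hostname, d["os"])  # key and attribute access interchange
```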
Weak List References: A standard list object stores every internal object as a direct reference; that is, if the list exists, then its internally stored objects exist. A weak list, by contrast, stores weak references, so its contents can still be garbage-collected when no longer referenced elsewhere.
Dictionary Represented Using Lists: Accessing nested
dictionaries often calls for recursive functions in order to
properly parse and/or walk through them. This isn’t always
easy to code around. ListDict provides an alternative view on
nested dictionaries, breaking down the value nested within
keys to a simple concept of path and value. This flattens the
nesting into a linear list, greatly simplifying the coding around
nested dictionaries.
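The ListDict concept can be sketched as a small recursive flattener (again illustrative, not the pyATS implementation):

```python
def flatten(nested, path=()):
    """Flatten a nested dict into (path, value) pairs, ListDict-style."""
    items = []
    for key, value in nested.items():
        if isinstance(value, dict):
            items.extend(flatten(value, path + (key,)))
        else:
            items.append((path + (key,), value))
    return items

data = {"interfaces": {"Gi1": {"status": "up"},
                       "Gi2": {"status": "down"}}}
# every nested value becomes addressable by a simple linear path
print(flatten(data))
```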
Orderable Dictionary:
Python’s built-in collections.OrderedDict only remembers the
order of which keys were inserted into it, and does not allow
users to re-order the keys and/or insert new keys into arbitrary
position in the current key order.
OrderableDict (Orderable Dictionary) is almost exactly the same as Python's collections.OrderedDict, with the added ability to order and re-order the keys that are inserted into it.
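For comparison, the standard library offers a partial analogue: collections.OrderedDict.move_to_end can push a key to either end, though not into an arbitrary position the way OrderableDict allows:

```python
from collections import OrderedDict

d = OrderedDict(a=1, b=2, c=3)
d.move_to_end("a")              # move 'a' to the end
print(list(d))                  # ['b', 'c', 'a']
d.move_to_end("c", last=False)  # move 'c' to the front
print(list(d))                  # ['c', 'b', 'a']
```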
Logic Testing: Boolean algebra is sometimes confusing when expressed in English. The goal of this module is to standardize how logical expressions are represented and evaluated within the scope of pyATS, and to offer standard APIs, classes, and behaviors for users to leverage.
Configuration Container:
The Configuration container is a special type of NestedAttrDict
intended to store Python module and feature configurations.
Avoid confusing Python configuration with router configuration: Python configurations tend to be key-value pairs that drive a particular piece of infrastructure, telling it how it should behave.
TCL Integration
This module effectively enables you to make Tcl calls in the
current Python process and is 100% embedded: there’s no child
Tcl process; the actual Tcl interpreter is embedded within the
current Python process, and the process ID (PID) of both Python
and Tcl is the same.
Part of the pyATS goal is to enable the testing community to leverage existing Tcl-based scripts and libraries. To make the integration easier, the Tcl module was created to extend the native Python-Tcl interaction:
Interpreter class: Extends the native Tcl interpreter by
providing access to ATS-tree packages & libraries.
Two-Way Typecasting: APIs and Python classes, enabling
typecasting Tcl variables to its appropriate Python objects and
back. Including but not limited to: int, list, string, array, keyed
lists etc.
Call history: Maintaining a historical record of Tcl API calls
for debugging purposes.
Callbacks: Callbacks from Tcl to Python code, enabling closer
coupling
Dictionary Access: Accessing Tcl variables as if accessing a
python dictionary.
Magic Q Calls: Calling Tcl APIs as if calling a python object
method, with support for Python *args and **kwargs mapping
to Tcl positional and dashed arguments.
Logging
A log is a log regardless of what kind of prefixes each log
message contains and what format it ended up as, as long as it
is human readable and provides useful information to the user.
The Python logging module’s native ability to handle and
process log messages is more than sufficient for any logging
needs and has always been suggested as the de facto logging
module to use.
Therefore, for all intents and purposes, users of the pyATS infrastructure should always use the native Python logging module as-is in their scripts and testcases. Example 6-7 demonstrates some simple logging functions.
Example 6-7 pyATS logging functions
# import the logging module at the top of your script
# and set up the logger
import logging
# always use your module name as the logger name
# this enables logger hierarchy
logger = logging.getLogger(__name__)
# use logger:
logger.info('an info message')
logger.error('an error message')
logger.debug('a debug message')
Result Objects
In most test infrastructures, such as pytest and unittest, test
results are only available as pass, fail or error. This works quite
well in unit and simplistic testing. The downside of having only three result types, however, is the inability to describe testcase result relationships, or to distinguish a test's genuine failure from a failure of the test script caused by poor design or coding (for example, the testcase encountered a coding exception).
To accommodate complex test environments, pyATS supports
more complicated result types such as “test blocked”, “test
skipped”, “test code errored” etc, and uses objects and object
relationships to describe them. These objects simplify the whole
result tracking and aggregation infrastructure and grant the
ability to easily roll-up results together. Here are the result
objects available:
Passed: Indicating that a test was successful, passing, result
accepted... etc.
Failed: Indicating that a test was unsuccessful, fell short,
result unacceptable... etc.
Aborted: Indicating something was started but was not
finished, was incomplete and/or stopped at an early or
premature stage. For example, a script was killed by hitting
CTRL-C.
Blocked: Used when a dependency was not met, and the
following event could not start. Note that a “dependency”
doesn’t strictly mean order dependency and set-up dependency.
It could also mean cases where the next event to be carried out
is no longer relevant.
Skipped: Used when a scheduled item was not executed and was omitted. The difference between skipped and blocked is that skipping is voluntary, whereas being blocked is collateral.
Errored: A mistake or inaccuracy; for example, an unexpected exception. The difference between failure and error is that failure represents carrying out an event as planned with the result not meeting expectations, whereas errored means something went wrong in the course of carrying out that procedure.
Passx: Short for “passed with exception”. Use with caution:
you are effectively re-marking a failure to passed, even though
there was an exception.
Reporter
Reporter is a package for collecting and generating test reports in YAML format. The results file contains all the details about execution (section hierarchy, time, results, etc.).
The results.json report contains hierarchical information about the pyATS job executed. The top level is the TestSuite, which contains information about the job as a whole. Under the TestSuite are all of the Tasks that were executed as part of the job. Each Task then has the various sections of testing underneath it: CommonSetup, CommonCleanup, and Testcases. These then have child sections, which can be TestSection, SetupSection, CleanupSection, and Subsection. The children of these are Steps, which can be nested with their own child Steps.
Being able to parse the generated test reports (results.json)
allows you to further dig into and programmatically analyze
the test results. This allows you to take further action based on
the testing results using an automated workflow.
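The sketch below tallies results from such a tree. The keys used here ("result", "sections") are an assumed, simplified schema for illustration; verify the exact structure against the results.json your pyATS version produces.

```python
import json

def tally_results(section, counts=None):
    """Recursively count result values in a simplified results tree."""
    if counts is None:
        counts = {}
    result = section.get("result")
    if result:
        counts[result] = counts.get(result, 0) + 1
    for child in section.get("sections", []):
        tally_results(child, counts)
    return counts

# a synthetic, simplified report for illustration
report = json.loads("""
{"name": "job", "result": "failed", "sections": [
  {"name": "Testcase1", "result": "passed", "sections": []},
  {"name": "Testcase2", "result": "failed", "sections": []}
]}
""")
print(tally_results(report))
```

A tally like this could feed an automated workflow, for example failing a CI/CD stage whenever any non-passed result is present.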
Utilities
pyATS comes with a variety of additional utilities to enhance
and support the framework.
Find: Search and filter against objects using find API
Secret Strings: Used to protect and encrypt strings (such as
passwords)
Multiprotocol File Transfer Utilities: Transfer files to / from
remote server
Embedded pyATS File Transfer Server: Supports FTP, TFTP,
SCP, HTTP
Import Utilities: Translate ‘x.y.z’ style string into ‘from x.y
import z’; returns z
YAML File Markups: pyATS specific YAML markup; similar to
Django template language
Robot Framework Support
Robot Framework is a generic, open-source Python/Java test automation framework that focuses on acceptance test automation through an English-like, keyword-driven test approach.
You can now freely integrate pyATS into ROBOT Framework, or
vice versa:
Run Robot Framework scripts directly within Easypy, saving runtime logs under the runinfo directory and aggregating results into the Easypy report.
Leverage pyATS infrastructure and libraries within Robot Framework scripts.
However, because Robot Framework support is an optional component of pyATS, you must install the package explicitly before you can leverage it. For more on Robot Framework, see Chapter 22.
Manifest
This sub-component of pyATS that uses a file with YAML syntax
(the “manifest” file) to capture the runtime requirements, script
arguments and execution profile for test scripts.
The pyATS Manifest is a file with YAML syntax describing how
and where to execute a script. It is intended to formally
describe the execution of a single script, including the runtime
environment, script arguments and the profile(s) that define
environment specific settings and arguments. Profiles allow the
same script to be run against multiple environments or run
with different input parameters. For example, using multiple
testbeds representing different environments
(testing/production) or different scaling numbers to test
scalability and resiliency. A script can be executed via the manifest using the pyats run manifest command. Manifest files use the file extension .tem, which stands for Test Execution Manifest. Manifest files can be tracked via source control, which can help standardize testing environments across multiple testing scenarios.
Summary
Adopting network automation has never been easier with the
introduction of a modern tool like pyATS and its associated
framework, and the test-driven development methodology. TDD
emerged from the Agile manifesto and is a common form of
software development that can be extended to network
automation. Network engineers gather business requirements and transform them into testcases. These testcases are small unit tests that fail when first written. The smallest, simplest amount of code possible is then applied to make the test pass, while all other tests remain passing, and the code is refactored until the developer is satisfied. This iterative approach is performed for each use case until test coverage is established for all business requirements.
The iterative essence of Test-Driven Development (TDD) is a
cornerstone that fosters a disciplined, incremental, and
feedback-driven approach to both software and network
automation development. This iterative process, initiated from
gathering business requirements which are then transmuted
into test cases, is at the heart of promoting a robust and reliable
network automation culture. The rhythm of writing a failing
test, making it pass with the simplest code, and then refining
the code, embodies a cycle of continuous improvement and
validation.
Evolution of Tests: As development progresses, tests evolve in
tandem. Initially, tests are rudimentary, focusing on basic
functionalities. Over time, as more features are integrated and
complexities arise, tests become more comprehensive and
nuanced. This evolution of tests is a natural reflection of the
growing understanding and unfolding of business
requirements.
Improved Code Quality: One of the stellar benefits of this
iterative approach is the uplift in code quality. Each cycle of
TDD pushes the code through a crucible of validation, ensuring
that it not only meets the immediate requirement but is also
robust and resilient to potential issues. The refactoring stage, an
integral part of the TDD cycle, further polishes the code,
enhancing its readability, efficiency, and maintainability.
Prompt Identification and Resolution of Issues: The TDD
cycle facilitates early detection of discrepancies and bugs. As
each piece of functionality is validated through tests before
being integrated, issues are spotted and rectified promptly. This
proactive error detection significantly reduces the time and
effort that would otherwise be expended in debugging and
fixing problems later in the development process.
Increased Confidence: The rigorous validation imbued by
TDD instills a higher degree of confidence in the reliability and
accuracy of network automation solutions. The iterative testing
and refactoring provide a safety net that ensures each
increment of development solidifies the solution rather than
introducing regressions.
Enhanced Understanding and Documentation: The iterative
testing process also acts as a documentation of what the code is
supposed to achieve. Each test case elucidates the business
requirements and the expected behavior of the system, thereby
enriching the understanding of the system among the
development and operations teams.
Facilitation of Change: The iterative nature of TDD, coupled
with the comprehensive suite of tests, provides a sturdy
foundation for accommodating changes. Whether adapting to
evolving business requirements or integrating new
technologies like pyATS, the TDD approach ensures that the
system remains robust and reliable.
By marrying the TDD methodology with modern tools like
pyATS, network engineers are poised to harness a powerful
synergy that accelerates the adoption of network automation,
while concurrently elevating the quality, reliability, and
adaptability of the solutions crafted. Through this iterative and
validation-centric approach, the journey of network
automation becomes a structured, manageable, and rewarding
endeavor.
References
1
Robert “Uncle Bob” Martin’s TDD Rules:
http://blog.cleancoder.com/uncle-
bob/2014/12/17/TheCyclesOfTDD.html
Chapter 7. Automated Network
Documentation
Early in my career, I was tasked by the Finance department to
capture the information of a new pair of Cisco Catalyst 6500
switches with various supervisors and line cards. The naïve
junior engineer that I was at the time brought them pages of
printout of the show inventory command! "We can't use this—
we need something that is business-ready." Those words are
forever etched in my memory. Nearly fifteen years later, I was
able to automatically transform that show inventory command
from raw, unusable output into a "business-ready" comma-
separated values (CSV) spreadsheet with pyATS jobs and a
templating language called Jinja2. Automated network
documentation is the recommended starting place for anyone
who is brand new to network automation or pyATS, primarily
because it is safe. No changes are being made and no
modifications occur to the network or to the configuration; all
we are doing is running and parsing show commands. As trivial
as this seems, it can become the foundation for a source of
truth in a Git-tracked repository showing state and
configuration change history. The answer to the question
"What has changed?" becomes extremely obvious when you use
your IDE to review the Git change history of the business-ready
reports. All of these report formats have VS Code extensions
that allow for direct viewing and integration with tools like
Excel Preview, Markdown Preview, or Open in Browser for
HTML pages.
This chapter covers the following topics:
Introduction to pyATS jobs
Running pyATS jobs from the CLI
pyATS job CLI logs
pyATS logs HTML viewer
Jinja2 templating
Business-ready documents
Introduction to pyATS Jobs
In pyATS, a collection of test scripts aggregated and executed
within the same runtime environment is called a job. The
concept of Easypy revolves heavily around the execution of
such jobs. Each job corresponds to a job file: a standard Python
file containing instructions on which test scripts to run and
how to run them. During runtime, these test scripts are
launched as tasks and executed through a test harness (e.g.,
aetest). Each task is always encapsulated in its own child
process. The tasks in the same job file always share the same
system resources, such as testbeds, devices, and their
connections.
Table 7-1 outlines the type and purpose of pyATS testbed, script,
and job files.
Table 7-1 pyATS Files
Job files are the bread and butter of Easypy. They allow
aggregation of multiple testscripts to run under the same
environment as tasks, sharing testbeds, and archiving their logs
and results together. A job file is an excellent method to batch
and/or consolidate similar test scripts together into relevant
result summaries. Job files are required to satisfy the following
criteria:
Each job file must have a main() function defined. This is the
main entry point of a job file run.
The main() function accepts an argument called runtime.
When defined, the engine automatically passes the current
runtime object in.
Inside the main() function, use easypy.run() or easypy.Task() to
define and run individual testscripts as Tasks.
The name of the job file, minus the .py extension, becomes this
job’s reporting name. This can be modified by setting
runtime.job.name attribute.
Figure 7-1 illustrates an example of a pyATS job file.
Figure 7-1 A Sample pyATS Job file
Job files are provided as the only mandatory argument to the
easypy launcher. As outlined earlier in Table 7-1, pyATS jobs
typically have three files:
1. The <test>_job.py file – this is the pyATS job file executed
using the pyats run job <job> --testbed-file <testbed>
command.
2. The Python <test>.py file – the main script where all of the
testing logic is contained.
3. A valid testbed file – target topology for the job and script.
The pyATS script inside a job is typically broken into three
major areas as Python Classes:
1. Common Setup
a. Connections to the topology are made
b. Any tests that should be marked for looping are marked
c. Any customized common setup to occur before testing starts
2. Tests
a. N+1 test cases
b. Happen after connectivity to the topology is established
c. Marked as Passed, Failed, or Skipped
3. Common Cleanup
a. Disconnect from the topology gracefully
b. Leave the environment in the same state as when originally
connected
c. Any custom cleanup activities
Refer to Chapter 4, “AETest Test Infrastructure” for a deep dive
into pyATS jobs.
Figure 7-2 illustrates pyATS job structure while Figure 7-3
illustrates pyATS job structure as classes.
Figure 7-2 pyATS Job Structure
Figure 7-3 pyATS Structure as Classes
In the preceding example we first, in the common setup section,
connect to our topology and mark the test case for looping in
case there is more than one device in the testbed file we want to
document. We will complete this first test case next, and then
we gracefully disconnect from the topology in the common
cleanup section.
For our first test, let’s capture the parsed JSON version of show
ip interface brief and simply save it to a file locally. From this
JSON file we will build all our other Jinja2 templates. pyATS
includes many abstractions in the form of application
programming interfaces (APIs). There are APIs to save both
raw text and dictionaries to files.
Figure 7-4 pyATS APIs - Save
Using the pyATS .parse() and save_dict_to_json_file() APIs, we
can set up our first test as shown in Example 7-1.
Example 7-1 Parsing to JSON and saving to a file
class Parse_And_Save_Show_IP_Interface_Brief_to_JSON(aetest.Testcase):
    """Capture Show IP Interface Brief and transform it into JSON documentation"""

    @aetest.test
    def setup(self, testbed, device_name):
        self.device = testbed.devices[device_name]
        parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        self.device.api.save_dict_to_json_file(
            data=list(parsed_show_ip_interface_brief.values()),
            filename="Show IP Interface Brief.json")
The complete automated_network_documentation.py script
should now look like Example 7-2. Note that for simplicity we
are simply using testbed.connect() and .disconnect() to establish
and tear down connectivity to all devices in the testbed, without
the need for a loop.
Example 7-2 automated_network_documentation.py
import json
import logging
from pyats import aetest

## Setup Logging
log = logging.getLogger(__name__)

## Common Setup
class common_setup(aetest.CommonSetup):
    """Common Setup Section"""

    # Connect to testbed
    @aetest.subsection
    def connect_to_devices(self, testbed):
        testbed.connect()

    # Mark tests for loops
    @aetest.subsection
    def loop_mark(self, testbed):
        aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_JSON,
                         device_name=testbed.devices)

# Test Cases
class Parse_And_Save_Show_IP_Interface_Brief_to_JSON(aetest.Testcase):
    """Capture Show IP Interface Brief and transform it into JSON documentation"""

    @aetest.test
    def setup(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Save JSON to file
        self.device.api.save_dict_to_json_file(
            data=list(self.parsed_show_ip_interface_brief.values()),
            filename="Show IP Interface Brief.json")

# Common Cleanup
class common_cleanup(aetest.CommonCleanup):
    """Common Cleanup Section"""

    # Disconnect from devices
    @aetest.subsection
    def disconnect_from_devices(self, testbed):
        testbed.disconnect()
Example 7-3 shows the testbed file for the Cisco DevNet Always-
On IOS-XE Sandbox we can use to test.
Example 7-3 testbed.yaml for the Cisco DevNet Always-On IOS-
XE Sandbox
---
devices:
Cat8000V:
alias: "Sandbox Router"
type: "router"
os: "iosxe"
platform: Cat8000v
credentials:
default:
username: admin
password: C1sco12345
connections:
cli:
protocol: ssh
ip: devnetsandboxiosxe.cisco.com
port: 22
arguments:
connection_timeout: 360
And finally, Example 7-4 shows the pyATS job file,
automated_network_documentation_job.py.
Example 7-4 pyATS Job file –
automated_network_documentation_job.py
import os
from genie.testbed import load

def main(runtime):
    if not runtime.testbed:
        # If no testbed is provided, load the local testbed.yaml
        testbedfile = os.path.join('testbed.yaml')
        testbed = load(testbedfile)
    else:
        testbed = runtime.testbed

    testscript = os.path.join(os.path.dirname(__file__),
                              'automated_network_documentation.py')
    runtime.tasks.run(testscript=testscript, testbed=testbed)
Next, we will examine how to execute, or run, this pyATS job
that captures the show ip interface brief output as JSON and
then using a pyATS API, saves the JSON to a local file,
automatically from the command-line interface (CLI). Note that
pyATS jobs can be scheduled and executed graphically inside of
xPresso, the topic of Chapter 20, “xPresso.”
Running pyATS Jobs from the CLI
pyATS jobs can be executed from the command line, which
provides real-time connectivity details in the runtime logs, a
summary of the job and its task outcomes, and the command to
start the HTML log viewer. By default, standard output logging
is enabled when you use the testbed.connect() function, which
provides verbose logs for every step of the job. You can toggle
this with the log_stdout=False option, as in
testbed.connect(log_stdout=False), should you want less
verbose output from the logs.
From the command line, pyATS jobs are run using the
following command:
$ pyats run job <job name> --testbed-file <testbed file>
However, it should be noted that pyATS jobs accept many
arguments and options, as demonstrated in Figure 7-5 through
Figure 7-9.
Figure 7-5 pyATS job arguments
Figure 7-6 pyATS job arguments
Figure 7-7 pyATS job arguments
Figure 7-8 pyATS job arguments
Figure 7-9 pyATS job arguments
Make sure all three files (job file, script file, and testbed file)
are saved, identify any optional arguments you want to try,
such as e-mail, Webex, or verbosity, and run the pyATS job. The
three files are automated_network_documentation_job.py (the
job file), automated_network_documentation.py (the script),
and testbed.yaml (the testbed):
(virtual_environment)$ pyats run job automated_network_documentation_job.py --testbed-file testbed.yaml
pyATS Job CLI Logs
pyATS will start the job run and start printing Easypy logs to the
screen. The name of the job and the running directory are
printed first. The Clean Information (covered in Chapter 15,
"pyATS Clean"), if any, is displayed under its own logging
banner. If the --check-all-devices-up argument was passed,
pyATS will check that all devices are up and ready, or log that
this check was disabled, before proceeding to the common
setup section. The common setup banner will display, and the
subsections, in our case connecting to the devices and marking
test cases for looping, should be performed and marked as PASSED.
Our testcases will then display their banners and perform the
testcases. Figure 7-10 shows the CLI logs from the start of a
pyATS job.
Figure 7-10 pyATS job CLI logs
At this stage of the job, you should have a local file called
Show IP Interface Brief.json inside your local directory. If you
open it and enable formatting in VS Code
(https://code.visualstudio.com/docs/python/formatting), it
should look like Figure 7-11.
Figure 7-11 Automated JSON network state file
Next, the common cleanup section banner appears in the logs
and the devices are disconnected gracefully. If Webex
arguments were included, the notification would also occur at
this step. An archive of the job log is created, and then an
Easypy report is displayed (see Figure 7-12), indicating:
The pyATS instance and version
The CLI arguments passed
The user and local environment (host and OS) information
The name of the job
Start and stop time
Elapsed time
Archive location
Total tasks
Overall stats
Passed
Passx
Failed
Aborted
Blocked
Skipped
Errored
Total tasks
Success rate %
Figure 7-12 Easypy Report
Finally, the job summary information is displayed, as
illustrated in Figure 7-13. A complete breakdown, step by step,
task by task, class by class, is displayed with the results of each
step in the job and each task in the step. Notice the "pro tip" at
the end of the output suggesting you can run the pyats logs
view command to launch the HTML log viewer.
Figure 7-13 Summary of pyATS job in the CLI logs
pyATS Logs HTML Viewer
As indicated by the "pro tip," pyATS has a built-in HTML-
enriched log viewer you can launch at the end of a job with the
following command:
(virtual_environment)$ pyats logs view
Figure 7-14 shows launching the HTML viewer and Figure 7-15
shows the results.
Figure 7-14 Launching pyATS logs viewer
Figure 7-15 pyATS log viewer default results page
A browser window will open the pyATS LiveView HTML logs
viewer. The default page shows the results in chronological
order. Light or dark mode can be set, and results can be
searched. Clicking on the results of our
automated_network_documentation_job.py job brings you into
the detailed logs from that job, as illustrated in Figure 7-16.
Figure 7-16 pyATS logs viewer details - results
Each result can be expanded and selected to see the CLI logs
(see Figure 7-17). These results can easily be copied with the
clipboard button provided.
Figure 7-17 pyATS logs viewer details – results - highlighted
The overview tab provides an overview of the pyATS job as
illustrated in Figure 7-18.
Figure 7-18 pyATS logs viewer details – overview
And the files tab provides access to all the files related to the
job, including console logs as illustrated in Figure 7-19.
Figure 7-19 pyATS logs viewer details – files
Both the verbose command-line and HTML log viewer
capabilities of pyATS set it apart from other network
automation frameworks. At the command-line interface (CLI),
pyATS provides detailed and customizable logging output,
ensuring that users receive precise feedback during test
execution and troubleshooting.
This granularity in logging is instrumental for engineers
looking to track and pinpoint issues. Additionally, the
framework offers an integrated HTML log viewer, which
presents test results in an intuitive and visually appealing
manner. This graphical representation not only simplifies the
process of analyzing results but also enables users to quickly
identify anomalies or areas of concern. Together, these logging
features make pyATS a powerful tool for network professionals,
setting it a notch above other frameworks in terms of
debugging and result interpretation.
Jinja2 Templating
Every problem is a nail and Jinja2 templates can be your
hammer! Jinja2 is a modern and designer-friendly templating
engine for the Python programming language. It is utilized in
various applications to generate content quickly and efficiently
from data structures, such as rendering HTML pages in web
applications. In the context of Jinja2 templates, the primary
emphasis is on the provision of placeholders and control
structures within templates, allowing dynamic content to be
inserted or altered at runtime. This is achieved using a
combination of template tags, encapsulated by {{ ... }} for
expressions and {% ... %} for statements, which instruct the
Jinja2 engine on how to process and render the final output.
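The two tag types are easy to see in a small standalone sketch. The example below assumes only that the jinja2 package is available (pyATS installations typically provide it); the hostname and interface names are made-up sample data:

```python
from jinja2 import Template

# {{ ... }} renders an expression; {% ... %} executes a statement such as a loop
template = Template(
    "Hostname: {{ hostname }}\n"
    "{% for intf in interfaces %}"
    "Interface: {{ intf }}\n"
    "{% endfor %}"
)

# Sample data stands in for parsed device output
output = template.render(hostname="R1", interfaces=["GigabitEthernet1", "Loopback0"])
print(output)
```

The same render() call accepts any Python data structure, which is why the parsed dictionaries produced by pyATS feed into templates so naturally.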
In relation to pyATS, Jinja2 templates play a pivotal role in
crafting test cases and configuration structures. Given that
networks can be intricate with numerous device types,
configurations, and protocols, having a static testing or
configuration approach is impractical. Instead, pyATS leverages
Jinja2 to create dynamic, data-driven templates. Engineers can
create a base template for a network device configuration or
test scenario, and then use variables to adjust specific values
based on the target device, protocol, or environment. Once the
data is fed into the Jinja2 template, a fully rendered
configuration or test case, tailored to the specific requirements,
is produced. This integration allows for immense scalability and
flexibility, ensuring that network testing and automation with
pyATS can be as comprehensive and adaptable as necessary.
Jinja2 is one of Python’s most popular templating engines and
is used by many network automation frameworks, such as
Ansible and pyATS. pyATS, as with parsing data or saving a file,
has several Jinja2 API abstractions available, as Figure 7-20
illustrates.
Figure 7-20 pyATS Jinja2 APIs
Jinja2 templates can easily transform the structured JSON data
captured by the pyATS parser. In our first pyATS job, we
captured the JSON. Let’s use various approaches, including
templating with Jinja2, to create business-ready documents.
Business-Ready Documents
Early in my career, very early, I was asked by the Finance
department to get an inventory of the new 6500 core switches,
line cards, and various components for their records. With
little thought, I printed the output of a show inventory
command and provided them with the printouts. They were
not very impressed and asked if I could provide something that
was business-ready for them: a spreadsheet.
Almost 15 years later, I realized that using pyATS and Jinja2
templates I could automate the process of creating these so-
called business-ready documents; a self-documenting network,
if you will. One thing that always seems to be missing is good,
current network documentation. In this section we will explore
various file types and ways to use pyATS to capture network
state as structured JSON, and then how we can work with that
structured data to generate our automated business-ready
documents.
JSON
Our job already captures the JSON representation of show ip
interface brief for the Always-On DevNet IOS-XE Sandbox. It
should be noted that this scales vertically by simply adding
more devices to the testbed.yaml file. Horizontal scaling across
more show commands is a simple matter of copying the
existing testcase and updating the command and filename.
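Outside of a pyATS run, the save itself boils down to a json.dump of the parsed data. The sketch below uses a hypothetical, hand-built sample shaped like the parser output; in the job, this structure comes from device.parse():

```python
import json

# Hypothetical sample shaped like the parsed "show ip interface brief" output
parsed = {
    "interface": {
        "GigabitEthernet1": {
            "ip_address": "10.10.20.48",
            "interface_is_ok": "YES",
            "method": "NVRAM",
            "status": "up",
            "protocol": "up",
        }
    }
}

# Mirror the testcase: dump the dictionary values to a JSON file
with open("Show IP Interface Brief.json", "w") as json_file:
    json.dump(list(parsed.values()), json_file, indent=4)
```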
YAML
YAML Ain’t Markup Language (YAML) is another data
serialization format well suited to network automation and
infrastructure as code. Since we have the JSON, we can simply
transform it into YAML and save the output to a YAML file.
Add a new testcase that converts the JSON to a YAML file, as
demonstrated in Example 7-4. Note that we are simply adding
another loop marker and testcase inside the
automated_network_documentation.py script.
Example 7-4 Transform JSON to YAML
import yaml

    # Mark tests for loops
    @aetest.subsection
    def loop_mark(self, testbed):
        aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_JSON,
                         device_name=testbed.devices)
        aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_YAML,
                         device_name=testbed.devices)

# Test Cases - YAML file
class Parse_And_Save_Show_IP_Interface_Brief_to_YAML(aetest.Testcase):
    """Capture Show IP Interface Brief and transform it into YAML documentation"""

    @aetest.test
    def save_yaml_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Convert to YAML
        yaml_show_ip_interface_brief = yaml.dump(
            self.parsed_show_ip_interface_brief, default_flow_style=False)
        # Save YAML to file
        with open("Show IP Interface Brief.yaml", "w") as yml_file:
            yml_file.write(yaml_show_ip_interface_brief)
Now, in addition to the JSON file, you should have a new YAML
file and a new set of passing tests in your pyATS job! These
YAML files are interactive in VS Code, where they can be
expanded or collapsed, and are generally easier to read than
JSON files, as illustrated in Figure 7-21.
Figure 7-21 show ip interface brief as YAML
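The JSON-to-YAML conversion itself is a single yaml.dump() call. Here is a standalone sketch, assuming the PyYAML package is available in your virtual environment and using made-up sample data:

```python
import yaml  # PyYAML; assumed to be installed in the virtual environment

# Sample data standing in for the output of device.parse()
parsed = {
    "interface": {
        "GigabitEthernet1": {"ip_address": "10.10.20.48", "status": "up", "protocol": "up"}
    }
}

# default_flow_style=False produces block-style (indented) YAML
yaml_text = yaml.dump(parsed, default_flow_style=False)
print(yaml_text)
```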
Comma-Separated Values
Arguably the most powerful business-ready document format is
the CSV file. These files are supported in Excel, or Excel
Preview for VS Code, which allows you to sort, filter, reorder,
and perform powerful visualizations, all the while being
extremely simple to create from JSON structured data. Make a
new testcase as follows to create the CSV file. This testcase will
be added to the automated_network_documentation.py script.
The aetest.loop.mark line should be placed in the common
setup with the other loop markers. Note the use of the
load_jinja_template() pyATS API to render the CSV file in
Example 7-5.
Example 7-5 Transform JSON to CSV using a jinja2 template
        aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_CSV,
                         device_name=testbed.devices)

# Test Cases - CSV File
class Parse_And_Save_Show_IP_Interface_Brief_to_CSV(aetest.Testcase):
    """Capture Show IP Interface Brief and transform it into CSV documentation"""

    @aetest.test
    def save_csv_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Load the Jinja2 template
        csv_show_ip_interface_brief = self.device.api.load_jinja_template(
            path="", file="csv.j2",
            to_parse_interfaces=self.parsed_show_ip_interface_brief['interface'])
        # Save CSV to file
        with open("Show IP Interface Brief.csv", "w") as csv_file:
            csv_file.write(csv_show_ip_interface_brief)
The Jinja2 template is only a few lines of code: the header row,
separated by commas, and, inside a loop, a comma-separated
row of the data fields aligned with their header row. Think of it
as a grid of columns and rows with individual cells. The csv.j2
file looks like Example 7-6.
Example 7-6 The csv.j2 Jinja2 template
Interface,IP Address,Status,Protocol,Method,Interface is OK
{% for interface in to_parse_interfaces %}
{{ interface }},{{ to_parse_interfaces[interface].ip_address }},{{ to_parse_interfaces[interface].status }},{{ to_parse_interfaces[interface].protocol }},{{ to_parse_interfaces[interface].method }},{{ to_parse_interfaces[interface].interface_is_ok }}
{% endfor %}
The result is an easy-to-read and universally appreciated
spreadsheet, as illustrated in Figure 7-22.
Figure 7-22 show ip interface brief as CSV
With this base csv.j2 file, most other tabular formats can be
created using find / replace in your IDE.
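As a design alternative to templating, the standard library's csv module can produce the same spreadsheet without any .j2 file. A sketch using a made-up sample of the parser's 'interface' sub-dictionary:

```python
import csv
import io

# Hypothetical sample shaped like the 'interface' sub-dictionary of the parsed output
to_parse_interfaces = {
    "GigabitEthernet1": {
        "ip_address": "10.10.20.48",
        "status": "up",
        "protocol": "up",
        "method": "NVRAM",
        "interface_is_ok": "YES",
    }
}

# Build the same header and data rows the csv.j2 template renders
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Interface", "IP Address", "Status", "Protocol", "Method", "Interface is OK"])
for name, details in to_parse_interfaces.items():
    writer.writerow([name, details["ip_address"], details["status"],
                     details["protocol"], details["method"], details["interface_is_ok"]])

print(buffer.getvalue())
```

One advantage of csv.writer over a hand-built template is that it automatically quotes any field that happens to contain a comma.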
Markdown: Tables
Markdown is another lightweight markup format that can be
used to produce various visualizations of structured data. The
csv.j2 template can be copied and modified as
markdown_table.j2 by replacing the commas with pipes ( | )
and by adding a delimiter row. This markdown table renders
inside VS Code as well as GitHub and other markdown-friendly
environments. Example 7-7 shows how to add another loop
marker, again added to the loop_mark method inside the
common_setup class. Most of the actual testcase can be reused
from the CSV example.
Example 7-7 Transform CSV to Markdown with pipes and a
delimiter
        aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_MD,
                         device_name=testbed.devices)

# Testcases – Markdown Table File
class Parse_And_Save_Show_IP_Interface_Brief_to_MD(aetest.Testcase):
    """Capture Show IP Interface Brief and transform it into Markdown Table
    documentation"""

    @aetest.test
    def save_markdown_table_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Load the Jinja2 template
        md_table_show_ip_interface_brief = self.device.api.load_jinja_template(
            path="", file="markdown_table.j2",
            to_parse_interfaces=self.parsed_show_ip_interface_brief['interface'])
        # Save Markdown table to file
        with open("Show IP Interface Brief.md", "w") as md_file:
            md_file.write(md_table_show_ip_interface_brief)
The markdown table has a title for the first row. The second
line in the file is a header row much like the CSV file's, except
that it has a leading and trailing pipe ( | ) and all commas are
replaced with space-padded pipes ( | ). Next, we need a
delimiter row ( | ----- | ), also enclosed in pipes, with one
delimiter cell per column in the header row. Finally, our data
rows go inside the loop, padded with pipes, as Example 7-8
shows.
Example 7-8 The markdown_table.j2 Jinja2 template
# Show IP Interface Brief
| Interface | IP Address | Status | Protocol | Method | Interface is OK |
| --------- | ---------- | ------ | -------- | ------ | --------------- |
{% for interface in to_parse_interfaces %}
| {{ interface }} | {{ to_parse_interfaces[interface].ip_address }} | {{ to_parse_interfaces[interface].status }} | {{ to_parse_interfaces[interface].protocol }} | {{ to_parse_interfaces[interface].method }} | {{ to_parse_interfaces[interface].interface_is_ok }} |
{% endfor %}
This renders as an easy-to-use table, as illustrated in Figure 7-23.
Figure 7-23 show ip interface brief as Markdown Table
Markdown is also extensible with other tools, such as
markmap, a markdown-to-mind-map tool, and Mermaid, a
special type of markdown used to create diagrams and more.
Let's make a markmap mind map next with a few
modifications to the base csv.j2 Jinja2 template. Markdown
files can be rendered, or previewed, inside VS Code by clicking
the Open Preview button or by right-clicking the Show IP
Interface Brief.md file and selecting Open with ... > Markdown
Preview.
Markdown: Markmap Mind Maps
Markmap mind maps are interactive, colorful, visual
representations of your structured data with collapsing
capabilities and zoom controls. Markmap is available as a VS
Code extension and allows you to render, and export to HTML,
markdown mind maps (see Example 7-9). Make a new testcase
by copying and updating any of the previous examples. Make
sure you change the output filename by adding "Mind Map" to
it; otherwise, you will overwrite the previous markdown table
example.
Example 7-9 Transform JSON to markmap Mind Maps
        aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mindmap,
                         device_name=testbed.devices)

# Testcases – Markdown Mind Map File
class Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mindmap(aetest.Testcase):
    """Capture Show IP Interface Brief and transform it into Markdown Mind Map
    documentation"""

    @aetest.test
    def save_markdown_mindmap_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Load the Jinja2 template
        md_mindmap_show_ip_interface_brief = self.device.api.load_jinja_template(
            path="", file="markdown_mindmap.j2",
            to_parse_interfaces=self.parsed_show_ip_interface_brief['interface'])
        # Save Markdown mind map to file
        with open("Show IP Interface Brief Mind Map.md", "w") as md_file:
            md_file.write(md_mindmap_show_ip_interface_brief)
The markdown mind map has a title for the first row, which
acts as the "root" of the horizontally scaling mind map.
Markmaps use levels of nested # symbols, up to six levels deep,
which are transformed into "branches" off the level above,
creating a visual mind map of the information (see Example
7-10).
Example 7-10 The markdown_mindmap.j2 Jinja2 template
# Show IP Interface Brief
{% for interface in to_parse_interfaces %}
## {{ interface }}
### IP Address: {{ to_parse_interfaces[interface].ip_address }}
### Status: {{ to_parse_interfaces[interface].status }}
### Protocol: {{ to_parse_interfaces[interface].protocol }}
### Method: {{ to_parse_interfaces[interface].method }}
### Is Interface OK: {{ to_parse_interfaces[interface].interface_is_ok }}
{% endfor %}
Click on the Extensions tab in VS Code and search for
markmap as illustrated in Figure 7-24.
Figure 7-24 Install markmap for VS Code
Then, on any valid markdown file (including our previous
tabular markdown file), you can click the markmap button in
VS Code to preview the file as a mind map as illustrated in
Figure 7-25. Figure 7-26 shows the rendered markmap mind
map, while Figure 7-27 shows how even the tabular markdown
file can be rendered as a markmap.
Figure 7-25 Render as markmap mind map button
Figure 7-26 show ip interface brief as a markmap mind map
Figure 7-27 show ip interface brief table as a markmap mind
map
Mind maps are a whole new way to interactively visualize your
pyATS network state with simple markdown formatting inside
Jinja2 templates. Let’s take a look at another type of markdown
known as Mermaid.
Markdown: Mermaid Flowcharts
Mermaid, https://mermaid.js.org/, is a JavaScript-based
diagramming and charting tool that enables developers and
content creators to generate visualizations using simple text-
based definitions. Integrated within Markdown, Mermaid
allows for the creation of flowcharts, sequence diagrams, class
diagrams, state diagrams, Gantt charts, and more, directly
within documentation, wikis, or other Markdown-supported
platforms. Instead of embedding static images, users can embed
live diagrams which are rendered on-the-fly. This integration
offers a seamless way to incorporate visual aids into textual
content, making complex ideas easier to convey and
understand. The Mermaid syntax is both concise and readable,
ensuring that even those unfamiliar with diagramming can
quickly grasp its structure and start creating their own visual
representations. Using the pyATS JSON, we can represent the
show ip interface brief command output in various ways using
Mermaid, as demonstrated in Example 7-11.
Example 7-11 Transform JSON to a Mermaid flowchart
        aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_Flowchart,
                         device_name=testbed.devices)

# Testcases – Markdown Mermaid Flowchart File
class Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_Flowchart(aetest.Testcase):
    """Capture Show IP Interface Brief and transform it into Mermaid
    Flowchart documentation"""

    @aetest.test
    def save_markdown_mermaid_flowchart_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Load the Jinja2 template
        md_mermaid_flowchart_show_ip_interface_brief = self.device.api.load_jinja_template(
            path="", file="markdown_mermaid_flowchart.j2",
            to_parse_interfaces=self.parsed_show_ip_interface_brief['interface'])
        # Save Markdown Mermaid flowchart to file
        with open("Show IP Interface Brief Mermaid Flowchart.md", "w") as md_file:
            md_file.write(md_mermaid_flowchart_show_ip_interface_brief)
Each Mermaid type has its own header and structure. When
you’re embedding Mermaid diagrams in a Markdown file, you
typically wrap the Mermaid syntax in a code block with the
language identifier mermaid. Example 7-12 shows the Jinja2
template used to make a flowchart for show ip interface brief.
Example 7-12 The markdown_mermaid_flowchart.j2 Jinja2
template
{% for interface in to_parse_interfaces %}
```mermaid
flowchart LR
    {{ interface }}[{{ interface }}]
    {{ interface }} --> {{ to_parse_interfaces[interface].ip_address }}[IP Address: {{ to_parse_interfaces[interface].ip_address }}]
    {{ interface }} --> {{ to_parse_interfaces[interface].status }}[Status: {{ to_parse_interfaces[interface].status }}]
    {{ interface }} --> {{ to_parse_interfaces[interface].protocol }}[Protocol: {{ to_parse_interfaces[interface].protocol }}]
    {{ interface }} --> {{ to_parse_interfaces[interface].method }}[Method: {{ to_parse_interfaces[interface].method }}]
    {{ interface }} --> {{ to_parse_interfaces[interface].interface_is_ok }}[Interface is OK: {{ to_parse_interfaces[interface].interface_is_ok }}]
```
{% endfor %}
Click on the Extensions tab in VS Code and search for
Mermaid as shown in Figure 7-28.
Figure 7-28 Install Mermaid support for VS Code
Then, on any markdown file containing Mermaid code blocks,
you can click the Mermaid Preview button in VS Code (see
Figure 7-29) to preview the diagrams, as illustrated in Figure
7-30.
Figure 7-29 Render as Mermaid Preview button
Figure 7-30 show ip interface brief as Mermaid flow chart
Mermaid flowcharts are just one example of the powerful
JavaScript-enabled Markdown format. Let's take a look at class
diagrams, which are particularly useful for automated network
documentation.
Markdown: Mermaid Class Diagrams
"The class is the basic logical entity in the UML. It defines both
the data and the behaviour of a structural unit. A class is a
template or model from which instances or objects are created
at run time. When we develop a logical model such as a
structural hierarchy in UML we explicitly deal with classes." -
Database Modeling in UML (Unified Modeling Language)
(methodsandtools.com)
UML Class diagrams can be automated from the pyATS JSON
structured data using Jinja2 templates and Mermaid Class
diagrams as demonstrated in Example 7-13.
Example 7-13 Transform JSON to a Mermaid class diagram
        aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_Class,
                         device_name=testbed.devices)

# Testcases – Markdown Mermaid Class File
class Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_Class(aetest.Testcase):
    """Capture Show IP Interface Brief and transform it into Mermaid Class
    documentation"""

    @aetest.test
    def save_markdown_mermaid_class_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Load the Jinja2 template
        md_mermaid_class_show_ip_interface_brief = self.device.api.load_jinja_template(
            path="", file="markdown_mermaid_class.j2",
            to_parse_interfaces=self.parsed_show_ip_interface_brief['interface'])
        # Save Markdown Mermaid class diagram to file
        with open("Show IP Interface Brief Mermaid Class.md", "w") as md_file:
            md_file.write(md_mermaid_class_show_ip_interface_brief)
The class diagram is similar to the flowchart but has its own
syntax, as shown in Example 7-14.
Example 7-14 The markdown_mermaid_class.j2 Jinja2 template
{% for interface in to_parse_interfaces %}
```mermaid
classDiagram
class Interface {
+String name
+String ipAddress
+String status
+String protocol
+String method
+bool interfaceIsOk
}
    note for {{ interface }} "{{ interface }}\nIP Address: {{ to_parse_interfaces[interface].ip_address }}\nStatus: {{ to_parse_interfaces[interface].status }}\nProtocol: {{ to_parse_interfaces[interface].protocol }}\nMethod: {{ to_parse_interfaces[interface].method }}\nInterface is OK: {{ to_parse_interfaces[interface].interface_is_ok }}"
```
{% endfor %}
Figure 7-31 shows the rendered Mermaid Markdown class
diagram.
Figure 7-31 show ip interface brief as Mermaid Class Diagram
Using pyATS, Jinja2, and Mermaid you could conceivably fully
implement UML for network documentation.
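If you want to experiment with this transformation without a live device or the pyATS Jinja2 API, the same idea can be sketched in plain Python. The sample parsed data and the helper name below are hypothetical, not part of pyATS; the sketch emits only the diagram body that would sit inside the mermaid fence:

```python
# Hypothetical sample of pyATS-parsed "show ip interface brief" output
parsed = {
    "GigabitEthernet1": {
        "ip_address": "10.0.0.1",
        "status": "up",
        "protocol": "up",
    }
}

def to_mermaid_class_notes(interfaces: dict) -> str:
    """Build one Mermaid classDiagram body per interface (stdlib only)."""
    blocks = []
    for name, facts in interfaces.items():
        # \n inside the note string is Mermaid's line-break escape
        note = (f"{name}\\nIP Address: {facts['ip_address']}"
                f"\\nStatus: {facts['status']}"
                f"\\nProtocol: {facts['protocol']}")
        blocks.append(f'classDiagram\n    note for {name} "{note}"')
    return "\n".join(blocks)

print(to_mermaid_class_notes(parsed))
```

Rendering the result only requires wrapping each block in a mermaid code fence, which is exactly what the Jinja2 templates in this chapter do.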
Markdown: Mermaid State Diagrams
In addition to flowcharts and Class Diagrams, Mermaid allows
for State diagrams as well. Example 7-15 demonstrates how to
create a new test that will use Jinja2 to generate a State
diagram.
Example 7-15 Transform JSON Mermaid state diagram
aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_State, device_name=testbed.devices)

# Testcases – Markdown Mermaid State File
class Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_State(aetest.Testcase):
    """Capture Show IP Interface Brief and transform to Mermaid State diagram documentation"""

    @aetest.test
    def save_markdown_mermaid_state_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Load the Jinja2 template
        md_mermaid_state_show_ip_interface_brief = self.device.api.load_jinja_template(
            path="",
            file="markdown_mermaid_state.j2",
            to_parse_interfaces=self.parsed_show_ip_interface_brief['interface'])
        # Save Markdown Mermaid State file
        with open("Show IP Interface Brief Mermaid State.md", "w") as md_file:
            md_file.write(md_mermaid_state_show_ip_interface_brief)
The State Diagram, again, has its own syntax as shown in
Example 7-16.
Example 7-16 The markdown_mermaid_state.j2 Jinja2 template
{% for interface in to_parse_interfaces %}
```mermaid
stateDiagram-v2
state {{ interface }} {
        [*] --> {{ to_parse_interfaces[interface].status }}
        state {{ to_parse_interfaces[interface].status }}
        {{ to_parse_interfaces[interface].status }} --> Up
        {{ to_parse_interfaces[interface].status }} --> Down
        {{ to_parse_interfaces[interface].status }} --> Shutdown
    }
```
{% endfor %}
Figure 7-32 illustrates the rendered Mermaid Markdown State
diagram.
Figure 7-32 show ip interface brief as Mermaid State Diagram
Markdown: Mermaid Entity Relationship Diagrams
Along with flow charts, class diagrams, and state diagrams, we
can also represent entity relationships in Mermaid diagrams.
“An entity–relationship model (or ER model) describes
interrelated things of interest in a specific domain of
knowledge. A basic ER model is composed of entity types
(which classify the things of interest) and specifies relationships
that can exist between entities (instances of those entity types).”
- Peter Chen
http://faculty.ndhu.edu.tw/~wpyang/DatabaseTeachingCenter/File2AdvancedDB/4References/erd.pdf
Representing network state as entity relationship diagrams is
extremely powerful, and they are easy to create with Mermaid,
as demonstrated in Example 7-17.
Example 7-17 Transform JSON Mermaid entity relationship
diagram
aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_Entity_Relationship, device_name=testbed.devices)

# Testcases – Markdown Mermaid Entity Relationship File
class Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_Entity_Relationship(aetest.Testcase):
    """Capture Show IP Interface Brief and transform to Mermaid Entity Relationship documentation"""

    @aetest.test
    def save_markdown_mermaid_entity_relationship_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Load the Jinja2 template
        md_mermaid_entity_relationship_show_ip_interface_brief = self.device.api.load_jinja_template(
            path="",
            file="markdown_mermaid_entity_relationship.j2",
            to_parse_interfaces=self.parsed_show_ip_interface_brief['interface'])
        # Save Markdown Mermaid Entity Relationship file
        with open("Show IP Interface Brief Mermaid Entity Relationship.md", "w") as md_file:
            md_file.write(md_mermaid_entity_relationship_show_ip_interface_brief)
The Entity Relationship Diagram, again, has its own syntax as
shown in Example 7-18.
Example 7-18 The markdown_mermaid_entity_relationship.j2
Jinja2 template
{% for interface in to_parse_interfaces %}
```mermaid
erDiagram
{{ interface }}
{{ interface }}_IPAddress
{{ interface }}_Status
{{ interface }}_Protocol
{{ interface }}_Method
{{ interface }}_InterfaceIsOk
    {{ interface }} ||--o{ {{ interface }}_IPAddress : has
    {{ interface }} ||--o{ {{ interface }}_Status : has
    {{ interface }} ||--o{ {{ interface }}_Protocol : has
    {{ interface }} ||--o{ {{ interface }}_Method : has
    {{ interface }} ||--o{ {{ interface }}_InterfaceIsOk : has
```
{% endfor %}
Figure 7-33 shows the rendered Mermaid Markdown Entity
Relationship diagram.
Figure 7-33 show ip interface brief as Mermaid Entity
Relationship Diagram
Markdown: Mermaid Mind Maps
Similar to markmap, Mermaid also supports its own
interpretation of mind maps. It is not interactive, nor can it be
exported to HTML, but it does render natively in GitHub and is
a handy lightweight version that does not require the markmap
extension. Example 7-19 shows how to write the new pyATS
testcase while Example 7-20 shows the Jinja2 template format
for Mermaid Mind Maps.
Example 7-19 Transform JSON Mermaid mind map
aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_Mind_Map, device_name=testbed.devices)

# Testcases – Markdown Mermaid Mind Map File
class Parse_And_Save_Show_IP_Interface_Brief_to_MD_Mermaid_Mind_Map(aetest.Testcase):
    """Capture Show IP Interface Brief and transform to Mermaid Mind Map documentation"""

    @aetest.test
    def save_markdown_mermaid_mind_map_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Load the Jinja2 template
        md_mermaid_mind_map_show_ip_interface_brief = self.device.api.load_jinja_template(
            path="",
            file="markdown_mermaid_mindmap.j2",
            to_parse_interfaces=self.parsed_show_ip_interface_brief['interface'])
        # Save Markdown Mermaid Mind Map file
        with open("Show IP Interface Brief Mermaid Mind Map.md", "w") as md_file:
            md_file.write(md_mermaid_mind_map_show_ip_interface_brief)
Example 7-20 The markdown_mermaid_mindmap.j2 Jinja2
template
```mermaid
mindmap
  root((Network Interfaces))
{% for interface in to_parse_interfaces %}
    {{ interface }}
      IP Address
        {{ to_parse_interfaces[interface].ip_address }}
      Status
        {{ to_parse_interfaces[interface].status }}
      Protocol
        {{ to_parse_interfaces[interface].protocol }}
      Method
        {{ to_parse_interfaces[interface].method }}
      Interface is OK
        {{ to_parse_interfaces[interface].interface_is_ok }}
{% endfor %}
```
Figure 7-34 shows the rendered Mermaid Mind Map diagram.
Figure 7-34 show ip interface brief as Mermaid Mind Map
As demonstrated, Mermaid adds an entirely new aspect,
visualization, to Markdown, enabling powerful business-ready
visualizations. For a more interactive, tabular experience, basic
HyperText Markup Language (HTML) can be used to create
tables. Once we have an HTML table, we can use a free, open-
source set of tools from https://www.datatables.net to
transform the basic table into an interactive experience.
HTML
HTML is the standard language used to create and design web
pages. One of the fundamental elements in HTML is the table,
which allows web developers to organize and display data in
rows and columns. To create a table in HTML, specific tags such
as <table>, <tr>, <td>, and <th> are used. The <table> tag
initiates the table structure, <tr> defines a row, <td> represents
a data cell, and <th> is used for table headers. By nesting these
tags appropriately, developers can structure data in a tabular
format, making it easier for users to read and understand as
demonstrated in Example 7-21.
Example 7-21 Transform JSON to HTML
aetest.loop.mark(Parse_And_Save_Show_IP_Interface_Brief_to_HTML, device_name=testbed.devices)

# Testcases – HTML File
class Parse_And_Save_Show_IP_Interface_Brief_to_HTML(aetest.Testcase):
    """Capture Show IP Interface Brief and transform to HTML documentation"""

    @aetest.test
    def save_html_file(self, testbed, device_name):
        # Set current device in loop to self.device
        self.device = testbed.devices[device_name]
        # Parse show ip interface brief to JSON
        self.parsed_show_ip_interface_brief = self.device.parse("show ip interface brief")
        # Load the Jinja2 template
        html_show_ip_interface_brief = self.device.api.load_jinja_template(
            path="",
            file="html.j2",
            to_parse_interfaces=self.parsed_show_ip_interface_brief['interface'])
        # Save HTML table to file
        with open("Show IP Interface Brief.html", "w") as html_file:
            html_file.write(html_show_ip_interface_brief)
HTML has a simple and straightforward syntax for making a
basic table from the JSON. You can even use the CSV template
as a base template and replace the commas with the
appropriate opening and closing HTML tags, as demonstrated in
Example 7-22.
Example 7-22 The html.j2 Jinja2 template
<h1>Show IP Interface Brief</h1>
<table>
<thead>
<tr>
<th>Interface</th>
<th>IP Address</th>
<th>Status</th>
<th>Protocol</th>
<th>Method</th>
<th>Interface is OK</th>
</tr>
</thead>
<tbody>
{%- for interface in to_parse_interfaces %}
<tr>
<td>{{ interface }}</td>
        <td>{{ to_parse_interfaces[interface].ip_address }}</td>
        <td>{{ to_parse_interfaces[interface].status }}</td>
        <td>{{ to_parse_interfaces[interface].protocol }}</td>
        <td>{{ to_parse_interfaces[interface].method }}</td>
        <td>{{ to_parse_interfaces[interface].interface_is_ok }}</td>
</tr>
{%- endfor %}
</tbody>
</table>
Figure 7-35 shows a basic HTML page rendered by Jinja2.
Figure 7-35 show ip interface brief as basic HTML table
This simple table can be enhanced in a few ways. First, we can
add some simple logic and change the colors of cells to red or
green based on their state as demonstrated in Example 7-23.
Example 7-23 Enhancing the HTML with Jinja2 logic
{% if to_parse_interfaces[interface].status == "up" %}
        <td style="color: green;">{{ to_parse_interfaces[interface].status }}</td>
{% else %}
        <td style="color: red;">{{ to_parse_interfaces[interface].status }}</td>
{% endif %}
        <td>{{ to_parse_interfaces[interface].protocol }}</td>
        <td>{{ to_parse_interfaces[interface].method }}</td>
{% if to_parse_interfaces[interface].status == "up" %}
        <td style="color: green;">{{ to_parse_interfaces[interface].interface_is_ok }}</td>
{% else %}
        <td style="color: red;">{{ to_parse_interfaces[interface].interface_is_ok }}</td>
{% endif %}
Figure 7-36 shows how we can enhance the appearance of the
data with HTML tags, such as using colors like red or green to
indicate the health of interfaces.
Figure 7-36 show ip interface brief as HTML table with logic
and color
Datatables
Datatables.net is a highly flexible and feature-rich jQuery
plugin designed to enhance the functionality of standard HTML
tables. By integrating Datatables.net with your web application,
you can effortlessly transform a basic HTML table into a
dynamic table with advanced features such as pagination,
sorting, searching, and more. One of the standout features of
Datatables.net is its ability to automatically detect table headers
and footers. The plugin uses the headers and footers to generate
controls like sorting arrows and search fields. The visual
appearance and interactivity are achieved through a
combination of CSS and JavaScript provided by the
Datatables.net library. This means that developers don’t have to
write extensive code to get a professional and functional table;
instead, they can rely on the power of Datatables.net to handle
the heavy lifting.
Create a new Jinja2 template called datatable_header.j2 (see
Example 7-24) and another called datatable_footer.j2 (see
Example 7-25). We will include these templates inside our base
html.j2 file. The header file will contain links to Cascading
Style Sheets (CSS) and JavaScript (JS) files, while the footer will
contain an inline JavaScript. The only other modification to our
HTML template is giving the table an ID.
Example 7-24 datatable_header.j2 example
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script src="https://cdn.datatables.net/1.11.4/js/jquery.dataTables.min.js"></script>
<script src="https://cdn.datatables.net/buttons/2.0.0/js/dataTables.buttons.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.1.3/jszip.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/pdfmake/0.1.53/pdfmake.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/pdfmake/0.1.53/vfs_fonts.js"></script>
<script src="https://cdn.datatables.net/buttons/2.0.0/js/buttons.html5.min.js"></script>
<script src="https://cdn.datatables.net/buttons/2.0.0/js/buttons.print.min.js"></script>
<script src="https://cdn.datatables.net/colreorder/1.5.4/js/dataTables.colReorder.min.js"></script>
<script src="https://cdn.datatables.net/buttons/2.0.0/js/buttons.colVis.min.js"></script>
<script src="https://cdn.datatables.net/keytable/2.6.4/js/dataTables.keyTable.min.js"></script>
<script src="https://cdn.datatables.net/select/1.3.3/js/dataTables.select.min.js"></script>
<script src="https://cdn.datatables.net/fixedheader/3.1.9/js/dataTables.fixedHeader.min.js"></script>
<link rel="stylesheet" href="https://cdn.datatables.net/fixedheader/3.1.9/css/fixedHeader.dataTables.min.css">
<link rel="stylesheet" href="https://cdn.datatables.net/select/1.3.3/css/select.dataTables.min.css">
<link rel="stylesheet" href="https://cdn.datatables.net/keytable/2.6.4/css/keyTable.dataTables.min.css">
<link rel="stylesheet" href="https://cdn.datatables.net/1.11.4/css/jquery.dataTables.min.css">
</head>
Example 7-25 datatable_footer.j2 example
<script type="text/javascript">
$(document).ready(function(){
    $('#datatable thead tr')
        .clone(true)
        .addClass('filters')
        .appendTo('#datatable thead');
    var table = $('#datatable').DataTable({
        keys: true,
        dom: 'Bfrtip',
        lengthMenu: [
            [ 10, 25, 50, -1 ],
            [ '10 rows', '25 rows', '50 rows', 'Show all' ]
        ],
        buttons: [
            'pageLength', 'colvis', 'copy', 'csv', 'excel', 'pdf', 'print'
        ],
        colReorder: true,
        select: true,
        orderCellsTop: true,
        fixedHeader: true,
        initComplete: function () {
            var api = this.api();
            // For each column
            api
                .columns()
                .eq(0)
                .each(function (colIdx) {
                    // Set the header cell to contain the input element
                    var cell = $('.filters th').eq(
                        $(api.column(colIdx).header()).index()
                    );
                    var title = $(cell).text();
                    $(cell).html('<input type="text" placeholder="' + title + '" />');
                    // On every keypress in this input
                    $(
                        'input',
                        $('.filters th').eq($(api.column(colIdx).header()).index())
                    )
                        .off('keyup change')
                        .on('keyup change', function (e) {
                            e.stopPropagation();
                            // Get the search value
                            $(this).attr('title', $(this).val());
                            var regexr = '({search})';
                            //$(this).parents('th').find('select').val();
                            var cursorPosition = this.selectionStart;
                            // Search the column for that value
                            api
                                .column(colIdx)
                                .search(
                                    this.value != ''
                                        ? regexr.replace('{search}', '(((' + this.value + ')))')
                                        : '',
                                    this.value != '',
                                    this.value == ''
                                )
                                .draw();
                            $(this)
                                .focus()[0]
                                .setSelectionRange(cursorPosition, cursorPosition);
                        });
                });
        },
    });
});
</script>
</body></html>
Example 7-26 shows the updated Jinja2 template for the HTML
page, which includes the headers and footers to create a data
table instead of a basic HTML page.
Example 7-26 Updated html.j2
{%- include 'datatable_header.j2' %}
<h1>Show IP Interface Brief</h1>
<table id="datatable">
<thead>
<tr>
<th>Interface</th>
<th>IP Address</th>
<th>Status</th>
<th>Protocol</th>
<th>Method</th>
<th>Interface is OK</th>
</tr>
</thead>
<tbody>
{%- for interface in to_parse_interfaces %}
<tr>
<td>{{ interface }}</td>
        <td>{{ to_parse_interfaces[interface].ip_address }}</td>
{% if to_parse_interfaces[interface].status == "up" %}
        <td style="color: green;">{{ to_parse_interfaces[interface].status }}</td>
{% else %}
        <td style="color: red;">{{ to_parse_interfaces[interface].status }}</td>
{% endif %}
        <td>{{ to_parse_interfaces[interface].protocol }}</td>
        <td>{{ to_parse_interfaces[interface].method }}</td>
{% if to_parse_interfaces[interface].status == "up" %}
        <td style="color: green;">{{ to_parse_interfaces[interface].interface_is_ok }}</td>
{% else %}
        <td style="color: red;">{{ to_parse_interfaces[interface].interface_is_ok }}</td>
{% endif %}
</tr>
{%- endfor %}
</tbody>
</table>
{%- include 'datatable_footer.j2' %}
Figure 7-37 shows the enhanced data table HTML page
complete with search, sort, filter, and a variety of other
capabilities like printing and exporting to other file types.
Figure 7-37 show ip interface brief as HTML datatable
Now we have pagination, sort, search, filter, print, and many
more options like re-ordering columns by simply including the
datatable header and footer code with our basic HTML table.
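Because the footer JavaScript targets the table by its id, a useful sanity check after rendering is to confirm the generated page actually contains a table with the expected id before publishing it. Here is a minimal stdlib-only sketch; the sample page string is hypothetical:

```python
from html.parser import HTMLParser

class TableIdFinder(HTMLParser):
    """Collect the id attribute of every <table> tag in a page."""
    def __init__(self):
        super().__init__()
        self.table_ids = []

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.table_ids.append(dict(attrs).get("id"))

# Hypothetical rendered output from the html.j2 template
page = '<html><body><table id="datatable"><tr><td>Gi1</td></tr></table></body></html>'
finder = TableIdFinder()
finder.feed(page)
print(finder.table_ids)  # the DataTables footer script targets '#datatable'
```

If the id is missing or misspelled, the DataTables initialization silently does nothing, so a check like this catches the most common integration mistake.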
Summary
In the realm of network automation, pyATS stands out as a
robust tool that empowers network engineers to test networks
and create automated documentation. This chapter delves into
the intricacies of using pyATS to generate network
documentation in various formats, including JSON, YAML, CSV,
Markdown tables, Mermaid diagrams, and HTML integrated
with Datatables.

At its core, pyATS facilitates the extraction of network data,
which can then be transformed and rendered into structured
formats. The use of Jinja2 templates further enhances this
process, allowing for the creation of customized CSV files,
Markdown tables, and other formats tailored to specific needs.
Particularly noteworthy is the integration with Mermaid, a
popular tool for generating diagrams, and HTML combined
with Datatables, which transforms basic tables into dynamic,
interactive ones.

The power of pyATS lies not just in its versatility but also in its
ease of use and safety. For those embarking on their network
automation journey, pyATS serves as an excellent starting point.
Its intuitive nature ensures that even those new to automation
can harness its capabilities without a steep learning curve.
Moreover, running it in "read-only" mode ensures that network
operations remain uninterrupted and secure.

For enterprises, the value proposition of pyATS is undeniable.
In an era where network complexities are ever-increasing,
having fully automated and up-to-date documentation is not
just a luxury but a necessity. pyATS provides this, ensuring that
network configurations, topologies, and other critical data are
always at one's fingertips. This not only aids in troubleshooting
and network optimization but also in compliance and auditing
processes.

In conclusion, pyATS is more than just a tool; it's a
transformative solution for modern network management. Its
combination of power, ease of use, and ability to safely gather
network state makes it an invaluable asset for any enterprise
aiming for efficient and automated network operations.
Chapter 8. Automated Network Testing
In this chapter, we delve into the realm of automated network
testing, leveraging the robust capabilities of Cisco’s pyATS
framework in conjunction with the Test-Driven Development
(TDD) methodology. The interaction of pyATS and TDD paves the
way for a meticulous testing paradigm, enabling not only safe,
read-only testing but also actionable testing that encompasses
configuration management in the face of failed tests. Through a
pragmatic lens, we will explore real-world use cases illustrating
how these intertwined methodologies foster a resilient, self-
healing network infrastructure. By engendering a proactive
testing culture, we aim to significantly mitigate network
vulnerabilities and ensure a higher standard of network
reliability and performance. This chapter is set to equip you
with the knowledge and practical insights to navigate the
complex landscape of automated network testing and
configuration management, showcasing the profound impact of
a well-orchestrated testing strategy on network robustness and
operational excellence. We can connect to our devices using
traditional SSH or use modern interfaces such as RESTCONF to
gather the network state. This connection approach is
important in determining if you will be using pyATS parsers or
RESTCONF YANG endpoints.
This chapter covers the following topics:
An approach to network testing
Software version testing
Interface testing
Neighbor testing
Reachability testing
Intent-validation testing
Feature testing
An Approach to Network Testing
Heading into the domain of network testing is akin to
navigating a labyrinth, with a myriad of pathways unfolding
with every step. A well-thought-out approach is our compass in
this scenario, guiding us through the intricacies and ensuring
that we emerge triumphant on the other side. Embracing the
principles of Test-Driven Development (TDD) and adapting its
iterative rhythm to the network’s beat forms the crux of our
strategy. It’s like having a friendly debate with the network—
proposing a point (our test), seeing how the network responds,
and then tweaking either the network or our stance to reach a
consensus. Our primary metric for assessing the network’s
condition is a specific benchmark parameter. Alongside this
benchmark, I’ve initialized a descriptively named variable. If
this variable is set to true by the conclusion of our evaluation, it
indicates that the network did not meet our expectations and
the test is deemed unsuccessful.
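The benchmark-plus-flag pattern described above can be sketched in a few lines of Python. The counters, threshold, and variable names below are illustrative, not pyATS APIs:

```python
# Hypothetical parsed interface input-error counters
input_errors = {"GigabitEthernet1": 0, "GigabitEthernet2": 12}

# Benchmark: zero input errors per interface
ERROR_THRESHOLD = 0

# Collect every interface that misses the benchmark
failed_interfaces = [
    name for name, errors in input_errors.items() if errors > ERROR_THRESHOLD
]

# Descriptively named flag: True means the network missed our expectation
test_failed = bool(failed_interfaces)
print(test_failed, failed_interfaces)
```

In an actual pyATS testcase, the flag would drive a call to `self.failed()` or `self.passed()` at the end of the test method.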
It’s pivotal to highlight how networks and the Test-Driven
Development (TDD) approach are integral to the Software
Development Life Cycle (SDLC). In the realm of network testing,
TDD is not merely a methodology, it embodies the essence of
software testing, a core part of the SDLC. By following a test-
driven approach, we preemptively address potential defects
and ensure that each phase of the lifecycle meets the prescribed
quality standards through continuous validation. This proactive
stance not only enhances the reliability and efficiency of the
network but also ensures a seamless integration of network
functionalities within the broader software development
process.
Additionally, I like to jazz up my pyATS logs using the Rich
library. Rich is a Python library for rich text and beautiful
formatting of console and HTML output, making logs a visual
delight rather than a chore to sift through. With Rich, my
pyATS logs transform into a
colorful, easy-to-decipher narrative of the test journey, where
red and green indicators instantly tell me if a test has passed or
failed. It’s like having a traffic light system for my test results,
making it super intuitive to interpret the outcomes, and
elevating my pyATS test jobs to a production-grade finesse. This
marriage of TDD, pyATS, and Rich not only ensures a robust
testing framework but also makes the process visually engaging
and professional, bridging the gap between meticulous testing
and user-friendly reporting.
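The traffic-light effect Rich provides can be approximated with plain ANSI escape codes; this stdlib-only stand-in is our own sketch, not the Rich API:

```python
# ANSI escape codes for green, red, and reset
GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

def verdict_line(test_name: str, passed: bool) -> str:
    """Return a green PASSED or red FAILED line for a test result."""
    color, word = (GREEN, "PASSED") if passed else (RED, "FAILED")
    return f"{color}{test_name}: {word}{RESET}"

# Hypothetical test names, colored at a glance like a traffic light
print(verdict_line("interface_errors", True))
print(verdict_line("ospf_neighbors", False))
```

Rich goes much further, with tables, progress bars, and HTML export, but the at-a-glance red/green verdict is the core of the idea.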
In the ideal world, we present our argument, the network
rebuffs, we fine-tune the network’s stance, and voila, we are in
agreement. But sometimes, it’s our argument that needs a slight
rephrasing, maybe changing a stringent equality check to a
more flexible greater-than or less-than comparison. This
iterative dance of adjustments is what fine-tunes our network,
leading to a harmonious dialogue that ensures robust
performance. Here’s a glimpse into some general good practices
that form the bedrock of our approach to network testing:
Keep the unit tests small.
Keep the structure as setup, execute, cleanup.
Always test a well-known state.
Limit, or eliminate, dependencies between tests.
Complex is fine; complicated is not.
Avoid "all-knowing" tests.
Set up a threshold and test against that threshold.
Evaluat