Automation With PowerShell
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools
and many iterations to get reader feedback, pivot until you have the right book and build
traction once you do.
Foreword
Contributors
    Alain Tanguy
    Allen Chin
    Amy Zanatta
    Bill Kindle
    C.J. Zuk
    Chad Miars
    Christian Coventry
    Felipe Binotto
    Greg Onstot
    James Petty
    Joe Houghes
    John Hermes
    Jordan Borean
    Kevin Laux
    Kieran Jacobsen
    Kirill Nikolaev
    Martha Clancy
    Matt Corr
    Michael B. Smith
    Michael Lotter
    Michael Zanatta
    Nicholas Bissell
    Rob Derickson
    Steven Judd
    Wes Stahler
Acknowledgements
Disclaimer
Introduction
    About OnRamp
    Prerequisites
    A Note on Code Listings
    Feedback

I Collaboration

1. Introduction to Git
    1.1 Understanding Terminology
    1.2 Creating a Local Repository
    1.3 Cloning an Existing Repository
    1.4 Understanding the Flow of Working in Git
    1.5 Your First Commit
    1.6 Creating a Branch
    1.7 Merging Branches
        1.7.1 Merge Commits
        1.7.2 Merge Conflicts
    1.8 Stashing Changes
    1.9 Rolling Back When Things Go Wrong
        1.9.1 Hard Reset in Action
    1.10 Connecting to a Remote Repository
    1.11 Starting Over When Things Really Go Wrong
        1.11.1 Starting From Scratch
    1.12 Conclusion
    1.13 Modern IT Automation With PowerShell Extras
    1.14 Further Reading

2. Code Reviews
    2.1 Purpose of Code Reviews
    2.2 How to Start with Code Reviews
        2.2.1 Define Code Conventions for Your Team or Project
        2.2.2 Define the Code Review Process for Your Team or Project
    2.3 Things to Consider When Performing a Code Review
    2.4 Code Review Best Practices
        2.4.1 Keep Your Changes Small
        2.4.2 Provide Constructive Feedback
        2.4.3 Balance Nit-Picks with Major Comments
        2.4.4 Create Pull Request Templates
        2.4.5 When to Approve
        2.4.6 Talk to Each Other
        2.4.7 Use Automation
    2.5 Tools to Help with Code Reviews
        2.5.1 PSScriptAnalyzer
        2.5.2 PowerShell Extension for Visual Studio Code
    2.6 Further Reading

II PowerShell Testing

3. The AAA Approach
    3.1 Arrange, Act, and Assert
        3.1.1 Arrange
        3.1.2 Act
        3.1.3 Assert
        3.1.4 Benefits of the AAA Approach
    3.2 Pester 5.0
        3.2.1 Pester Installation
    3.3 The Star Wars API Example
        3.3.1 So How Does It Work?
        3.3.2 Example Code
        3.3.3 Example Code Output
    3.4 Pester Tests
        3.4.1 Simple Tests
        3.4.2 Pester Verbosity
        3.4.3 Simple Test Output
        3.4.4 Mocked Tests
        3.4.5 Mocked Test Output
        3.4.6 Complex Tests
        3.4.7 Complex Test Output
    3.5 Conclusion
    3.6 Further Reading

4. Mocking
    4.1 Mocking and Mock Testing
        4.1.1 Stubs, Fakes, and Mocks
    4.2 Mocking in Pester with Mock
    4.3 Mock Testing and Verifiable Mocks
        4.3.1 Should -Invoke
        4.3.2 Should -InvokeVerifiable
        4.3.3 Running the Mock Assertion Tests
    4.4 Mock Scoping
    4.5 Mocking in the Module Scope with -ModuleName
        4.5.1 Mock Testing in the Module Scope
        4.5.2 Running the Module Scope Tests
    4.6 Dynamic Mock Behavior with -ParameterFilter
        4.6.1 Filtered Mock Assertions
        4.6.2 Running the Filtered Mock Tests
        4.6.3 Restricting Mock Calls Further with -ExclusiveFilter
    4.7 Calling Real Dependencies While They're Mocked
    4.8 Removing Parameter Typecasting and Validation
    4.9 Mocking Native Applications
    4.10 Mocking .NET Objects with New-MockObject
    4.11 Next Steps
    4.12 Further Reading

9. Logging
    9.1 Why Log?
    9.2 What Makes for Good Logging
    9.3 What Should Never Be Logged
    9.4 Logging Basics
    9.5 Enable System-Level Logging
        9.5.1 Windows
        9.5.2 Event Log Locations
    9.6 Linux, macOS, WSL
    9.7 Logging for Troubleshooting
        9.7.1 Writing Console Output
    9.8 Persistent Logging Options
        9.8.1 PowerShell Transcription
        9.8.2 Logging to Files
        9.8.3 Using Tee-Object
    9.9 History
        9.9.1 Built-in History
        9.9.2 PSReadline History
        9.9.3 Writing to Windows Event Logs
        9.9.4 Cloud Shell
        9.9.5 Using Third Party modules for logging
    9.10 Summary
    9.11 Further Reading

Afterword

Index
    A, SYMBOLS
    B, C
    D, E
    F, G
    H, I
    J, K, L, M
    N
    O
    P
    R
    S
    T
    U, V, W
    Y
Foreword
By Orin Thomas
Organizations adopt information technology to solve a set of problems. The problems could be
as simple as “how do we keep track of customer orders?” or more complicated ones involving
the analysis of data to determine patterns that might provide some new insight that leads to
a business advantage. Organizations will choose a technology not just because they think it is
fun and cool, but because they can use it to solve a problem that they really need solved to
accomplish an organizational objective. The creators of information technologies often have a set
of problems in mind when building those technologies. It might be “how do I simplify the process
of creating and presenting slides at conferences” or “how can we better automate administrative
tasks on Microsoft platforms”. Every system, application, or programming language has a
particular set of tasks it was designed to do very well because the people that created it needed
to scratch an itch that the currently available tools did not adequately address.
Over time, creators and developers add and refine features to their products or tools because those
features allow users of the technology to solve additional or more complex problems that are
considered important. But the key is understanding that to the creator and developer there is
a set of problems that are “in scope” that they envisage their project solving and a lot that are
beyond that scope that it was never intended to address. William Gibson, author of one of the
earliest and most important cyberpunk novels, said "The street finds its own uses for things". One
of the interpretations of this is that users often find uses for products that go way beyond what
the original developers of that product envisaged it being used for. Good products are useful for
things that the original creators never imagined. No product can do everything, but with a bit of
creativity, many products can do things that are a complete surprise to other users and even the
original creators of that project.
Technical communities are groups of people that are enthusiastic about specific systems, applications,
technologies, and languages. These communities spend a great amount of time not only
sharing how to be more proficient in doing something that the technology that they are interested
in was designed to do, but also sharing all of the amazing things that the technology does that no
one could have imagined. Jeffrey Snover often remarks how surprised he is at all the ingenious
things people find that they can do with PowerShell that he never conceived of it being used for.
And that’s what this book is about—sharing with the reader a collection of fascinating things
that you can do with PowerShell. Not only things that you already do that you might be able
to do in a much more efficient or elegant way, but a collection of tasks that you can do with
PowerShell that exceed what you, and perhaps even Jeffrey Snover, thought the language was
capable of accomplishing.
Contributors
This section includes the names and biographies of the authors and editors of this project in
alphabetical order.
Alain Tanguy
Role: Author
Alain has been an avid PowerShell user throughout his IT career, automating and solving many
problems. Now working as an IT Engineer, he is using his knowledge to answer IT Infrastructure
Challenges. He enjoys pizza, pixel art, funky music, and naps. Alain is reachable on Twitter at
@Alain__Tanguy¹ and LinkedIn².
Allen Chin
Role: Linguistic Editor
Allen first learned and made use of PowerShell in 2018 while working as technical support to
maintain and improve existing scripts. A few years later, he is now an application development
analyst and the main contact for applications, reports, and automation.
Amy Zanatta
Role: Cover Artist
Amy is a Video Editor, Motion Graphics Designer and Personal Stylist, working at the Nine
Network Australia. Amy posts regularly on her YouTube Channel StyleWithinGrace³, with
Australian fashion ideas and inspirations. You can follow her on Instagram⁴ and Twitter @StyleGraceAmy⁵.
¹https://twitter.com/Alain__Tanguy
²https://www.linkedin.com/in/alaintanguy
³https://www.youtube.com/channel/UCCF1px3YSkNKB6F0HtySI9g
⁴https://www.instagram.com/stylewithingrace/
⁵https://www.twitter.com/StyleGraceAmy/
Bill Kindle
Role: Senior Editor
Husband to a wonderful woman that he doesn’t deserve, and the father of two adorable children.
Bill is a former career Systems Administrator turned Cyber Security Engineer currently working
for Corsica Technologies⁶. Bill was an author for The PowerShell Conference Book Volume
2 and an editor for Volume 3. His role focuses on automation engineering and supporting a
Security Operations Center. Bill has a passion for helping others in IT and occasionally does
presentations for @FortWayneVMUG⁷. You can find some of Bill’s work at AdamTheAutomator⁸
and TechSnips LLC⁹.
C.J. Zuk
Role: Linguistic Editor
C.J. Zuk is a current college student and works as a federal government contractor in the
Washington, D.C. Metropolitan Area. She began to use PowerShell in high school as her preferred
scripting language because of its cross-platform compatibility. C.J. likes to spend her time with
her fiancee, cats, and volunteering at her synagogue. You can reach her at [email protected]
and on LinkedIn¹⁰.
Chad Miars
Role: Linguistic Editor
Chad’s formal training is in mechanical engineering, but his current role is Director of Business
Ops at Ascent Inc.¹¹. He enjoys using PowerShell to connect and automate all parts of the business.
You can find Chad on LinkedIn¹².
Christian Coventry
Role: Linguistic Editor
Christian is a recent graduate, and it was his tertiary studies that introduced him to PowerShell.
He currently works as a Technical Officer with the Queensland Department of Education¹³.
Christian was an editor for The PowerShell Conference Book Volume 3. You can find him on
LinkedIn¹⁴.
⁶https://corsicatech.com
⁷https://twitter.com/FortWayneVMUG
⁸https://adamtheautomator.com
⁹https://techsnips.io
¹⁰https://www.linkedin.com/in/cj-zuk/
¹¹https://ascenthvac.com
¹²https://www.linkedin.com/in/chad-miars-6489a2122/
¹³https://education.qld.gov.au/
¹⁴https://au.linkedin.com/in/christian-coventry-9a0031157
Felipe Binotto
Role: Author
Felipe is a specialist in Microsoft technologies and has worked in various roles in the IT industry
for the last 13 years. Currently, he works as a Senior Customer Engineer for the Customer
Success Unit at Microsoft Australia. In this role, his focus is on Azure infrastructure, security,
and automation. Felipe has contributed to various PowerShell forums and Microsoft Docs, and
he currently blogs at Azure Gear¹⁵. You can follow him on Twitter at @felipebinotto¹⁶ or on
LinkedIn¹⁷.
Greg Onstot
Role: Author
Husband, Father, Contributor to the PowerShell Conference Book Vol. 2. Greg has been working
in IT Infrastructure and Cybersecurity Engineering for over 20 years. PowerShell is an integral
part of automating that work.
James Petty
Role: DevOps Collective Liaison
James currently serves as the CEO of the DevOps Collective Inc., a nonprofit working in the
technology education space. He helps manage a $1M+ annual budget that includes multiple conferences
and PowerShell Saturdays events across the US. The nonprofit focuses on PowerShell,
automation, and DevOps and runs numerous free online resources, including PowerShell.org. He
is also a co-organizer and co-founder of the Chattanooga PowerShell UserGroup (established in
September 2016). James is also a recipient of the Microsoft MVP award in Cloud and Datacenter
Management. He currently lives in the beautiful Chattanooga, Tennessee area with his amazing
wife. James’ passion lies with automation using PowerShell and all things related to Windows
Server. He has almost a decade of experience as an infrastructure admin for a large enterprise,
helping manage thousands of users and machines. He knows a broad range of products, including
patch management, Active Directory, Group Policy, and the Windows Server operating system.
¹⁵https://azuregear.com
¹⁶https://twitter.com/felipebinotto
¹⁷https://www.linkedin.com/in/felipebinotto/
Joe Houghes
Role: Technical Editor
Husband, Father, Community Geek. Joe Houghes is a co-leader of @ATXPowerShell¹⁸ and
@AustinVMUG¹⁹ user groups in Texas and a member of the @vBrownBag²⁰ crew. He is currently
a Solutions Architect for Veeam, focused on automation & integration. Joe spends most of
his time working within VMware environments when he is not active in planning or hosting
community events. You can find Joe on Twitter, @jhoughes²¹, or his blog²².
John Hermes
Role: Linguistic Editor
John is an agile software developer and systems engineer with a career focus on security
and resilience. He frequently develops PowerShell modules supporting datacenter management,
legacy systems, and cloud service integration. He is also an unabashed Unix greybeard who still
enjoys learning new things and rarely updates his social media. John resides with his extremely
patient and loving wife near Dayton, Ohio.
Jordan Borean
Role: Technical Editor
Jordan is a Software Engineer at Red Hat²³, working on the Windows integrations for Ansible.
He originally worked on Java-based programs for a large company but felt the draw to open
source software and has been an avid contributor since. Jordan mostly focuses on Python and
PowerShell-based languages, and he is committed to trying to bridge the Windows and Linux
worlds and make it easier for them to work with each other. Some projects that he works on
are pypsrp²⁴, smbprotocol²⁵, pypsexec²⁶, and more recently pyspnego²⁷. When he finds some free
time, Jordan blogs on Blogging for Logging²⁸, which covers technologies like PowerShell, Ansible,
Windows protocols, and anything else that takes his fancy. You can usually get in contact
with him on the PowerShell Discord server²⁹, or various IRC Freenode channels like #ansible,
#Powershell, #packer-tool, and others.
¹⁸https://twitter.com/ATXPowerShell
¹⁹https://twitter.com/AustinVMUG
²⁰https://twitter.com/vbrownbag
²¹https://twitter.com/jhoughes
²²https://www.fullstackgeek.net/
²³https://www.redhat.com/en
²⁴https://github.com/jborean93/pypsrp
²⁵https://github.com/jborean93/smbprotocol
²⁶https://github.com/jborean93/pypsexec
²⁷https://github.com/jborean93/pyspnego
²⁸https://www.bloggingforlogging.com/
²⁹https://aka.ms/psdiscord
Kevin Laux
Role: Author and Quality Assurance Editor
Kevin is a manager for an orchestration platform team. He is passionate about PowerShell and
has been leading training classes for his colleagues since the release of PowerShell v3. Kevin also
serves as co-leader of the Research Triangle PowerShell User Group³⁰. In addition to PowerShell,
he is always tinkering with new technology in his home lab and trying to learn everything he
can. You can follow him on Twitter, @rsrychro³¹, and GitHub³².
Kieran Jacobsen
Role: Author and Technical Editor
Kieran Jacobsen (he/him) combines his passion for business process automation, systems integration,
and cybersecurity to help organizations rapidly grow and evolve. Kieran's involvement
in the technology community has seen him present at Microsoft’s Ignite the Tour, NDC Sydney,
and CrikeyCon. Kieran is well known for his security-focused presentations that blend real-world
examples with storytelling. Microsoft has recognized Kieran’s contributions to the community
by awarding him with the Most Valuable Professional since 2017. Kieran is also a member of
the GitKraken Ambassador Program. Kieran lives in Melbourne, Australia, with his husband
and Burmese cat. In his spare time, Kieran enjoys computer games, Dungeons & Dragons, board
games, and Melbourne’s amazing food culture.
Kirill Nikolaev
Role: Author and Technical Editor
Kirill has more than 15 years of experience in IT, with specialization in Windows Server
infrastructure, virtualization, information security, and automation. He began using PowerShell
almost immediately after its release in 2006 and has been using it ever since. He is currently Head of the
Windows Administration Team at Fozzy.com³³, an honest hosting provider, where he continues
to give back to the community by sharing automation solutions through their GitHub account³⁴.
You can follow him on Twitter, @exchange12rocks³⁵, or subscribe to his technical blog³⁶.
³⁰https://rtpsug.com
³¹https://twitter.com/rsrychro
³²https://github.com/KevinLaux
³³https://fozzy.com
³⁴https://github.com/FozzyHosting
³⁵https://twitter.com/exchange12rocks
³⁶https://exchange12rocks.org
Martha Clancy
Role: Linguistic Editor
Martha came to PowerShell and DevOps by way of database administration and is passionate
about using code and automation to help everyone do their jobs more easily. You can find Martha
on Twitter, @marclancy³⁷, or her blog³⁸.
Matt Corr
Role: Author
Husband, father, passionate about automation and process improvement. Matt has over 20
years of IT industry experience and is currently working as a DevOps Solution Specialist for
MOQdigital³⁹. He is very passionate about PowerShell and is the go-to person in his teams for
anything script or automation-related. He has experience with many build and deployment tools,
such as Azure DevOps, Octopus Deploy, TeamCity, Terraform, and PowerShell. You can find
Matt on Twitter (@mattcorr⁴⁰), LinkedIn⁴¹ or his blog⁴².
Michael B. Smith
Role: Quality Assurance Editor
Michael is an IT professional with over 35 years of experience. He began using
PowerShell during the Exchange 2007 Server beta and has been deeply into scripting with
PowerShell ever since. He is a 13-time recipient of the Microsoft MVP award in Exchange Server.
He has written many articles about Exchange, Active Directory, PowerShell, Windows Server,
and Azure topics; and is passionate about presenting/training as well. You can find Michael on
Twitter @EssentialExch⁴³, at his blog The Essential Exchange⁴⁴, and in the Facebook group Azure
Support⁴⁵.
³⁷https://twitter.com/marclancy
³⁸https://marthaclancy.com
³⁹https://www.moqdigital.com.au
⁴⁰https://www.twitter.com/mattcorr
⁴¹https://www.linkedin.com/in/mattcorr/
⁴²https://www.intrepidintegration.com
⁴³https://twitter.com/essentialexch
⁴⁴https://www.essential.exchange
⁴⁵https://www.facebook.com/groups/AzureSupport
Michael Lotter
Role: Author
Michael has nearly a decade of system and network administration experience between South
Africa and the US, from small shops to large enterprises, with a focus on automation and
cybersecurity. He currently works in the finance sector as a systems engineer, where he uses
PowerShell to automate complex processes. In addition to automation through PowerShell,
he enjoys leveraging Azure and Intune to reduce organizations’ on-premises footprint while
maintaining high availability.
Michael Zanatta
Role: Author and Editor-in-Chief
Michael is a Microsoft MVP (Cloud and Datacenter Management), PowerShell SME, Speaker,
Advocate, and Streamer, contracting as a PowerShell Developer for the Australian Federal
Government. Michael has contributed to the PowerShell Conference Book Volume 2 and Volume
3, first as an author and stand-in editor on Volume 2 and then as the Senior Editor on Volume 3.
You can follow him on Twitter, @PowerShellMich1⁴⁶, or LinkedIn⁴⁷. Michael is a co-founder
of the Brisbane Infrastructure DevOps User Group⁴⁸ and its YouTube channel⁴⁹, and he hosts a
livestream on Twitch⁵⁰.
Nicholas Bissell
Role: Author and Senior Editor
Though Nicholas’s formal background is in research chemistry, he has over ten years of
programming experience. In his spare time, he is a freelance software developer and mentor.
Nicholas has previously worked as a sound engineer and a game developer. He is passionate
about automation, open-source software, and the PowerShell community. You can find him on
GitHub⁵¹, StackOverflow⁵², or Reddit⁵³.
⁴⁶https://twitter.com/PowerShellMich1
⁴⁷https://www.linkedin.com/in/michael-zanatta-61670258/
⁴⁸https://www.meetup.com/Brisbane-PowerShell-User-Group
⁴⁹https://www.youtube.com/channel/UCQfLvFYohCCm_gTPEUfaAbw
⁵⁰https://www.twitch.tv/PowerShellMichael
⁵¹https://github.com/TheFreeman193
⁵²https://stackoverflow.com/users/12959131/thefreeman193
⁵³https://www.reddit.com/user/thefreeman193
Rob Derickson
Role: Quality Assurance Editor
Rob is an IT professional with 19 years in the field. Since 2008, PowerShell and the PowerShell
community have been instrumental in his career success. You can find him on the PowerShell
Discord server⁵⁴ and tweeting nothing on Twitter @RobDerickson⁵⁵.
Steven Judd
Role: Senior Editor
Steven Judd is a 25+ year IT Pro and currently a Windows Systems Engineer at Meta Platforms
Inc.⁵⁶ with an emphasis on Enterprise Messaging and Digital Loss Prevention. He has been
using PowerShell since 2010. He was an author and editor on The PowerShell Conference Book
Volume 3⁵⁷, has co-developed a custom training program for PowerShell, and is an occasional
presenter at PowerShell user groups. He loves to help people learn and recognize the value of
automation. He spends his free time learning more about PowerShell, digital security, and cloud
technologies, along with creating and telling Dad jokes⁵⁸. You can find him hanging out on
the PowerShell Discord Server⁵⁹ bridge channel, taking care of his family, running marathons,
playing the cello, plus a handful of other hobbies he can’t seem to quit. Please follow him on
Twitter, @stevenjudd⁶⁰, read his blog⁶¹, and review, use, and improve his code on GitHub⁶².
Wes Stahler
Role: Technical Editor
Wes Stahler has over 25 years of Information Technology experience as a Developer, Systems
Administrator, and Manager of an Identity and Access Management team. He enjoys evangelizing
PowerShell's merits and has presented numerous times nationally at the Microsoft Health
Users Group and locally for the Central Ohio PowerShell Users Group. He strives to automate
within Exchange and Active Directory and advocates on the “Power of the Shell.” Available on
Twitter at @stahler⁶³.
⁵⁴https://aka.ms/psdiscord
⁵⁵https://twitter.com/RobDerickson
⁵⁶https://www.linkedin.com/in/stevenjudd/
⁵⁷https://leanpub.com/psconfbook3
⁵⁸https://www.youtube.com/watch?v=BZZM6i8AE1Y
⁵⁹https://aka.ms/psdiscord
⁶⁰https://twitter.com/stevenjudd/
⁶¹https://blog.stevenjudd.com/
⁶²https://github.com/stevenjudd
⁶³https://twitter.com/stahler
Acknowledgements
This book was made possible by a multitude of people, not just the initial team of editors and
the writers, but their family, friends, mentors, peers, and—most of all—you, the readers.
By reading this book, you’re helping to make sure that our field expands and grows, creating
opportunities for folks who otherwise might never see them. That’s incredible and needs no
qualifiers.
This project owes itself to the PowerShell community and everyone who gave it their time,
energy, and money.
Disclaimer
All code examples shown in this book have been tested by each chapter author and every effort
has been made to ensure that they’re error-free. However, since every environment is different,
you should thoroughly test the examples in a non-production or lab environment before using them
in a production environment.
All data and information provided in this book is for educational purposes only. The editors
make no representations as to the accuracy, completeness, currentness, suitability, or validity
of any information in this book and won’t be liable for any errors, omissions, or delays in this
information or any losses, injuries, or damages arising from its display or use. All information
is provided on an as-is basis.
This disclaimer is provided simply because someone, somewhere will ignore this disclaimer and
if they do experience problems or a “resume generating event,” they have no one to blame but
themselves. Don’t be that person!
Introduction
By Michael Zanatta
Hello Reader!
Modern Automation with PowerShell was an initiative between Steven Judd and myself to create
a textbook as a love-letter for the community by the community. We wanted to provide an
intermediate-level resource, different in style from the previous PowerShell Conference Books. We
wanted to focus on a deeper understanding of the inner workings of PowerShell, share best
practices and tips, and have this book serve as a study resource and lesson guide. All royalties
for this book will go to the “OnRamp” program (see below).
To our amazing Authors and Editors, and also their Partners and Families, thank you for your
time and sacrifice. This book could not exist without you.
To you, the Reader, I hope you enjoy reading this book as much as we enjoyed writing it.
This is a Leanpub “Agile-published” book. All the work for this book has been completed.
However, as issues are reported, supplementary updates may be released. Leanpub will
send out an email when the book is updated. These revisions will be available at no extra
charge. To provide feedback, use the “Email the Authors” link on the book’s Leanpub
web page⁶⁴. Whether it’s a code error, a typo, or a request for clarification, our editors
will review your feedback, make changes, and re-publish the book. Unlike the traditional
paper publishing process, your feedback can have an immediate effect.
About OnRamp
OnRamp is an entry-level education program focused on PowerShell and Development Operations.
It is a series of presentations that are held at the PowerShell + DevOps Global Summit⁶⁵
and is designed for entry-level technology professionals who have completed foundational
certifications such as CompTIA A+ and Cisco IT Essentials. No prior PowerShell experience is
required. Basic knowledge of server administration is beneficial. OnRamp ticket holders will be
able to network with other Summit attendees who are attending the scheduled Summit sessions
during keynotes, meals, and evening events.
Through fundraising and corporate sponsorships, The DevOps Collective, Inc.⁶⁶ will be offering
several full-ride scholarships to the OnRamp track at the PowerShell + DevOps Global Summit.
All (100%) of the royalties from this book are donated to the OnRamp scholarship
program.
⁶⁴https://leanpub.com/modernautomationwithpowershell
⁶⁵https://powershell.org/summit/
⁶⁶https://devopscollective.org/
More information about the OnRamp track⁶⁷ at the PowerShell + DevOps Global Summit and
their scholarship program⁶⁸ can be found on the PowerShell.org⁶⁹ website.
See the DevOps Collective Scholarships cause⁷⁰ on Leanpub.com⁷¹ for more books that support
the OnRamp scholarship program.
Prerequisites
Prior experience with PowerShell is recommended; this book is written for an intermediate audience.
A Note on Code Listings
Paperback readers can access digital copies of examples from the book at:
https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/Examples⁷²
If you’ve read other PowerShell books from LeanPub, you probably have seen some variation on
this code sample disclaimer. The code formatting in this book only allows for about 75 characters
per line before the text will start automatically wrapping. All attempts have been made to keep
the code samples within that limit, although sometimes you may see some awkward formatting
as a result.
For example:
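The listing below is an illustrative sketch, using the same Get-CimInstance call that appears splatted later in this section, of how such a line would render:

Get-CimInstance -ComputerName $computer -ClassName Win32_LogicalDisk -Filte\
r 'DriveType=3' -Property 'DeviceID', 'Size', 'FreeSpace'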
Here, you can see the default action for a line that is too long—it gets word-wrapped, and a
backslash is inserted at the wrap point to let you know it has wrapped. Attempts have been
made to avoid those situations, but they may sometimes be unavoidable. Where a command would
otherwise wrap, splatting⁷³ is used instead.
⁶⁷https://powershell.org/summit/summit-onramp/
⁶⁸https://powershell.org/summit/summit-onramp/onramp-scholarship/
⁶⁹https://powershell.org/
⁷⁰https://leanpub.com/causes/devopscollective
⁷¹https://leanpub.com/
⁷²https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/Examples
⁷³https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_splatting?view=powershell-7
$params = @{
    ComputerName = $computer
    ClassName    = 'Win32_LogicalDisk'
    Filter       = 'DriveType=3'
    Property     = 'DeviceID', 'Size', 'FreeSpace'
}
Get-CimInstance @params
If you read this book on a Kindle, tablet, or another e-reader, use the PDF manuscript,
not EPUB. EPUB has known formatting issues with Block Types (For Example, Warning,
Tips, Errors), Code Samples, and Annotations.
When writing PowerShell expressions, you shouldn’t be limited by these constraints. All down-
loadable code samples will be in their original form.
Feedback
Have a question, comment, or feedback about this book? Please share it via the Leanpub forum
dedicated to this book. Once you’ve purchased this book, log in to Leanpub, and click on the
“Join the Forum” link in the Feedback section of this book’s webpage⁷⁴.
⁷⁴https://leanpub.com/modernautomationwithpowershell
I Collaboration
“Coming together is a beginning, staying together is progress, and working together is success.”
— Henry Ford
Collaboration within Software Development and DevOps teams is crucial for your team’s success.
In this section, you'll cover the fundamentals of writing code, checking it in using Git, and
performing code reviews for peers. Within Git, an emphasis will be placed on covering critical
aspects of the technology, focusing on terminology, branching structure, creating and merging
branches, and inevitably starting again when things really go wrong. Code reviews will focus
on how to start, things to consider, and most importantly, best practices.
Onwards!
1. Introduction to Git
Git is source code management software that allows many individuals to contribute to a project
at the same time. Every contributor of a Git project has their own local files to which they make
and commit changes. This makes Git a distributed version control system, as opposed to the
server-client model. After their edits are finished, contributors can submit a pull request to have their changes
pulled into the larger project. This chapter introduces you to Git, giving you the tools needed to
contribute to a larger project or even track changes on your own project in a Git repository.
Git Sample
Set a name that’s identifiable and can be used for credit in version history:
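In Git, this is done with git config; the name shown here is a placeholder:

git config --global user.name "FIRST_NAME LAST_NAME"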
¹https://git-scm.com/
If using GitHub or another service, use the email associated with your account. This allows the
service to identify you and associate your account with each history marker.
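Likewise, set the email address (again, the value shown is a placeholder):

git config --global user.email "[email protected]"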
Once configured, open PowerShell and create your project directory. These examples use
C:\Repo as the base path. Depending on your system permissions, operating system, or preference,
you may want to use a different directory:
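One way to create the directory (a sketch; adjust the path to suit your environment):

New-Item -Type Directory -Path C:\Repo\Project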
Directory: C:\Repo
Example 4: Setting the working directory to where the Git repository will be
Set-Location -Path C:\Repo\Project\
PS C:\Repo\Project>
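With the working directory set, initialize the repository. This is a sketch of the command, using -b to name the default branch main so it matches the status output shown later; the output line is typical:

git init -b main

Initialized empty Git repository in C:/Repo/Project/.git/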
Currently, the default branch name of a new repository in Git is master, but this will
soon change to main.² You can use the -b <name> parameter of git init to set the
default branch. You can also control the default name for default branches with the
init.defaultbranch config setting.
You’ll notice that this directory now contains the .git subdirectory.
²Git developers. (2022, Jul. 04). git-init Documentation. Git Documentation. [Online]. Available: https://git-scm.com/docs/git-init.
[Accessed: Jul. 15, 2022].
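Because .git is hidden, list it explicitly; one possible approach:

Get-ChildItem -Hidden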
Directory: C:\Repo\Project
The .git directory includes files and folders that contain the metadata for your Git repository.
Many of the files in the .git folder are human readable using a text editor, but you should avoid
changing them unless you know what you’re doing.
Example 7: Setting the working directory to the parent that’ll contain the cloned Git repository
Set-Location -Path C:\Repo\
PS C:\Repo>
In the below example, you’ll build the URL string in the variable $Extras, since the repository
URL is long. Use the command git clone followed by the URL path of the repository you
would like to clone. In this example, the path is from the variable $Extras, which points to
the Extras³ repository for this book. By default, the local directory name matches the remote
repository name. To make the local directory name more friendly, add the string ‘MITA-Extras’
as an additional parameter.
³https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras
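Putting that together (the URL matches the Extras repository footnote, and MITA-Extras is the friendlier directory name described above):

$Extras = 'https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras'
git clone $Extras MITA-Extras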
This can take some time depending on the project. When done, you’ll end up with a directory
containing all the project files, and the .git subdirectory. As you’ll see below, Git names the
directory MITA-Extras, using the second parameter you pass to git clone.
Directory: C:\Repo
The case-sensitivity of Git branch names can vary between platforms. To avoid confusion,
always use unique branch names regardless of capitalization.
Basic Workflow
In the above example, the main branch keeps the working copy of code meant to be deployed in
production. Changes to the code should be made in the develop branch. When the code is tested
and known to be working, the develop branch can be merged into main using a pull request.
As projects get more complex and more contributors are making changes, the workflow can
become a bit more confusing. These projects can have many branches, each corresponding to
a feature, release, or hotfix. They can also have automated actions that trigger when code is
submitted. They may do things like lint, build, or run tests on the code. It’s this versatility that
makes Git a great tool for projects of all sizes.
One model for managing these branches is called Gitflow. With the Gitflow model, feature
branches are only merged when they're feature-complete. Gitflow can lead to large merges,
but it's a well-known branching model.
An example of Gitflow might look like the following:
Gitflow Workflow
In the above example, the main and develop branches are the only ones that extend the length
of the project. The other branches exist for the duration of their purpose. Once merged, release,
hotfix, and feature branches are removed. Besides the branches mentioned above, there may be
sub-branches that individual contributors use to work on their piece of a feature or other branch.
Another model known as trunk-based development has become more common with the rise
of DevOps and continuous integration/continuous delivery (CI/CD) practices. The trunk-based
model comprises smaller, more frequent updates, making it the preferred model for DevOps/CI
projects.
An example of trunk-based development might look like the following:
Trunk Workflow
In the above example, the main branch or ‘trunk’ exists for the life of the project. Release branches
are created to prepare for production release. Features are checked into the main branch. This
workflow is common in environments where continuous integration is being used.
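Back in the Project repository you initialized earlier, run git status to see where things stand:

git status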
On branch main
No commits yet
Git’s messages can be extremely helpful. They’ll often help you solve problems when you
encounter them. Follow the suggestion of git status and create files to track:
Example 11: Creating and showing the presence of a text file in your repository
Set-Content ./test.txt "This is a test file"
Get-ChildItem
Directory: C:\Repo\Project
Example 12: git status tells you when there are files that the repository doesn’t track yet
git status
On branch main
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
test.txt
nothing added to commit but untracked files present (use "git add" to track)
Example 13: git add adds new or changed file to the index
git add ./test.txt
Check the status and you’ll see the tracked file under ‘changes to be committed’.
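Run git status again:

git status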
On branch main
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: test.txt
Files and changes in the index are staged for the next commit. You’ll also find the term cached,
which means the same thing. Use git rm --cached <file> to remove files from the index.
git restore --staged <file> removes changes to files in the index, which might include
removing the whole file.
The next step is to use git commit and provide a message using the -m parameter followed by your
message in quotation marks. If you don’t provide a message when running git commit, you’ll
be prompted to enter one via the configured text editor. By default, this is a command-line text
editor.
Change Git’s text editor with git config --global core.editor <path>.
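A first commit might look like the following sketch; the message text is illustrative, and the short hash in the output matches the one discussed below:

git commit -m "Add test.txt"

[main (root-commit) 3bab6a6] Add test.txt
 1 file changed, 1 insertion(+)
 create mode 100644 test.txt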
The term root-commit means the first commit of a Git repository, and all new commits and
branches build on this. 3bab6a6 is the first seven characters of the hash of the data in the commit.
The first seven or eight characters of a commit’s hash are commonly used to identify the commit
without displaying the whole hash.
Run git status to see if there are any other steps that need to be completed:
Example 16: git status no longer shows the changes because they’re in the latest commit
git status
On branch main
nothing to commit, working tree clean
The working tree means the actual files and directories in the repository apart from .git.
Example 17: git log shows the commit history backward from the current point
git log
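The output takes roughly this shape; the full hash, author, and date will reflect your own repository and configuration:

commit 3bab6a6... (HEAD -> main)
Author: <your user.name> <your user.email>
Date:   <commit date>

    <your commit message>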
As shown above, the commit hash, author, date, and message can be seen from the previous
commit. Notice the full commit hash includes the seven characters 3bab6a6 you saw in Example
15.
HEAD refers to what Git is looking at in the current repository. Think of it like a pointer to the
latest snapshot of the current branch. HEAD currently points to main, the name of the current
branch. main currently points to the commit you’ve just made.
If you made another commit, main would now point to that new commit. The new commit
would, in turn, point to the last one, forming a chain of history and changes.
The single arrow is a pointer to another object, and the double arrow points to the parent of the
commit. In these diagrams, the rightmost commits are the ‘oldest’ or, more correctly, the furthest
ancestors. Using this logic, the current state of your Git repository should be:
HEAD and main are references, or refs. 3bab6a6 is a shortened commit hash and represents a
commit object. HEAD is a special ref that usually points to the head (tip) of a branch, such as
main.
Example 18: Making sure there are no uncommitted changes with git status
git status
On branch main
nothing to commit, working tree clean
Display all branches in the repository to check if your branch already exists:
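List them with git branch:

git branch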
* main
The asterisk * next to main tells you what branch you’re currently on. If a branch exists on
a remote repository (for example on GitHub) but doesn’t show up locally, try git fetch to
pull those branches from the remote repository. git fetch gets all the history from a remote
repository. If you’ve cloned another repository, your local one already has a remote pointer called
origin, which is the URL of the remote you cloned. If you’ve initialized a new Git repository, it
won’t have any remotes, yet.
In this example, the Git repository is only local and no remote branches exist. The command git
checkout -b creates a new branch and switches to it.
Example 20: Creating and switching to a new branch with git checkout -b
git checkout -b develop
Example 21: The status shows you’re now on a new branch called develop
git status
On branch develop
nothing to commit, working tree clean
Example 22: Changing to a different branch doesn’t affect the working tree
Get-ChildItem
Directory: C:\Repo\Project
In the above commands, a new branch was created called develop. git status shows that the
active branch is develop, and the file system is unchanged.
Take a look at the diagram again. The repository now looks like this:
Switching to another branch only changes what HEAD points to. Currently, both branches have
the same history (they point to the same commit) since you created develop from main.
For this example, make some changes to differentiate this branch from main, as if you were adding a feature on the develop branch.
Example 23: Changing the working tree while on the develop branch
Add-Content ./test.txt "This is a test v2.0"
New-Item -Type File -Path ./newassets.lib
Directory: C:\Repo\Project
Inspect the working tree and observe the new file and the change to the size of test.txt.
Directory: C:\Repo\Project
In the example, a new line is added to the test file and a new file called newassets.lib is created.
In order for the changes to be tracked in the branch, the changes need to be added and committed.
Running git status helps you if you don’t know what to do.
Example 26: git status also provides tips about what to do next
git status
On branch develop
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: test.txt
Untracked files:
(use "git add <file>..." to include in what will be committed)
newassets.lib
no changes added to commit (use "git add" and/or "git commit -a")
Unlike before, there are both untracked files and changes that haven’t been staged. All files in the directory can be added to the index with git add ., or added individually by file name. In this example, all new files and changes are included:
Example 27: Staging all changes in the working tree into the index
git add .
git status
On branch develop
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
new file: newassets.lib
modified: test.txt
As the output suggests, the next step is to commit those changes to the current branch:
Notice the new short hash 2bc6d24 is different because the data has changed. Check the status
once again with git status.
Example 29: The status after the commit shows the changes are now incorporated in the branch
git status
On branch develop
nothing to commit, working tree clean
After switching back to main with git checkout main, two things have happened this time. First, HEAD points to main again. Second, because the branches now have different histories, Git changes all files in the working tree (the actual files in the repository) to match the state saved in the latest commit of main; in this case, that’s the state at commit 3bab6a6. Prove this by looking at the contents of the working tree.
# Example 31a:
Directory: C:\Repo\Project
# Example 31b:
This is a test file
In the example, newassets.lib no longer exists and test.txt has been reverted to its previous state
where the ‘v2.0’ line wasn’t included. The changes you made on the develop branch are gone,
but not lost. These are safely saved in the commit (2bc6d24) you made on develop.
git branch
  develop
* main
# Example 33a: Check out the 'main' branch
git checkout main
# Example 33b: Merge the changes from 'develop' into the current branch
git merge develop

# Example 33a:
Switched to branch 'main'
# Example 33b:
Updating 3bab6a6..2bc6d24
Fast-forward
newassets.lib | 0
test.txt | 1 +
2 files changed, 1 insertion(+), 0 deletions(-)
create mode 100644 newassets.lib
Fast-forward in the output is the merge mode that Git used. Since there are no changes on main
and it’s an ancestor of develop, Git only needs to add the additional commit from develop to the
history of main. The branch now looks like this:
main and develop now have the same histories. Inspect the working tree to prove this.
# Example 34a:
Directory: C:\Repo\Project
# Example 34b:
This is a test file
This is a test v2.0
All the changes from develop have been merged into the main branch. In the above example, you
can see that newassets.lib was created and that test.txt contains the ‘v2.0’ update.
If main has since gained its own commit (89cc2e5 in this example) that isn’t part of develop, Git can’t fast-forward main, because the 89cc2e5 commit would be lost, or orphaned. If the two branches have changed different files, or different lines of the same file, they’re still merge-compatible: Git creates a new commit with changes from both branches. This commit has a new hash since the data differs from that of either parent commit. Check this commit by looking at the last log entry.
The merge commit 39d4b2c has two parents, 2bc6d24 from develop and 89cc2e5 from main.
You’ve now reconciled the two histories that had diverged. If develop was a feature branch, it
would now be safe to delete it. All the history from develop is reachable from main, so no commits
would be orphaned by deleting the branch.
When the two branches have changed the same lines of the same file, Git can’t reconcile them automatically. Running git merge then pauses the merge, allowing you to resolve the conflicts manually. Git inserts both sets of changes into the conflicting file, with separators to help you identify where they’re from.
To try this, reset main to the state it was in before the merge again.
# Example 39a:
[main 6d442df] Added a different second line
1 file changed, 1 insertion(+)
# Example 39b:
This is a test file
This is a different line
This time, though, the two commits 2bc6d24 from develop and 6d442df from main conflict with
each other. Run git merge again and observe that a merge conflict results.
Example 41: Attempting to merge develop into main with conflicts in test.txt
# Example 40a: Try to merge
git merge develop
# Example 40a:
Auto-merging test.txt
CONFLICT (content): Merge conflict in test.txt
Automatic merge failed; fix conflicts and then commit the result.
# Example 40b:
On branch main
You have unmerged paths.
(fix conflicts and run "git commit")
(use "git merge --abort" to abort the merge)
Changes to be committed:
new file: newassets.lib
Unmerged paths:
(use "git add <file>..." to mark resolution)
both modified: test.txt
# Example 40c:
This is a test file
<<<<<<< HEAD
This is a different line
=======
This is a test v2.0
>>>>>>> develop
Don’t panic! There are a couple of approaches to resolving merge conflicts. The first is to manually choose the data you want to keep and edit the file: pick the line to keep and remove the rest, including the three separators. Alternatively, you can keep all the changes in a file from one branch by checking it out.
Example 42: Resolving the conflict by keeping the test.txt changes from develop
# Example 41a:
git checkout --theirs ./test.txt
git add ./test.txt
# Example 41b:
git status
# Example 41a:
Updated 1 path from the index
# Example 41b:
On branch main
All conflicts fixed but you are still merging.
(use "git commit" to conclude merge)
Changes to be committed:
new file: newassets.lib
modified: test.txt
Some important terminology to know is ours and theirs. ours refers to the changes from the
current branch, which is main in this case. theirs refers to the changes from the incoming branch,
which is develop in this case. If you wanted to keep the changes from main, you would run git
checkout with --ours, instead. git add adds the resolved changes to the index so that Git can
complete the merge.
As you can see from the output of git status, there’s only one more step. To complete the
merge and create the merge commit, run git commit.
Git prepares a merge commit message, so you don’t need to use -m 'message' unless you want
to override it. The message opens in your chosen text editor so you can interactively add any
notes about the conflict resolution. Once you close this, Git creates the merge commit, reconciling
all the changes you picked.
Your repository now looks like this:
The merge commit 86490e9 has two parents as usual, but its contents reflect your resolution: the lines you didn’t keep are gone.
Another option for resolving merge conflicts is to use a merge tool such as vimdiff,⁴
but this chapter doesn’t cover these.
By default, git stash push only stashes changes to files the repository is already tracking. To stash untracked files too, include the -u switch. The -m '<message>' parameter sets a custom message for the stash entry.
When you want the changes back, you can run git stash pop to restore them and remove that
temporary commit from the stash list. Alternatively, git stash apply restores the changes but
leaves the stash entry in the list.
The stash list acts like a stack, so each time you run git stash push, Git adds it to the top of
the list. You can, however, address individual stash items using the syntax stash@{n} where n
is the number in the list, starting at zero.
For example, git stash apply 'stash@{1}' restores the contents of the second-most-recent stash entry but leaves it in the list. To view all your stash entries, use git stash list.
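As a hedged sketch of a typical round trip (the stash message here is just an example):

git stash push -u -m 'WIP changes to test.txt'   # stash tracked and untracked changes
git stash list                                   # view the stack of stash entries
git stash pop                                    # restore the latest entry and drop it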
• Working Tree: The actual files and subdirectories in the repository except .git.
• Index: A list of files with uncommitted changes that have been staged using git add.
• HEAD: A pointer to the latest commit (the tip) of a branch, for example main or develop.
⁴Git developers. (2022, Jul. 04). vimdiff Documentation. Git Documentation. [Online]. Available: https://git-scm.com/docs/vimdiff.
[Accessed: Jul. 16, 2022].
git reset changes where a branch pointer is pointing; the example below should help explain.
Imagine a branch called main has a commit history of three commits A, B, and C. The changes
in those commits are represented as (A), (B), and (C).
After running git reset --soft B, main points to B. Running git status shows the changes
(C) as staged, and running git commit creates a new commit with the same changes (C). The
changes (C) show as staged because the index wasn’t reset, and still contains them. A soft reset
simply changes where a branch head is pointing. This can be useful if you want to remove a file
that was erroneously committed or change the commit message.
A Soft Reset
C is now an orphaned commit, meaning no refs or commits point to it. You can still access it
using its hash, but Git will eventually delete it as part of its garbage collection (GC).
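A minimal sketch, assuming B stands in for a real commit hash or ref and the commit message is hypothetical:

git reset --soft B                        # move the branch head to B; (C) stays staged
git status                                # the changes (C) show as staged
git commit -m 'Recommit C, new message'   # creates a new commit containing (C)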
Returning to the three commits example, A, B, and C, running git reset --mixed B does two things. The branch head points to B, and the index is reset to match B. If you run git commit now, nothing happens, as there are no differences between the branch head and the index.
The files in the working tree are unaffected, so the changes (C) still exist in your working
directory, but are not staged. Assuming you add the same files with git add and then commit
with git commit, there’ll now be a new commit with the changes (C). A mixed reset changes
the branch head and index, which is useful for fixing issues in a commit or removing a file that
was erroneously committed.
A Mixed Reset
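A minimal sketch of the same flow for a mixed reset, again with B as a placeholder:

git reset --mixed B   # move the branch head and reset the index to B
git status            # the changes (C) are present but unstaged
git add .             # re-stage what you want to keep
git commit            # creates a new commit containing (C)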
Finally, start over again with the three commits example, A, B, and C. Running git reset --hard B does three things. The branch head points to B, the index matches B, and the files in the working directory are reverted to their state at B. All later changes to files in the working directory are lost. Run git status to ensure there aren’t changes that you may want to keep
before doing a hard reset. A hard reset changes the branch head, index, and working tree. This
is useful for getting back to a commit with a known good state.
A Hard Reset
Only tracked files in the working tree are reset during a hard reset. Run git add --all first to ensure all files are tracked and therefore reset. Alternatively, use git clean -f to delete all untracked files in the working tree.
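A minimal sketch, with B again standing in for a real commit hash or ref:

git status            # check first for changes you want to keep
git add --all         # optionally track everything so the reset covers it
git reset --hard B    # branch head, index, and working tree now all match B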
git log

commit 6d442df13e3d5639572e3e66924355d467adbea4
Author: John Doe <[email protected]>
Date: Thu Mar 24 14:10:29 2022 -0400

    Added a different second line

commit 3bab6a6a6886f8b5e5f6a56dd4c01d5812cd2006
Author: John Doe <[email protected]>
Date: Sat Mar 12 15:35:33 2022 -0500

    added my first file
Using git log, you can see all previous commits in the current branch.
This is where good commit messages come in handy. The commit to roll back to is the ‘added
my first file’ commit with hash starting 3bab6a6.
Only the first few unique characters in the hash are needed; enough that Git can be sure
which commit hash you mean.
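The reset itself isn’t reproduced above; based on the commit identified in the log, it’s a hard reset to that hash, along the lines of:

git reset --hard 3bab6a6
HEAD is now at 3bab6a6 added my first file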
Inspect the working tree to confirm that the files in it were also reverted.
Example 46: Inspecting the working tree after the hard reset
# Example 46a: List the files in the working tree
Get-ChildItem
# Example 46a:
Directory: C:\Repo\Project
# Example 46b:
This is a test file
In this example, the local Git repository is pushed to a remote repository on GitHub. Once pushed to GitHub, the repository acts as a backup of your local files and a place where others can contribute.
To start, you’ll need a GitHub account. Most services provided by GitHub are free for public
repositories and many are also free for private ones.⁵ You can create an account⁶ on the GitHub
website to get started. Once logged in, you’ll need to create a new repository. On the homepage⁷,
click the plus + icon in the top-right corner, and then ‘New repository’:
Choose a name for the repository. This should usually be the same or similar to the name of the
directory that contains your local one. The following examples assume you called the repository
test.
Set your project to Private if you don’t want others to see it. Some advanced features are disabled
for private repositories with a free account.
There are normally options such as ‘Add a README file’ or ‘Add .gitignore.’ Ignore these, as they
initialize the repo with files and a default branch. You don’t want this since your local repository
is already initialized.
⁵GitHub. (2022). Pricing: Plans for every developer. GitHub. [Online]. Available: https://github.com/pricing#compare-features.
[Accessed: Jul. 16, 2022].
⁶https://github.com/signup
⁷https://github.com/
Click ‘Create repository’ to continue. On the next page, there are steps for initializing and pushing
a local repository. Find the steps that say ‘…or push an existing repository from the command
line.’ Using these steps, push your local repository to the remote GitHub one.
In the examples ahead, replace UserName with your GitHub username. If you’ve used a
different repository name than test, replace this too.
Example 47: Adding your new GitHub repository as a remote to your local one and pushing main
git remote add origin https://github.com/UserName/test.git
git branch -M main
git push -u origin main
The example first adds a remote repository and calls it origin. The second command renames
the current branch to main, to ensure a branch with that name exists. This shouldn’t have any
effect if you’re using the same branch names as in the examples.
The final step is to push the main branch to the remote repository origin. The additional switch
-u to git push sets your local branch up to track the status of the same branch in the remote
repo. The term for this is setting the upstream branch.
GitHub now contains the main branch, and your local main branch tracks the remote copy, but
the develop branch is missing. To push the develop branch, run git push with develop or specify
all branches with git push --all origin. Use -u again to make sure the local develop branch
tracks the one on GitHub.
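A hedged sketch of that push, assuming the remote is named origin as above:

git push -u origin develop

GitHub’s response to a push like this typically includes a link for creating a pull request, which is the output the next paragraph refers to.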
Information in the output points out that there are changes in develop that aren’t in main; this
is expected.
When it’s time to merge changes from develop into main, the link provided allows you to create
a pull request on GitHub. The pull request is like your local merge, but happens on the remote
repository, using the web interface on GitHub. If you were to create, then merge a pull request
from develop into main, the remote main branch and local main branch wouldn’t be at the same
commit anymore.
Changes made locally need to be pushed to the remote repository, and changes made remotely
need to be pulled to your local repository. The following example shows you how to pull changes
from the remote repository into your local one.
Example 49: Pulling changes to the remote main branch into the local one
git checkout main
git pull
Updating 3bab6a6..4ea0790
Fast-forward
newassets.lib | 0
test.txt | Bin 21 -> 42 bytes
2 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 newassets.lib
Notice that you didn’t need to pass the remote name origin to git pull. This is because the branch is set up to track the remote main branch of origin. If it wasn’t, you could specify the remote and branch explicitly, for example git pull origin main.
Under the hood, git pull runs two commands. The first, git fetch, downloads all the changes
from the remote repository and stores them separately from your local changes. The second is
usually git merge, in order to merge the changes from the remote with your local ones.
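As a rough, hedged equivalent of the pull above, you could run the two steps yourself:

git fetch origin          # download remote history into the origin/* tracking refs
git merge origin/main     # merge the remote-tracking branch into your local main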
Likewise, when changes are made locally, those need to be pushed to the remote repository. In
the following examples, you’ll add changes to the develop branch, stage and commit them, and
push them to the remote repository.
Example 51: Adding a new line to test.txt and committing the change
Add-Content ./test.txt "This is version 3.0"
git commit -a -m "v3.0 release"
Note the -a switch passed to git commit. This automatically stages all changes to tracked files
before the commit. Now your local branch is one commit ahead of the remote one. Push the
changes to the remote branch.
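The push itself isn’t reproduced here; because the upstream is already configured, it only needs to be:

git push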
Once again, you don’t need to specify the remote and branch names since develop is already set
up to track develop on origin. Inspect the test.txt file one more time to observe the three lines.
Example 53: Inspecting the three lines in test.txt after making changes and pushing them
Get-Content ./test.txt

This is a test file
This is a test v2.0
This is version 3.0
You can view the remote configuration of your repository by searching the local config.
remote.origin.url https://github.com/UserName/test.git
remote.origin.fetch +refs/heads/*:refs/remotes/origin/*
branch.main.remote origin
branch.develop.remote origin
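The exact command isn’t shown in this extract; one hedged way to produce output like the above is to query the local config for remote and branch entries:

git config --get-regexp '^(remote|branch)\.'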
Example 55: Squashing all the changes since the first commit into a single commit
git checkout main
git reset --soft 3bab6a6
git commit -m 'All changes in one commit'
Example 56: Inspecting the main branch after changing its history
git status
On branch main
Your branch and 'origin/main' have diverged,
and have 1 and 4 different commits each, respectively.
(use "git pull" to merge the remote branch into yours)
The output shows that the histories are now different. The steps below reset the local branch
back to the last commit on the remote one. First, update your local copy of the remote repo.
Fetching origin
POST git-upload-pack (165 bytes)
From https://github.com/JohnDoe/test
= [up to date] main -> origin/main
= [up to date] develop -> origin/develop
The -v switch means verbose and shows you the fetch result. Now, you can hard-reset the branch
using the ref that points to the remote copy of main.
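A hedged sketch of that hard reset, using the remote-tracking ref described above:

git reset --hard origin/main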
You’ve now rewritten your local history to match that of the remote branch.
Example 59: Checking the status after the remote hard reset
git status
On branch main
Your branch is up to date with 'origin/main'.
This is very useful if something goes wrong in your local repository. It also shows that you can
use git reset with refs as well as commit hashes.
1.12 Conclusion
Git is a powerful source control tool. You can use it in simple pipelines with ease or for complex merges in larger projects. Git is essential for developing with others and is a great way to keep track of your code over time.
It’s also a tool that lets you participate in open source projects, and it can save you headaches when you’re working on complex scripts, programs, documents, and more. SCM tools like Git are essential in the modern IT environment. Keep learning about Git and take advantage of the many resources available online.
¹¹https://git-scm.com/docs/git#_git_commands
¹²https://training.github.com/
¹³https://github.com/signup
¹⁴https://git-scm.com/download/win
¹⁵https://git-scm.com/download/linux
¹⁶https://git-scm.com/download/mac
¹⁷https://git-scm.com/book/en/v2
¹⁸https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras
2. Code Reviews
Engineers often see code reviews as part of running a formal software development team or
project, or as part of a Software Development Lifecycle (SDLC)—not a process that also applies
to teams or projects focused on IT automation. The sections ahead aim to dispel that thinking
and explore how code reviews can improve the quality of your automation.
The thought of reviewing someone else’s code and giving feedback can be overwhelming, and it’s often easier said than done. This chapter explores performing a code review, setting up code reviews within your project or team, best practices used within the community, and tools you can use to help you review code.
1. Find and reduce bugs and issues within code, preventing issues from occurring in production environments.
2. Improve the sharing of knowledge between team members.
3. Fulfill change management and change review processes under ITIL¹, ISO 20000², and other
standards.
4. Increase the sense of mutual responsibility for systems between team members.
5. Find better solutions to problems.
6. Improve the overall quality of the codebase.
• File organization.
• Naming of variables, functions, and methods.
• Indentation.
• Comments.
• Declarations.
• Statements.
• Using blank space.
• Capitalization.
• Code structure, and much more.
When working within a team, it’s critical that you reach a consensus on how code is written. This
consensus ensures that all code, no matter who writes it, follows a standard pattern or progres-
sion. This improves the readability and understandability of the code, making contributions and
maintenance easier. Anyone in the team should be able to pick up another’s code, understand it,
review it, and change it.
It’s important that you record the rules and practices you’re following within your project, team,
or organization. They should be discoverable, allowing anyone to read and understand what
conventions they’re to use when writing code.
By developing conventions, you’re making the task of understanding code significantly faster,
and dramatically reducing the time required for those who review code to understand it and
then provide comments and feedback.
When it comes to PowerShell, developers are lucky that they can stand on the shoulders of giants.
The PowerShell Best Practices and Style Guide³ was developed by Don Jones, Matt Penny, Carlos
Perez, Joel Bennett, and other members of the PowerShell Community. These best practices have
been developed over the last decade and have become a quasi-standard within the PowerShell
community.
It’s recommended to fork the repository, then adapt and change the guidelines to meet the
requirements of your project, team, or organization.
Once you’ve defined your code conventions, link to them from your README files or contribu-
tion guides for your projects.
³https://github.com/PoshCode/PowerShellPracticeAndStyle
2.2.2 Define the Code Review Process for Your Team or Project
When defining code review processes, always try to think of the what, who, and when.
All code should be reviewed before it’s merged into main (your main or trunk branch) and
deployed into production. Code shouldn’t be excluded from review based upon the author’s
experience, role in the team or organization, or their seniority.
Branch policies must be configured to prevent changes from being pushed directly into sensitive branches, such as main. Code repositories use branch policies to protect essential branches (e.g., main or master) from accidental or intentional changes.⁴ ⁵
Everyone in your team or project must be involved in code reviews. Being involved in code
reviews is a great way to get experience and form a deeper understanding of a codebase. By
involving junior team members, they’re presented with the opportunity to learn from others,
even if more senior members might re-review the code.
A great piece of advice is to ensure that you don’t have a single person performing all the code
reviews for your team. This can lead to a huge bottleneck and be a source of delays and pain for
your team. It also increases the chances of mistakes making it into production. Remember, even
the most senior team members make mistakes.
The common conception is that code reviews should be performed before code is pushed into
production. This thinking is driven by the desire to protect production code and comes from
change management processes like ITIL.
There is, however, an alternative perspective of when reviews should be performed. Many
developers believe in moving processes—like reviews and security checks—as far to the left as
possible, with code reviews performed when merging into a production-like environment.
⁴Microsoft. (2022, Apr. 30). Branch policies and settings. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/azure/
devops/repos/git/branch-policies?view=azure-devops&tabs=browser. [Accessed: Sep. 15, 2022].
⁵GitHub. (2022, Aug. 18). About protected branches. GitHub Docs. [Online]. Available: https://docs.github.com/en/repositories/
configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches. [Accessed:
Sep. 15, 2022].
Consider a situation where you have development, testing, and production environments, with
the testing environment being identical to production. In this situation, you could perform code
reviews when code is pushed into the testing environment. Some might argue that code should
be re-reviewed before being pushed into production. However, if the code hasn’t changed, and
all tests and validations passed in the first review, there’s little value in an additional one. The
exception here is when screening for hidden mistakes (that may only generate errors later) and
edge-cases.
If the purpose of review before production is one of change control, apply rules on who can push
changes to production, or who can approve these changes. You should then put controls within
your deployment/release process (such as control gates).
To reduce the human effort spent on code reviews, it’s recommended that code reviews aren’t
performed until after all build and testing tasks are completed. As the chapter discusses later,
these tasks may include automated checks around code suitability and, as such, should pass
before a human reviews the code.
1. Why is this change being made? Does the change resolve an issue?
2. What’s the context of the change within the bigger picture?
3. Does this change behave how the author intended? Are there potential situations that may
cause the code to behave unexpectedly? Are there any unexpected outcomes to running the
code?
4. Is the code overly complex? Could the issue be solved in a simpler way?
5. Has the change been tested appropriately? Do any tests need to be updated? Have all tests
passed?
6. Are there appropriate comments? Are they clear and useful?
7. Has any comment-based help or related documentation been updated?
8. Does the code follow the team/project conventions?
9. Are there any potential security issues introduced with this change?
10. Are errors handled or thrown appropriately?
A what-if? approach is helpful here. Make educated guesses about any knock-on (downstream)
effects that a change could cause. The more familiar you are with a codebase, the more accurately
you can predict how a change will interact with existing code.
1. Each change or pull request should be for a single bug, issue, or feature.
2. Limit code changes to under 400 lines—under 200 is better still.
So, what’s the driver of these rules? The processing ability of the human brain.
Ensuring a pull request solves a single bug, issue, or feature means a reviewer needs to keep track
of only a single purpose for the code—think “I need to eat an apple” compared to “I need to eat
an apple, tie my shoelaces, and sing at the same time.”
There’s also evidence that human performance in detecting issues and defects decreases markedly
with larger code changes.⁶
Introducing new features to an existing code base, a new function in a PowerShell module, for
instance, can often make it hard to follow these rules. One approach is to break up new features
where you can. For instance, if a new feature requires two new functions, you might break those
up into different pull requests. If one function depends upon the other, then you can work to
get the dependency’s review completed first. You could create a third pull request for changes to
other functions that should interact with these new ones if needed.
Code reviews require significant attention to be spent, and it can be difficult for individuals
to maintain their concentration for longer than 60 minutes. With interruptions and meetings,
spending more than this may not be possible. There’s some evidence that most defects can be
found within 60 to 90 minutes.⁷ If after an hour you’ve found several major defects that need
to be addressed, consider that any other issues still in the code could be found in the next code
review. They may also be resolved as part of resolutions for defects found already.
1. Feedback provided should be constructive. The reason an issue has been raised should be
clear.
2. Where possible, ask open-ended questions, as they encourage deeper thought and understanding of the issues raised. Don’t tell the author how to fix an issue; guide them to the correct solution.
3. Don’t use strong or opinionated statements.
Don’t underestimate the impact that code reviews can have on people’s confidence. Remember to applaud and praise good solutions to problems. Not all comments need to flag issues; you can also leave praise.
Several years ago, a junior developer in my team had spent several days working through
a difficult bug fix. Several iterations of the code were needed before the issue could be
solved. I remember how ecstatic they were when the only comment in the review was
“Everything looks good, there are no issues. Great job!!”
Always remember that everyone has a different experience. While some may have in-depth
automation knowledge, others may only be starting on their journey.
However, your team’s style guide notes that values within a hash table should be aligned.
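The original code sample isn’t reproduced here; as a hypothetical illustration (the property names are invented), the submitted and expected forms might look like:

# Submitted
$config = @{
    Name = 'Server01'
    Port = 8080
    TimeoutSeconds = 30
}

# Aligned, as the style guide expects
$config = @{
    Name           = 'Server01'
    Port           = 8080
    TimeoutSeconds = 30
}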
This issue doesn’t affect the execution of the code; however, the code doesn’t meet the code style
expectations of the team.
As a reviewer, you need to keep a balance with nit-picking comments. No one wants to see 100
comments for 200 lines of code. Some recommendations are:
1. If you see a particular nit-pick repeated throughout the code, leave a single comment. Don’t
add a comment to every instance.
2. There can be value to marking and finding nit-picks. Consider prefixing comments with
[nit-pick] so that requesters and other reviewers can quickly differentiate critical
comments from non-critical nit-picks.
You can reduce nit-picks by using automated validation tools such as PSScriptAnalyzer⁸ and
testing suites like Pester⁹. You can learn more about Pester and testing in the Testing part of the
book. The PowerShell extension within Visual Studio Code¹⁰ also includes many settings that
can aid with sticking to a particular coding style.
If you’re struggling with the number of nit-picks within your team or project, it could be a sign
that the tooling and standards maintained within the team need to be reviewed and discussed.
Pull request templates are typically created at a repository level and written in Markdown. While
templates can be placed in different locations within a repository, a good option is the docs folder,
as this folder is supported by both GitHub and Azure DevOps.¹¹ ¹²
Other common locations include the repository root, or a service-specific folder such as .github
or .azuredevops. Many services also support additional templates by placing files in a folder,
such as .github/PULL_REQUEST_TEMPLATE/ or .azuredevops/pull_request_template/.
Here’s an example of a basic pull request template for PowerShell code:
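The book’s original template isn’t reproduced in this extract; a hedged sketch of what such a Markdown template might contain is:

## Summary

Describe the change and the issue or feature it addresses.

## Checklist

- [ ] The code follows the team style guide
- [ ] PSScriptAnalyzer reports no new warnings or errors
- [ ] Pester tests have been added or updated, and all pass
- [ ] Comment-based help and related documentation have been updated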
Completing the checklist provides an opportunity for the requester to stop and think about their
code. Have they performed the necessary tests and checks? Are the tests all passing? Have they
followed the defined style guide?
Pull request templates aren’t only for the requester—they can be used by the reviewers as well.
Reviewers may have several steps to be completed as part of the review; these could be included
in the template and then updated or marked as completed by the reviewer.
If you’re the change requester, don’t be afraid to ask questions about the feedback that’s been
provided to you. Don’t think of reviews as a blocker to production; they’re an opportunity for
you to learn and improve your craft.
Don’t rely only on text communication on your favorite source code platform. There’s value
in having face-to-face discussions, be they in-person or virtual. When a requester is new to a
team or project, or there are a substantial number of questions and issues, try to provide—as
a reviewer—the opportunity for dedicated time with the requester. This provides them with an
opportunity to ask questions and more opportunities to share knowledge.
If you lead or manage a team, be vigilant that code reviews don’t become a back-and-forth
slugging match. This isn’t productive and doesn’t help anyone.
2.5.1 PSScriptAnalyzer
PSScriptAnalyzer is a powerful module that can help you write better PowerShell code. It’s a
static code checker for PowerShell modules and scripts, which checks the quality of the code
specified against a set of rules. These rules are based on best practices identified by the PowerShell
Team and the community. Rules include checks for uninitialized variables, usage of sensitive
types, the use of Invoke-Expression, and many more. PSScriptAnalyzer can also check for compatibility issues between PowerShell versions, and it includes code formatting support.¹⁵ If you aren’t using PSScriptAnalyzer, you’ve been missing out.
When you execute PSScriptAnalyzer using Invoke-ScriptAnalyzer, it’ll check your specified
PowerShell code using a default collection of built-in rules.
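A minimal, hedged invocation (the script path here is a placeholder):

Invoke-ScriptAnalyzer -Path ./MyScript.ps1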
¹³https://pester.dev/
¹⁴https://learn.microsoft.com/en-us/powershell/utility-modules/psscriptanalyzer/overview
¹⁵Microsoft. (2022, Mar. 23). PSScriptAnalyzer module. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/
powershell/utility-modules/psscriptanalyzer/overview. [Accessed: Aug. 12, 2022].
PSScriptAnalyzer includes some pre-defined rule sets, which you can specify using the -
Settings parameter.
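For example, a hedged sketch using the built-in CodeFormatting preset:

Invoke-ScriptAnalyzer -Path ./MyScript.ps1 -Settings CodeFormatting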
There are often situations where you’ll want to change the behavior of PSScriptAnalyzer and Invoke-ScriptAnalyzer. While you can specify and control the execution of Invoke-ScriptAnalyzer with parameters, you can also define these settings within a .psd1 settings file.
¹⁶Microsoft. (2021). PSScriptAnalyzer - Engine/Settings files. PowerShell/PSScriptAnalyzer on GitHub. [Online]. Available:
https://github.com/PowerShell/PSScriptAnalyzer/tree/5797a04a61228eb3a64287d56413a035d25191d5/Engine/Settings/. [Accessed: Aug.
12, 2022].
¹⁷Microsoft. (2021, Jan. 06). PSScriptAnalyzer - CodeFormatting.psd1. PowerShell/PSScriptAnalyzer on GitHub. [Online]. Available:
https://github.com/PowerShell/PSScriptAnalyzer/blob/master/Engine/Settings/CodeFormatting.psd1. [Accessed: Aug. 12, 2022].
¹⁸Microsoft. (2021, Jan. 06). PSScriptAnalyzer - CodeFormattingStroustrup.psd1. PowerShell/PSScriptAnalyzer on GitHub. [On-
line]. Available: https://github.com/PowerShell/PSScriptAnalyzer/blob/master/Engine/Settings/CodeFormattingStroustrup.psd1. [Ac-
cessed: Aug. 12, 2022].
¹⁹Microsoft. (2022, Mar. 23). Using PSScriptAnalyzer. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/
powershell/utility-modules/psscriptanalyzer/using-scriptanalyzer. [Accessed: Aug. 12, 2022].
This is better for long-term maintainability in your code review process. You then call Invoke-ScriptAnalyzer, specifying the path to the file using the -Settings parameter.
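A hedged sketch of a minimal settings file and its use (the file name and excluded rule are illustrative):

# PSScriptAnalyzerSettings.psd1
@{
    ExcludeRules = @('PSAvoidUsingWriteHost')
}

Invoke-ScriptAnalyzer -Path ./MyScript.ps1 -Settings ./PSScriptAnalyzerSettings.psd1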
You can specify which rules to include using the IncludeRules element.
IncludeRules and ExcludeRules both support the use of wildcards. In the example below, only
rules starting with PSDSC (the included DSC rules) are included.
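That settings file might look like this (a hedged reconstruction of the example the text refers to):

@{
    IncludeRules = @('PSDSC*')
}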
Every PSScriptAnalyzer rule has an associated severity, and you can specify which rules are
included based on their severity.²⁰ The example below includes only rules whose alert has a
severity of error or warning.
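A hedged reconstruction of that example:

@{
    Severity = @('Error', 'Warning')
}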
²⁰Microsoft. (2022, Mar. 23). PSScriptAnalyzer rules and recommendations. Microsoft Docs. [Online]. Available: https://
learn.microsoft.com/en-us/powershell/utility-modules/psscriptanalyzer/rules-recommendations. [Accessed: Aug. 12, 2022].
You can also write your own rules for PSScriptAnalyzer as PowerShell modules, and include
these in your PSScriptAnalyzer settings.²¹ The CustomRulePath element provides a mechanism
to specify the location of PowerShell modules containing your custom rules.
If you want to include the default rules together with your custom rules, you can set the
IncludeDefaultRules element to $true.
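A hedged sketch combining both elements (the module path is hypothetical):

@{
    CustomRulePath      = @('./Modules/MyCustomRules')
    IncludeDefaultRules = $true
}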
Some rules have definable behavior, allowing you to change what they’ll look for based on your
own requirements. PSUseConsistentIndentation is an example of one such rule. This rule
allows you to specify what kind of indentation is used (space or tab), the indentation size in
the number of space characters, and how indentation applies to pipelines.
You can define these rules through settings using the Rules element:
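A hedged sketch of such a settings block (the specific values are illustrative):

@{
    Rules = @{
        PSUseConsistentIndentation = @{
            Enable              = $true
            Kind                = 'space'
            IndentationSize     = 4
            PipelineIndentation = 'IncreaseIndentationForFirstPipeline'
        }
    }
}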
²¹Microsoft. (2022, Mar. 23). Creating custom rules. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/powershell/
utility-modules/psscriptanalyzer/create-custom-rule. [Accessed: Aug. 12, 2022].
²²https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell
²³Microsoft. (2022, Aug. 04). User and Workspace Settings. Visual Studio Code Docs. [Online]. Available: https://code.visualstudio
.com/docs/getstarted/settings. [Accessed: Aug. 12, 2022].
64 // Default: False
65 "powershell.codeFormatting.useConstantStrings": true,
66
67 // Use correct casing for cmdlets.
68 // Default: False
69 "powershell.codeFormatting.useCorrectCasing": true,
70
71 // Adds a space after a separator (',' and ';').
72 // Default: True
73 "powershell.codeFormatting.whitespaceAfterSeparator": true,
74
75 // Adds spaces before and after an operator ('=', '+', '-', etc.).
76 // Default: True
77 "powershell.codeFormatting.whitespaceAroundOperator": true,
78
79 // Adds a space between a keyword and its associated opening brace.
80 // Default: True
81 "powershell.codeFormatting.whitespaceBeforeOpenBrace": true,
82
83 // Adds a space between a keyword (if, elseif, while, switch, etc.)
84 // and its associated conditional expression.
85 // Default: True
86 "powershell.codeFormatting.whitespaceBeforeOpenParen": true,
87
88 // Removes redundant spaces between parameters.
89 // Default: False
90 "powershell.codeFormatting.whitespaceBetweenParameters": true,
91
92 // Adds a space after an opening brace ('{')
93 // and before a closing brace ('}').
94 // Default: True
95 "powershell.codeFormatting.whitespaceInsideBrace": true,
96
97 // Enables real-time script analysis from PowerShell Script Analyzer.
98 // Default: True
99 "powershell.scriptAnalysis.enable": true
100 }
Settings elements that begin with powershell. are for the PowerShell extension
specifically.
In the example, you can see that the PowerShell extension is recommended and a specific brace
and indentation style has been specified, as have other code format options.
You can change many of these settings, including extensions settings, in the Visual Studio Code
interface. In the toolbar, navigate to File → Preferences → Settings or use the keyboard
shortcut Ctrl + , (comma).
The user-level settings.json file can be found in the following locations, depending on your
platform:
• Windows: %APPDATA%\Code\User\settings.json.
• Linux: $HOME/.config/Code/User/settings.json.
• macOS: $HOME/Library/Application Support/Code/User/settings.json.
²⁴https://github.com/PoshCode/PowerShellPracticeAndStyle
²⁵https://learn.microsoft.com/en-us/powershell/utility-modules/psscriptanalyzer/overview
²⁶https://www.powershellgallery.com/packages/PSScriptAnalyzer/
²⁷https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/creating-a-pull-request-
template-for-your-repository
²⁸https://learn.microsoft.com/en-us/azure/devops/repos/git/pull-request-templates
²⁹https://pester.dev/
³⁰https://code.visualstudio.com/
³¹https://learn.microsoft.com/en-us/powershell/scripting/dev-cross-plat/vscode/using-vscode
³²https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell
³³https://www.itil.org.uk/what-is-itil
³⁴https://www.iso.org/standard/70636.html
II PowerShell Testing
3.1.1 Arrange
Arrange is where the test conditions are configured and set up. Any test variables or required
mocked systems can be created here if needed.
3.1.2 Act
Act is the process of invoking the actual test itself. This would call the PowerShell function being
tested directly and store the result (if there is one) in a variable.
3.1.3 Assert
Assert is where we check the test results or another condition to determine if it’s a pass or a fail.
If a fail is detected, then an error should be logged with as much information as possible.
If you are new to Pester, version 5.0 is an ideal version to start with, as the project has had ample
time to mature and the process and documentation are of a high standard.
If you have previously used earlier versions of Pester, it’s recommended to review the 5.0
documentation for the list of changes to see what’s required³ to bring your test projects up to
date. For this chapter, we’re using Pester version 5.3.0, which is current at the time of writing.
Trusting the official PowerShell Gallery⁴ for your module installations is optional but generally recommended; it will save hassle and time later on when installing other modules.
Get a list of the commands in the module to confirm everything is working as expected.
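A hedged sketch of that setup (the exact commands aren’t reproduced in this extract):

Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Install-Module -Name Pester -MinimumVersion 5.3.0 -Scope CurrentUser
Get-Command -Module Pester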
The #Requires statement ensures that users who run the code have the correct version of
PowerShell installed.⁷
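For example (the version number here is an assumption, not the book’s original):

#Requires -Version 7.0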
There are some sample search functions available that call Invoke-StarWarsApi. Only a subset
of the API functionality is covered.
You can find the code from Examples 2 and 3 in the StarWarsData.ps1⁸ file of the Extras
repository for this book on GitHub.
Example 2: Three search functions that call the API interface function
1 function Search-SWPerson {
2 param (
3 [Parameter(Mandatory)]
4 [string] $Name
5 )
6 # load all the people
7 $response = Invoke-StarWarsApi -objectType People
8 # filter on the name
9 $results = $response | Where-Object name -like "*$Name*"
10
11 if ($null -eq $results) {
12 Write-Output @{ Error = "No person results found for '$Name'."}
13 }
14 else {
15 # return all matches with some properties
16 $personDetails = $results | ForEach-Object {
17 Invoke-StarWarsApi -objectType People -id $_.id
18 }
19
20 Write-Output $personDetails | Select-Object @{
21 Name = "id";
22 Expression = { $_.id}
23 }, name, gender, height,
24 @{
25 Name = "weight"
26 Expression = {$_.mass}
27 }
28 }
29 }
⁸https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Starwars-
Demo/src/StarWarsData.ps1
1 function Search-SWPlanet {
2 param (
3 [Parameter(Mandatory)]
4 [string] $Name
5 )
6 # load all the planets
7 $response = Invoke-StarWarsApi -objectType Planets
8 # filter on the name
9 $results = $response | Where-Object name -like "*$Name*"
10
11 if ($null -eq $results) {
12 Write-Output @{ Error = "No planet results found for '$Name'."}
13 }
14 else {
15 $planetDetails = $results | ForEach-Object {
16 Invoke-StarWarsApi -objectType Planets -id $_.id
17 }
18 # return all matches with some attributes
19 Write-Output $planetDetails | Select-Object @{
20 Name = "id";
21 Expression = {$_.id}
22 },
23 name,
24 population,
25 diameter,
26 terrain
27 }
28 }
1 function Search-SWFilm {
2 param (
3 [Parameter(Mandatory)]
4 [string] $Name
5 )
6 # load all the films (currently does not include the new trilogy)
7 $response = Invoke-StarWarsApi -objectType Films
8 # filter on the name
9 $results = $response | Where-Object title -like "*$Name*"
10
11 if ($null -eq $results) {
12 Write-Output @{ Error = "No film results found for '$Name'."}
13 }
14 else {
15 # return all matches with some attributes
16 $filmDetails = $results | ForEach-Object {
17 Invoke-StarWarsApi -objectType Films -id $_.id
18 }
19 Write-Output $filmDetails | Select-Object @{
20 Name="id";
21 Expression = { $_.id}
22 },
23 title,
24 director,
25 release_date,
26 characters,
27 planets
28 }
29 }
There is a single function that wraps multiple calls and returns a composite object.
Example 3: A more complex function that builds composite objects using multiple API calls
1 function Get-SWPerson {
2 param (
3 [Parameter(Mandatory)]
4 [int] $Id
5 )
6 # get the person
7 $person = Invoke-StarWarsApi -objectType People -id $Id
8
9 if ($null -eq $person)
10 {
11 Write-Output @{
12 Error = "Unable to find a person record given Id: $Id"
13 }
14 }
15 else {
16 # get the homeworld planet and the films
17 $planet = Invoke-StarWarsApi -objectType Planets -id $person.homeworld
18 $films = Invoke-StarWarsApi -objectType Films
19
20 # get detailed info of all films
21 $filmDetails = $films | ForEach-Object {
22 Invoke-StarWarsApi -objectType Films -id $_.id
23 }
24
25 # build the result object as a mix of all the data returned
26 $result = [PSCustomObject]@{
27 Name = $person.Name
28 BodyType = $person |
29 Select-Object height, mass, gender, skin_color, eye_color
30 HomeWorld = $planet |
31 Select-Object name, population, gravity, terrain
32 Films = $filmDetails |
33 Where-Object people -contains $person.id |
34 Select-Object title, director, release_date
35 }
36 Write-Output $result
37 }
38 }
id : 1
name : Luke Skywalker
gender : male
height : 172
weight : 77
id : 9
name : Anakin Skywalker
gender : male
height : 188
weight : 84
id : 27
name : Shmi Skywalker
gender : female
height : 163
weight : unknown
When executing Get-SWPerson using the ID from Anakin Skywalker above, it returns an object
with multiple properties:
Example 5: Calling the Get-SWPerson function with the ID for Anakin Skywalker
Get-SWPerson -Id 9 | Format-List
You can find the code from Example 6 in the StarWarsData.Simple.Tests.ps1¹⁰ file of the
Extras repository for this book on GitHub.
The comments above detail which parts of the script correspond to Arrange, Act, and Assert.
The Arrange sections set up the test conditions, such as dot-sourcing the script under test and defining the search term each test uses.
The Act sections call the function Search-SWPerson, which is the one being tested.
The Assert sections utilize various Pester commands to confirm that the result of the Act section
is as expected. The Should command has many parameters like -Be, -BeLike, or -HaveCount
that make the assertion code read like English.
See this help page¹¹ for more details.
Simple tests like these are ideal, as they’re straightforward to follow, which reduces the time
taken for developers to understand the test process.
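The test script itself isn’t reproduced in this extract; as a hedged sketch of its shape (the search term and assertions are illustrative), a simple AAA test might look like:

# Arrange
BeforeAll {
    . $PSCommandPath.Replace('.Simple.Tests.ps1', '.ps1')
}

Describe 'Search-SWPerson' -Tag 'Unit' {
    It 'Returns a single match' {
        # Arrange
        $testName = 'Luke'

        # Act
        $result = Search-SWPerson -Name $testName

        # Assert
        $result | Should -HaveCount 1
        $result.name | Should -BeLike "*$testName*"
    }
}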
Describing Search-SWPlanet
[+] Returns a single match 676ms (671ms|5ms)
[+] Returns no matches 132ms (130ms|1ms)
[+] Returns multiple matches 429ms (427ms|1ms)
Tests completed in 2.75s
Tests Passed: 6, Failed: 0, Skipped: 0 NotRun: 0
¹¹https://pester.dev/docs/commands/Should
¹²Pester Team. (2021, Apr. 17). Invoke-Pester - Output. Pester Docs. [Online]. Available: https://pester.dev/docs/commands/Invoke-
Pester#-output. [Accessed: Apr. 27, 2022].
¹³Pester Team. (2021, May. 15). VSCode. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/vscode. [Accessed: Apr. 27,
2022].
You can find the code from Example 8 in the StarWarsData.Mocked.Tests.ps1¹⁵ file of the
Extras repository for this book on GitHub.
Example 8: Tests for the Search-SWPerson function that use mocking to simulate the API
1 # Arrange
2 BeforeAll {
3 . $PSCommandPath.Replace('.Mocked.Tests.ps1','.ps1')
4
5 Mock Invoke-StarWarsApi {
6 $output1 = [PSCustomObject]@{
7 id = 4
8 name = 'Darth Vader'
9 gender = 'male'
10 height = '202'
11 weight = '136'
12 }
13 $output2 = [PSCustomObject]@{
14 id = 1
15 name = 'Luke Skywalker'
16 gender = 'male'
17 height = '172'
18 weight = '77'
19 }
20 $output3 = [PSCustomObject]@{
21 id = 9
22 name = 'Anakin Skywalker'
23 gender = 'male'
24 height = '188'
25 weight = '84'
26 }
27 Write-Output @($output1, $output2, $output3)
28 } -Verifiable -ParameterFilter { $objectType -eq 'People'}
¹⁴https://pester.dev/docs/commands/Mock#-parameterfilter
¹⁵https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Starwars-
Demo/src/StarWarsData.Mocked.Tests.ps1
1 Mock Invoke-StarWarsApi {
2 $output = [PSCustomObject]@{
3 height = '172'
4 mass = '77'
5 hair_color = 'blonde'
6 skin_color = 'fair'
7 eye_color = 'blue'
8 birth_year = '199BBY'
9 gender = 'male'
10 name = 'Luke Skywalker'
11 homeworld = 1
12 id = 1
13 }
14 Write-Output @($output)
15 } -Verifiable -ParameterFilter { $objectType -eq 'People' -and $id -eq 1}
16
17 Mock Invoke-StarWarsApi {
18 $output = [PSCustomObject]@{
19 height = '202'
20 mass = '136'
21 hair_color = 'none'
22 skin_color = 'white'
23 eye_color = 'yellow'
24 birth_year = '41.9BBY'
25 gender = 'male'
26 name = 'Darth Vader'
27 homeworld = 1
28 id = 4
29 }
30 Write-Output @($output)
31 } -Verifiable -ParameterFilter { $objectType -eq 'People' -and $id -eq 4}
32
33 Mock Invoke-StarWarsApi {
34 $output = [PSCustomObject]@{
35 height = '188'
36 mass = '84'
37 hair_color = 'blonde'
38 skin_color = 'fair'
39 eye_color = 'blue'
40 birth_year = '41.9BBY'
41 gender = 'male'
42 name = 'Anakin Skywalker'
43 homeworld = 1
44 id = 9
45 }
46 Write-Output @($output)
47 } -Verifiable -ParameterFilter { $objectType -eq 'People' -and $id -eq 9}
15 $testName = 'Invalid'
16
17 # Act
18 $result = Search-SWPerson -Name $testName
19
20 # Assert
21 $result.Error | Should -Be "No person results found for '$testName'."
22 }
23 It "Returns multiple matches" {
24 # Arrange
25 $testName = 'walker'
26
27 # Act
28 $result = Search-SWPerson -Name $testName
29
30 # Assert
31 $result.Count | Should -BeGreaterThan 1
32 $result.Name -like "*$testName*"| Should -HaveCount $result.Count
33 }
34 }
The Invoke-Pester command is used to run Pester tests. For more information, refer to 5.4.6.2 Command Line in Unit Testing.
The tests above are tagged with both Unit and Mocked, so they can be filtered if required. When
you run Invoke-Pester, you can provide a -Tag <name> parameter to filter which tests are
run based on the tag.¹⁶
For the script above, if you wanted to run just mocked tests, the command line would be
something like:
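A hedged sketch, using the mocked test file referenced above:

Invoke-Pester -Path ./StarWarsData.Mocked.Tests.ps1 -Tag 'Mocked'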
Any configuration of mocked functions in Pester belongs in the Arrange section of AAA. Take care when mocking functions, as the mocks will need to be maintained if the source system they’re simulating changes over time.
¹⁶Pester Team. (2022, Jun. 19). Tags. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/tags. [Accessed: Sep. 04, 2022].
You can find the code from Example 9 in the StarWarsData.Complex.Tests.ps1¹⁷ file of
the Extras repository for this book on GitHub.
Example 9: More complex tests covering more functions, using test cases to generate multiple tests
1 # Arrange
2 BeforeAll {
3 . $PSCommandPath.Replace('.Complex.Tests.ps1','.ps1')
4 }
5
6 Describe 'Search-SWFilm' -Tag 'Unit' {
7 $itName = "Returns film with release date '<year>' & director " +
8 "'<director>' given title fragment '<name>'"
9
10 It $itName -TestCases @(
11 # Arrange
12 @{
13 name = 'Phantom'
14 year = '1999-05-19'
15 director = 'George Lucas'
16 }
17 @{
18 name = 'Empire'
19 year = '1980-05-17'
20 director = 'Irvin Kershner'
21 }
22 @{
23 name = 'Return'
24 year = '1983-05-25'
25 director = 'Richard Marquand'
26 }
27 ) {
28 # Act
29 $result = Search-SWFilm -name $name
30
31 # Assert
32 $result.Count | Should -Be 1
33 $result.title | Should -BeLike "*$name*"
34 $result.release_date | Should -Be $year
35 $result.director | Should -Be $director
36 }
37 }
¹⁷https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Starwars-
Demo/src/StarWarsData.Complex.Tests.ps1
One advantage of using Pester tests with predefined TestCases¹⁸ is that you can inject properties into the test name. This gives a more dynamic description than the static names used in simpler tests. These are called templates¹⁹.
¹⁸https://pester.dev/docs/commands/It#-testcases
¹⁹https://pester.dev/docs/usage/data-driven-tests#using--templates
Describing Get-SWPerson
[+] Returns person metadata for 'Darth Maul' with gender 'male',
eye colour 'yellow' & film count of 1 3.96s (3.96s|5ms)
[+] Returns person metadata for 'Luke Skywalker' with gender 'male',
eye colour 'blue' & film count of 6 4.37s (4.37s|2ms)
[+] Returns person metadata for 'Mon Mothma' with gender 'female',
eye colour 'blue' & film count of 1 4.19s (4.19s|3ms)
Tests completed in 16.11s
Tests Passed: 6, Failed: 0, Skipped: 0 NotRun: 0
3.5 Conclusion
As seen from the code samples above, implementing the AAA approach using Pester is straightforward. Pester’s English-like command and parameter syntax goes a long way toward making tests easy to read, helping developers understand and write them quickly.
4. Mocking
Mocking is an approach to testing that involves the replacement of dependencies with simulated
equivalents. Most useful at the unit testing stage, mocking ensures consistent behavior of
functions or data that the code unit you’re testing depends on. Mocking brings several benefits:
• Performance. Often, external APIs have latency or take substantial processing time. This
is also true for complex local functions that perform many operations. By simulating the
behavior of these locally, tests will run much quicker and use fewer processor cycles. This
saves both time and, in the case of third-party hosted test runners, money.
• Consistency. The reliability of dependencies outside of the code unit you’re testing is
assumed during unit tests. Using real dependencies can result in unexpected behavior and
cause false test passes or failures. By simulating these, you only need to worry about the
code in the test, since you know for sure what your mocked dependencies are doing.
• Analysis. Without modification, there may be no way to monitor when or how often a
dependency is called. You can monitor mocked dependencies and create assertions to check
that the code you’re testing has called them.
• Isolation. A unit test should never have downstream effects on production environments.
Mocking prevents this by eliminating real calls to external functions or APIs.
Example 1: A function that updates a local data store from a remote one
1 function Update-DataStore {
2 [CmdletBinding()]
3 param (
4 [Parameter(Mandatory)][string]$Name,
5 [Parameter(Mandatory)][string]$Source
6 )
7 process {
8 # Get list of updates from API
9 $RequestUri = '{0}/{1}' -f $Source.TrimEnd('/'), 'updates'
10 $Updates = (Invoke-RestMethod -Uri $RequestUri).updates.date
11
12 # Determine latest update and parse unix timestamp
¹M. Fowler. (2007, Jan. 02). Mocks Aren’t Stubs. martinFowler.com. [Online]. Available: https://martinfowler.com/articles/
mocksArentStubs.html. [Accessed: Jun. 13, 2022].
13 [int64]$Latest = 0
14 $Updates.ForEach{ if ($_ -gt $Latest) { $Latest = $_ } }
15 $UnixEpoch = [datetime]::new(1970, 1, 1, 0, 0, 0, 0, 1)
16 $Update = $UnixEpoch.AddMilliseconds($Latest)
17
18 # Get local data store
19 $Store = Get-DataStore -Name $Name
20
21 # Compare updates, and stop if local store is up-to-date
22 if ($Update -le $Store.Update) { return }
23
24 # Get data of latest update from API
25 $RequestUri = '{0}/{1}/{2}' -f $Source.TrimEnd('/'), 'data', $Latest
26 [psobject]$NewData = Invoke-RestMethod $RequestUri
27
28 # Save new data to local store
29 Set-DataStore -Name $Name -Data $NewData.data -Update $Update
30 }
31 }
The real functionality of Get-DataStore, Set-DataStore, and the API doesn’t matter
for this example, but you can view a functional demonstration² in the Extras³ repository
for this book on GitHub.
Unit tests on this function should focus on the state of the data store after execution, verifying
that the function updates the correct one based on the -Name parameter and that updates only
occur if the local store is outdated. You can, of course, use mocking for these tests to eliminate
API calls with Invoke-RestMethod and ensure that the test operates on a dummy data store.
Mock tests, on the other hand, should focus on the calls to Get-DataStore, Set-DataStore,
and Invoke-RestMethod to verify that the function calls them the correct number of times and
with the correct parameters.
• Fake: Any object, function, or API that replaces a real dependency in tests. Under some
definitions, a fake refers to a fully comprehensive simulation of a dependency, operationally
compatible with the real behavior.
• Stub: A simple fake of a dependency that provides predetermined responses or values and
can’t fail a test. A validation stub might always return $true, for example.
²https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/Mocking/
DataStoreDemo/
³https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/
⁴R. Osherove. (2013, Nov.). The Art of Unit Testing. 2nd ed. Germany: Manning. ISBN: 9781617290893.
• Mock: A more complex fake of a dependency that provides dynamic responses and can fail
a test. A mocked database API might fail a test if it receives an invalid query, for example.
• Seam: A place in the code you’re testing where a dependency or functionality is interchange-
able. An example of this from Example 1 is the call to Get-DataStore. You could redirect
this to a stub that always returns a valid store object regardless of the -Name parameter.
The Pester documentation uses the term mock universally,⁵ so the rest of the chapter assumes
that mock can refer to any kind of fake.
The command must exist in the scope of the current block where you’re defining the mock. For
example, the Get-DataStore function from Example 1 might look like this:
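(A plausible sketch; the real implementation, including the Get-DataStoreFile helper it calls, is in the DataStoreFunctions.ps1 file of the Extras repository.)

function Get-DataStore {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)][string]$Name
    )
    process {
        # Resolve the store file on disk via an internal helper, then rehydrate it
        $File  = Get-DataStoreFile -Name $Name
        $Store = Import-Clixml -LiteralPath $File.FullName
        [pscustomobject]@{
            Name   = $Name
            Data   = $Store.Data
            Update = $Store.Update
        }
    }
}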
⁵Pester Team. (2022, Jun. 25). Mocking with Pester. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/mocking. [Accessed:
Sep. 04, 2022].
⁶Pester Team. (2022, Mar. 06). Pester - Mock.ps1. L228-L261. Pester/Pester on GitHub. [Online]. Available: https://github.com/pester/
Pester/blob/main/src/functions/Mock.ps1. [Accessed: Jun. 14, 2022].
To create a simple mock (stub) for this function, you can return a similar object without accessing
any real data.
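A stub along these lines is enough; the object shape mirrors the real function's output:

Mock Get-DataStore {
    [pscustomobject]@{
        Name   = $Name                     # the name passed to the call, echoed back
        Data   = $null                     # empty data
        Update = (Get-Date).AddDays(-1)    # exactly one day ago
    }
}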
From Pester 5.0 and upwards, you don’t need to include a param() block in your mock
script. Pester copies the parameters from the real function. This is also the case when
you use the -TestCases parameter with It.⁷
If you work with earlier versions of Pester, you need to add param ($Name) to the top of the
mock script.
The mock script returns the same kind of object as the real function, but doesn’t check to see
if a real store exists with the passed name, nor tries to access any real data. Instead, it returns
the name passed with the $Name parameter as if it was a valid data store name, along with an
empty Data property and Update value of exactly one day ago. Running a few tests with random
names demonstrates that the mock is now receiving the function calls.
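For instance, with the stub above sitting in a BeforeAll block, a test can pass any name it likes and still get a predictable object back:

It 'Returns the stub for a random name' {
    $RandomName = [guid]::NewGuid().Guid
    $Store = Get-DataStore -Name $RandomName

    $Store.Name | Should -Be $RandomName
    $Store.Data | Should -BeNullOrEmpty
}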
⁷Pester Team. (2021, May. 14). Migrating from Pester v4 to v5. Pester Docs. [Online]. Available: https://pester.dev/docs/migrations/v4-
to-v5. [Accessed: Jun. 16, 2022].
Pester v5.3.0
Find the Examples3and4.Tests.ps1⁸ file used in this example on the Extras⁹ repository on
GitHub.
It’s important that you define the mock in the same scope or a parent scope of the tests that use it.
It must also be inside a BeforeEach, BeforeAll, or It block. The chapter covers mock scoping
in more detail later.
⁸https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Mocking/
DataStoreDemo/Examples3and4.Tests.ps1
⁹https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/
The -Times and -Exactly parameters coupled with -Not provide full control over the number of
times that your tests can call a mock. Should -Not -Invoke ... without -Times or -Exactly
means the tests shouldn’t call the mock at all in this scope. By default, Pester assumes you mean
calls to the mock made only in the current scope. In Example 5, this means the It block where
the Should assertions are.
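A minimal illustration, not the book's Example 5, assuming the data store functions are loaded so both commands can be mocked:

It 'Calls Get-DataStore twice and never calls Set-DataStore' {
    Mock Get-DataStore { }
    Mock Set-DataStore { }

    Get-DataStore -Name 'One'
    Get-DataStore -Name 'Two'

    Should -Invoke Get-DataStore -Exactly 2
    Should -Not -Invoke Set-DataStore
}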
To change the target scope of the assertion, use the -Scope parameter. This accepts either It,
Context, Describe, or a positive integer.
Since you can nest Context and Describe blocks in many ways, the nth parent mode is useful
for selecting an exact parent scope of the current It block. If you have one Context block inside
another, for example, you can’t target the outer one with -Scope Context.
The next example creates some more It tests, but inside nested Context blocks within the
Describe block.
The two assertions in the first It block (Example 6a) target only the scope of the It block
itself. This is the same as the default behavior when -Scope isn’t present. The assertions in
the second It block (Example 6b) target the inner Context block, called ‘An inner scope’. This
demonstrates that -Scope Context selects the innermost parent of that type, and the same is
true with Describe.
Example 6c is where things get a little trickier. Since the It block is inside nested Context blocks,
the first assertion fails because the target scope is the inner one. Instead of the expected two calls,
it only sees one. To target the outer Context block, called ‘Scope tests’, you must use a numerical
scope. In the case of Example 6c, -Scope 1 is the inner Context block, -Scope 2 is the outer
Context block, and -Scope 3 is the Describe block (not shown).
If you pass a scope number of four, you’ve now reached the root scope of the test file, which
includes other Describe and Context blocks that run before this one. Therefore, calls to a
mock of the same name in those separate blocks will now count towards Should -Invoke, even
if defined with a different mock inside a different Before* block. You can’t go further than the
root scope, so scope numbers larger than this still target the root scope.
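A condensed sketch of numeric scoping (the block names and call counts are illustrative, and the real Get-DataStore must be loaded for the mock to be created):

Describe 'Scope sketch' {
    BeforeAll { Mock Get-DataStore { 'Mocked' } }

    Context 'Outer' {
        BeforeAll { $null = Get-DataStore -Name 'OuterCall' }   # one call in the outer Context

        Context 'Inner' {
            It 'Needs a numeric scope to reach the outer Context' {
                $null = Get-DataStore -Name 'InnerCall'

                Should -Invoke Get-DataStore -Exactly 1 -Scope Context   # innermost Context only
                Should -Invoke Get-DataStore -Exactly 2 -Scope 2         # 'Outer' sees both calls
            }
        }
    }
}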
When Pester reaches a Should -InvokeVerifiable assertion, the test passes only if every
verifiable mock has been called before that point.
Since Pester evaluates the assertions at the time it comes across them, take care with your
placement of -Invoke and -InvokeVerifiable. Place them after any mock calls that you want
Pester to count.
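For example, a sketch of a verifiable mock asserted inside a single It block:

It 'Calls every verifiable mock before the assertion' {
    Mock Get-DataStore { 'Mocked' } -Verifiable

    Get-DataStore -Name 'AnyName'   # the call happens before the assertion
    Should -InvokeVerifiable        # passes because the verifiable mock was invoked
}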
Pester v5.3.0
Find the Examples5to8.Tests.ps1¹⁰ file used in this example on the Extras¹¹ repository on
GitHub.
The test results confirm the assertions are scoped correctly. Notice also that the indentation of
the test result output varies based on the nesting of the blocks in the test file. The final (verifiable
mock assertion) test is inside the ‘Scope tests’ block, but not inside the ‘An inner scope’ block,
for example. This output can prove useful when diagnosing scoping errors for mock-related
assertions.
• If you place the Mock... declaration in a BeforeAll block, it applies to the whole of the
current Context or Describe.
• You shouldn’t place a Mock... declaration directly inside a Context or Describe block,
in line with Pester 5.0’s new discovery and run best practices.¹³
• You shouldn’t place a Mock... declaration inside BeforeDiscovery blocks—mocks don’t
function in the discovery phase since it’s only for inspecting the test file structure and
generating tests.
• Placing a Mock... declaration in AfterEach or AfterAll blocks is ineffective since Pester
resets the scope between tests. The section discusses this later.
Example 10a places an It block before the BeforeAll, but the mock is still available as
BeforeAll runs before any tests. Example 10b returns the same result and confirms that mocks
placed in BeforeAll apply to every It in the current Context or Describe. Example 10c
demonstrates that more locally defined mocks override existing ones. This means any mock
of the same dependency defined in an It block replaces the one that applies to the Context
or Describe. Likewise, any mocks defined in the BeforeAll or BeforeEach block of a nested
(child) block replace those from the parent block.
If not overridden, mocks are inherited by child blocks:
19
20 It 'Unless a more local mock takes precedence' {
21 # Inherits the overridden mock from the Context block
22 $Result = Get-DataStore -Name RandomName
23 $Result | Should -Not -Be 'Mocked'
24 $Result | Should -Not -Be 'Re-mocked'
25 $Result | Should -Be 'Re-mocked again'
26 }
27
28 }
Overriding a mock does so at the current scope and in all child scopes. The Context block in
Example 11a inherits the mock from the parent Describe and, in turn, the It block inherits
it from the Context. In Example 11b, a new mock replaces the inherited one, so the It block
inherits this replacement from the Context.
It’s important to remember that the BeforeAll block runs once before all the tests in the current
scope, whereas BeforeEach runs before every test in the same scope. Because of this, a mock
defined in a BeforeEach overrides one defined in a BeforeAll of the same scope.
Example 12: The execution order of BeforeAll and BeforeEach has consequences for mocks
1 # Same Describe block as Example 10
2 Context 'BeforeEach and BeforeAll execution order' {
3
4 BeforeEach {
5 if ($Alt) {
6 Mock Get-DataStore { 'BeforeEach Mock' }
7 }
8 }
9 BeforeAll {
10 Mock Get-DataStore { 'BeforeAll Mock' }
11 }
12
13 It 'Test <Test> should use the Before<Mock> Mock' -TestCases @(
14 @{ Test = 1; Alt = $false; Mock = 'All' }
15 @{ Test = 2; Alt = $true; Mock = 'Each' }
16 @{ Test = 3; Alt = $false; Mock = 'All' }
17 ) {
18 Get-DataStore -Name RandomName | Should -Be "Before$Mock Mock"
19 }
20
21 }
Example 12 runs three tests, and the mock they each use changes between them. Why is this?
Looking at the execution order, it becomes clear how and when the original mock gets overridden.
1. The BeforeAll of the Context block runs and overrides the original mock from the
Describe block
2. The BeforeEach of the Context block runs before Test 1, but the $Alt variable is False
so nothing happens
3. Test 1 runs and inherits the mock from the BeforeAll.
4. The BeforeEach runs before Test 2, and the $Alt variable is True this time, so it defines a
new mock
5. Test 2 runs and inherits the replaced mock from the BeforeEach that just ran
6. The BeforeEach runs before Test 3, and the $Alt variable is again False, so nothing
happens
7. Test 3 runs and inherits the mock from the BeforeAll.
So, why does Test 3 not inherit the new mock, since the BeforeEach defines it earlier, before
Test 2? The answer is that Pester 5.0 runs each test in a new scope inherited directly from the
parent scope.¹⁴ Therefore, each generated test can’t communicate with others, and each run of
BeforeEach is specific to that test. The effects of the BeforeEach that runs for Test 2 end when
the test does, and the scope ‘resets’ to the parent Context one. When Test 3 runs, the environment
is exactly as it was before Test 2 and Test 1, so when BeforeEach does nothing, the mock created
in the BeforeAll is what gets inherited.
Considering the isolated nature of individual test scopes, it becomes apparent that defining mocks
in AfterAll or AfterEach blocks is useless for their intended purpose. Pester would create the
mock after all or each of the tests and immediately remove it as the test or block scope was torn
down.
Running the tests from Examples 10 to 12:
Pester v5.3.0
Find the Examples10to12.Tests.ps1¹⁵ file used in this example on the Extras¹⁶ repository
on GitHub.
¹⁴Pester Team. (2022, Jun. 19). Discovery and Run. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/discovery-and-run.
[Accessed: Jun. 21, 2022].
¹⁵https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Mocking/
DataStoreDemo/Examples10to12.Tests.ps1
¹⁶https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/
PesterBoundParameters
When inside a mock script, the $PSBoundParameters variable doesn’t work because
of how Pester uses proxy functions for mocking.¹⁷ Pester version 5.2 introduces a
functionally equivalent stand-in for this, $PesterBoundParameters. Use this variable
in the same way you would $PSBoundParameters.
Consider the data store example once again. Imagine that the real Get-DataStore function relies
on an internal function, Get-DataStoreFile, to get valid store files from the disk. By mocking
the internal function in the module’s scope, you’ve changed the behavior of the module without
rewriting it.
The following snippet creates a module on-the-fly from the DataStoreFunctions.ps1¹⁹ file, which
the tests can interact with.
Example 14: Creating a module from a script file for use in tests
1 BeforeDiscovery {
2 $GetModuleParams = @{
3 Name = 'DataStoreFunctions'
4 ErrorAction = 'SilentlyContinue'
5 }
6 Get-Module @GetModuleParams | Remove-Module
7 New-Module -Name DataStoreFunctions -ScriptBlock {
8 # Load functions
9 . (Join-Path (Split-Path $PSCommandPath) 'DataStoreFunctions.ps1')
10 # Export public functions
11 $Exports = @{
12 Function = @(
13 'New-DataStore',
14 'Remove-DataStore',
15 'Get-DataStore',
16 'Get-DataStoreDate',
17 'Set-DataStore',
18 'Set-DataStoreDate',
¹⁷Pester Team. (2022, Jun. 25). Mocking with Pester. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/mocking. [Accessed:
Sep. 04, 2022].
¹⁸Pester Team. (2021, May. 13). InModuleScope. Pester Docs. [Online]. Available: https://pester.dev/docs/commands/InModuleScope.
[Accessed: Jun. 22, 2022].
¹⁹https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Mocking/
DataStoreDemo/DataStoreFunctions.ps1
19 'Update-DataStore'
20 )
21 }
22 Export-ModuleMember @Exports
23 } | Import-Module -Force
24 }
If you try to use Get-DataStore with an invalid name, it throws an error because it’s currently
using the real Get-DataStoreFile internal function.
Example 15: The module uses its real internal functions without mocking
1 Describe 'Mocking in modules' {
2
3 It 'Throws an error because real private function called' {
4 { Get-DataStore -Name SomeName } | Should -Throw
5 }
6
7 }
You can’t use Mock for Get-DataStoreFile because it’s not exported as a public module
member.
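A naive attempt, sketched here, fails as soon as Mock tries to resolve the command:

It "Can't mock a private function in the test scope" {
    Mock Get-DataStoreFile { }              # throws: the command isn't visible here
    Get-DataStore -Name SomeName | Should -Not -BeNullOrEmpty
}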
[-] Can't mock a private function in the test scope 59ms (58ms|0ms)
CommandNotFoundException: Could not find Command Get-DataStoreFile
Using the -ModuleName parameter of Mock, you can create a mock of the private function, and
return a valid dummy store file. To make this work, you need to create the dummy data store in
the BeforeAll section. You can use Pester’s test drive for this, which is a temporary PSDrive
specifically for tests.²⁰
²⁰Pester Team. (2022, Jun. 19). Isolating File Operations using the TestDrive. Pester Docs. [Online]. Available: https://pester.dev/
docs/usage/testdrive. [Accessed: Jun. 21, 2022].
Example 17: Creating the dummy data store using Pester’s test drive
1 BeforeAll {
2 $DummyXMLParams = @{
3 Path = 'TestDrive:\DummyDataStore.xml'
4 InputObject = [pscustomobject]@{}
5 }
6 Export-Clixml @DummyXMLParams
7 }
Example 18: Mocking an internal function using a dummy file and Mock -ModuleName
1 # Same Describe block as Example 15
2 It 'Returns the dummy data when private function mocked' {
3 Mock Get-DataStoreFile {
4 Get-Item 'TestDrive:\DummyDataStore.xml'
5 } -ModuleName DataStoreFunctions
6 $Store = Get-DataStore -Name SomeName
7 $Store.Name | Should -Be 'SomeName'
8 $Store.Data | Should -BeNullOrEmpty
9 }
[+] Returns the dummy data when private function mocked 15ms (15ms|0ms)
You can achieve the same result using InModuleScope, either just for the creation of the mock
or by running the entire test in the module scope.
You can use InModuleScope anywhere that you’d usually place code in a Pester test file. You
can also use it to wrap entire Describe or Context blocks, in order to run entire test programs
in the scope of the module:
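A sketch of that pattern, assuming the DataStoreFunctions module from Example 14 is imported and the dummy file from Example 17 exists:

InModuleScope DataStoreFunctions {
    Describe 'Mocking in the module scope' {
        It 'Returns the dummy data when the private function is mocked' {
            # No -ModuleName needed; this whole block runs inside the module
            Mock Get-DataStoreFile { Get-Item 'TestDrive:\DummyDataStore.xml' }
            (Get-DataStore -Name SomeName).Name | Should -Be 'SomeName'
        }
    }
}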
Pester v5.3.0
Find the Examples14to21.Tests.ps1²¹ file used in this example on the Extras²² repository
on GitHub.
The filter script is a script block that must return True in order for the mock to accept and handle
the call. Think of this as similar to the script block you would pass to Where-Object.
Consider the Update-DataStore function from Example 1. It makes two calls to Invoke-
RestMethod. One of these gets the list of updates to a remote data store, and the other gets
the contents of a single update. Any unit tests of the Update-DataStore function need the
dependency to behave consistently across tests, but differently according to the URI parameter.
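A sketch of the first filtered mock, placed in a BeforeAll of the shared Describe block (Example 22), might look like this. The time stamps are fixed here for simplicity, whereas the chapter's version randomises them within a range, and the $script: prefix is just one way to make $LaterUnix visible to the assertions further on:

$script:EarlierUnix = 1640995200000
$script:LaterUnix   = 1654041600000

Mock Invoke-RestMethod {
    # Fake 'updates' response: two updates with empty GUIDs and differing time stamps
    [pscustomobject]@{
        updates = @(
            [pscustomobject]@{ guid = [guid]::Empty; date = $script:EarlierUnix }
            [pscustomobject]@{ guid = [guid]::Empty; date = $script:LaterUnix }
        )
    }
} -ParameterFilter { $Uri -like '*/updates' } -Verifiable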
The real API returns an object containing an array of update objects, each made of a GUID and
a Unix time stamp in milliseconds. The mock in the example replicates this with two updates
containing empty (zero) GUIDs and random time stamps in a predefined range. Returning two
fake updates with differing time stamps means that tests can ensure Update-DataStore is
selecting the latest updates correctly.
The new feature of this example is, of course, the -ParameterFilter parameter. Recall that
Pester copies the parameters of the real dependency in mocks, and the same is true for filter
scripts. The filter script here checks whether the -Uri parameter ends with ‘/updates’. Take a
look back at Example 1 and you’ll be able to see how this will ‘catch’ the right calls.
To mock the calls to Invoke-RestMethod that receive the new data store contents, a second
mock is necessary.
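A sketch of that second mock; the date field name and the fixed property values are assumptions, chosen to line up with the unit tests shown later:

Mock Invoke-RestMethod {
    [pscustomobject]@{
        date = [int64]($Uri -split '/')[-1]          # time stamp read back out of the URI
        data = [pscustomobject]@{
            Property1 = 'Value 1'
            Property2 = 'Value 2'
        }
    }
} -ParameterFilter { $Uri -match '/data/\d+$' } -Verifiable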
The filter script in this example matches any calls where the URI ends with ‘/data/’ followed by
only numbers. Since the real API supplies data stores based on the Unix time stamps, this mock will
intercept any attempts to retrieve new data store contents. The mock script itself simply provides
a fake data store object with fixed data, and a date that’s taken from the -Uri parameter passed
to it.
This just leaves the behavior when the URI doesn’t resemble an update or data store request at
all. With no additions to the test script, Pester passes these calls on to the real dependency. This
is undesirable given the purpose of unit tests, so you can use a third mock without a parameter
filter to catch any other calls.
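A sketch of such a catch-all mock:

Mock Invoke-RestMethod {
    $Passed = $PesterBoundParameters.GetEnumerator() |
        ForEach-Object { '{0} = {1}' -f $_.Key, $_.Value }
    throw "Unexpected call to Invoke-RestMethod: $($Passed -join '; ')"
}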
This mock immediately throws an error in order to fail the test, since any URI passed that doesn’t
match the first two mocks is invalid for the real API. The extra code uses $PesterBoundParameters
to list any parameter keys and values passed in the invalid scenario, which helps to diagnose
and correct a test failure.
You’re almost ready to use these mocks in unit tests. An important thing to remember is full
dependency coverage. There aren’t any mocks for Get-DataStore or Set-DataStore in this
file, currently, and Update-DataStore calls both. Focusing on fully covering one dependency
and forgetting to account for others is a common unit testing mistake.
The next example creates a dummy data store that the other mocks access.
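A sketch of that arrangement, again in the BeforeAll of the shared Describe block; the $script: prefix is a scoping choice of this sketch, not something the chapter prescribes:

$script:DummyName  = 'Store-{0:x8}' -f (Get-Random)
$script:DummyStore = [pscustomobject]@{
    Name   = 'Not Set'
    Data   = $null
    Update = [datetime]::MinValue        # the earliest value DateTime can represent
}

Mock Get-DataStore { $script:DummyStore } -Verifiable
Mock Set-DataStore {
    # Write whatever Update-DataStore passes straight into the dummy store
    $script:DummyStore.Name   = $Name
    $script:DummyStore.Data   = $Data
    $script:DummyStore.Update = $Update
} -Verifiable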
Notice how the values of the dummy store at definition differ from the fake values that
the Invoke-RestMethod mocks return. The following tests use these to verify that Update-
DataStore writes the correct values to the dummy store.
Example 27: Unit tests for Update-DataStore using the filtered mocks
1 # Same Describe block as Example 22
2 Context 'Unit tests' {
3
4 BeforeEach {
5 Update-DataStore -Name $DummyName -Source 'https://example.com'
6 }
7
8 It 'Accesses the right data store' {
9 $DummyStore.Name | Should -BeExactly $DummyName
10 }
11 It 'Chooses the latest update' {
12 $UnixEpoch = [datetime]::new(1970, 1, 1, 0, 0, 0, 0, 1)
13 $Later = $UnixEpoch.AddMilliseconds($LaterUnix)
14 $DummyStore.Update | Should -BeExactly $Later
15 }
16 It 'Adds new values to the data store' {
17 $DummyStore.Data.Property1 | Should -Be 'Value 1'
18 $DummyStore.Data.Property2 | Should -Be 'Value 2'
19 }
20
21 }
The tests in this example run Update-DataStore three times, once for each It block. The first
call should update the dummy data store with the values from the second Invoke-RestMethod
mock. The name stored in $DummyStore therefore changes from ‘Not Set’ to the randomly
generated name from Example 26. The two subsequent tests shouldn’t attempt to update the
dummy store again, since the date it contains now matches that from the mock. Later tests can
confirm this with mock assertions.
The second test causes Update-DataStore to run again, but, as discussed in the previous
paragraph, no values should change. This test checks that the dummy store’s date has changed
from the earliest that DateTime can represent, to the date represented by $LaterUnix. The test
uses the same method to convert the Unix time stamp from $LaterUnix to a DateTime, so these
dates should match if Update-DataStore is accessing and converting this value correctly. The
third test causes a third run, and this time checks that Update-DataStore has added the two
properties and values from the mock to the dummy data store.
Note that the three assertions in the example will work just as well if placed in the first It block.
All the data these tests look for should have been set in the first call to Update-DataStore.
Using three It blocks provides more granular results in the event of a single assertion failing
and has the added benefit of calling Update-DataStore multiple times, since the call is in a
BeforeEach block. The multiple calls support the following mock tests, which check that the
second two calls to Update-DataStore result in no further changes to the dummy store, since
it’s up-to-date after the first one.
Example 28: Mock tests that check how Update-DataStore responds to an updated dummy store
1 # Same Describe block as Example 22
2 Context 'Mock tests' {
3
4 It 'Accesses data stores and API at least once' {
5 Should -InvokeVerifiable
6 }
7 It 'Calls Invoke-RestMethod 4 times overall' {
8 Should -Invoke Invoke-RestMethod -Exactly 4 -Scope Describe
9 }
10 It 'Calls Invoke-RestMethod 3 times for update URL' {
11 Should -Invoke Invoke-RestMethod -ParameterFilter {
12 $Uri -match '.+/updates$'
13 } -Exactly 3 -Scope Describe
14 }
15 It 'Calls Invoke-RestMethod once for data URL' {
16 Should -Invoke Invoke-RestMethod -ParameterFilter {
17 $Uri -match '/data/\d+$'
18 } -Exactly 1 -Scope Describe
19 }
20
21 }
Note first that -InvokeVerifiable isn’t affected at all by parameter filters, only by the -
Verifiable switch of Mock. If a test calls all mocks marked as verifiable, the assertion succeeds.
You could mark only some of your filtered mocks as verifiable, in which case those mocks must
be called for the assertion to succeed. Additional mocks with the same name, but without the
-Verifiable switch, don’t count.
The second test checks Invoke-RestMethod with no filtering, so the assertion counts calls to all
mocks with that name. There should be four calls—three with the update URI and one with the
data URI.
The third and fourth tests check this explicitly with the -ParameterFilter parameter of Should
-Invoke. Note how the third test uses a different parameter filter script than the one for the
mock itself. The filter script for the assertion uses a regex match, whereas the one for the mock
definition uses a wildcard match. This demonstrates that the filters you use for mock assertions
don’t have to be identical to the ones you use for mock definitions. Of course, if no mock exists
that would handle the parameters defined by your assertion filter, the assertion will always fail.
This freedom means you can cover overlapping parameter scenarios for multiple filtered mocks
(of the same dependency) in your assertions. Or, you can test more granular scenarios for the
same mock with individual assertions that have stricter parameter filters.
Finally, note that the Should -Invoke assertions all specify -Scope Describe. Since the calls
to Update-DataStore came from a different Context block (‘Unit tests’), they won’t count in
this one (‘Mock tests’) due to mock scoping. By specifying the parent Describe of both Context
blocks, the assertions here will count them.
This just leaves a couple of mock tests for Get-DataStore and Set-DataStore. As with the
previous example, there should be three calls to Get-DataStore since three update checks should
occur. Only one call to Set-DataStore confirms that repeated calls to Update-DataStore
resulted in no action on the dummy store.
Example 29: Mock tests for the calls to Get-DataStore and Set-DataStore
1 # Same Describe/Context block as Example 28
2 It 'Calls Get-DataStore three times' {
3 Should -Invoke Get-DataStore -Exactly 3 -Scope Describe
4 }
5 It 'Calls Set-DataStore once' {
6 Should -Invoke Set-DataStore -Exactly 1 -Scope Describe
7 }
Pester v5.3.0
Find the Examples23to29.Tests.ps1²³ file used in this example on the Extras²⁴ repository
on GitHub.
²³https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Mocking/
DataStoreDemo/Examples23to29.Tests.ps1
²⁴https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/
A Should -Invoke assertion that uses -ExclusiveFilter passes only if both of the following hold:
• The tests call the mock the number of times specified with -Times or -Exactly
• No other calls to the mock occur that don’t match the filter
This means that you can only use -ExclusiveFilter once for each mock in the current scope.
Any other assertions for the same mock and scope that use -ParameterFilter become useless:
any mock calls that would cause those additional assertions to pass will cause the -ExclusiveFilter
assertion to fail. This makes the filter useful when it’s imperative that a dependency call occurs only
under a single set of conditions, for example where the function under test transmits sensitive data
or makes important changes.
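For instance, a sketch of an assertion that locks a sensitive call down to a single file path, assuming Export-Clixml is mocked and $ExpectPath is defined in the Context’s BeforeAll:

It 'Calls Export-Clixml once, and only with the expected path' {
    Should -Invoke Export-Clixml -Exactly 1 -ExclusiveFilter {
        $Path -eq $ExpectPath
    } -Scope Context
}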
The exclusive filter ensures that no other calls to Export-Clixml occur, and also that the
matching call uses the correct file path. Note once again that the assertion specifies an explicit
scope (Context). The call to Set-DataStore happens in the BeforeAll block of Context, so
the mock call also happens in the Context scope, not the It scope.
You can use additional exclusive filters in assertions for other mocks.
Example 32: ExclusiveFilter excludes nonmatching calls to the same mock and in the same scope
1 # Same Context block as Example 31
2 It 'Calls Set-ItemProperty with correct path, name, and value once' {
3 Should -Invoke Set-ItemProperty -Exactly 1 -ExclusiveFilter {
4 $LiteralPath -eq $ExpectPath -and
5 $Name -eq 'LastWriteTimeUtc' -and
6 $Value -eq $ExpectDate
7 } -Scope Context
8 }
The test script declares the $ExpectDate and $ExpectPath variables in the BeforeAll block,
so these are available to the entire Context block and all It blocks it contains. Run the tests to
confirm this:
Pester v5.3.0
Find the Examples31and32.Tests.ps1²⁵ file used in this example on the Extras²⁶ repository
on GitHub.
In this example, the mocks modify incoming file paths to ensure data is only written to Pester’s
test drive. They then pass on the safe values to the underlying dependencies that Get-Command
returned. Additional tests confirm that Set-DataStore has written the correct data to the test
drive as if it was a real data store location.
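A sketch of that call-through pattern for one of the mocks; the chapter does the same for Set-ItemProperty, and this version assumes the tested code passes -Path and -InputObject by name:

BeforeAll {
    # Capture the real cmdlet so the mock can call through to it
    $script:RealExportClixml = Get-Command Export-Clixml -CommandType Cmdlet

    Mock Export-Clixml {
        # Redirect writes into the test drive, then hand off to the real cmdlet
        $SafePath = Join-Path 'TestDrive:' (Split-Path $Path -Leaf)
        & $script:RealExportClixml -Path $SafePath -InputObject $InputObject
    }
}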
Example 35: More tests to confirm the contents of the new test drive file
1 # Same Context block as Example 33
2 It 'Writes correct date to data store attributes' {
3 $File = Get-Item -LiteralPath $ExpectPath
4 $File.LastWriteTimeUtc | Should -Be $ExpectDate
5 }
6
7 It 'Stores PSCustomObject with property a = 1' {
8 [xml]$Data = Get-Content -LiteralPath $ExpectPath
9 $Types = $Data.Objs.Obj.TN.ChildNodes
10 $Types.Count | Should -Be 2
11 $Types[0].'#text' |
12 Should -Be 'System.Management.Automation.PSCustomObject'
13 $Values = $Data.Objs.Obj.MS.ChildNodes
14 # Remaining assertions assumed; see Examples34and35.Tests.ps1 for the original
15 $Values.Count | Should -Be 1
16 $Values[0].N | Should -Be 'a'
17 $Values[0].'#text' | Should -Be '1'
18 }
Run the tests to confirm that you can access underlying dependencies in Pester.
Pester v5.3.0
Find the Examples34and35.Tests.ps1²⁹ file used in this example on the Extras³⁰ repository
on GitHub.
²⁹https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Mocking/
DataStoreDemo/Examples34and35.Tests.ps1
³⁰https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/
The -Update parameter of Set-DataStore, for example, is typed as [datetime] and carries a
ValidateScript attribute that rejects dates earlier than the year 2000:
[ValidateScript({
    $_ -ge [DateTime]::new(2000, 1, 1, 0, 0, 0, 0, 1)
})]
[datetime]$Update
If you mock Set-DataStore and pass invalid parameters to it, the call throws an error.
Example 37: Pester copies parameter typecasting and validation from the real dependency for mocks
1 Describe 'Parameter validation and typecasting in mocks' {
2
3 It 'Mock calls with invalid parameters throw errors' {
4
5 Mock Set-DataStore {
6 # Does nothing
7 }
8
9 # Bad types for -Data and -Update
10 { Set-DataStore -Name 'AName' -Data '' -Update '' } |
11 Should -Throw
12
13 # The value of -Update is before 2000
14 { Set-DataStore -Name 'AName' -Data @{} -Update ([datetime]0) } |
15 Should -Throw
16
17 }
18
19 }
The Mock statement includes two parameters that can make things a little simpler in these
scenarios.
The -RemoveParameterType parameter sets the required type of the parameter value to Object,
from which all PowerShell objects derive. Any parameters you pass to this in your mock
definition will now accept any type. In other words, you’ve disabled typecasting for those
parameters.
The -RemoveParameterValidation parameter, on the other hand, removes the Validate*
attributes from the parameters you pass. This includes the [ValidateScript({...})] for the
-Update parameter of Set-DataStore.
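A sketch of a mock that relaxes both constraints; which parameters the chapter’s Example 38 actually relaxes is assumed here from the surrounding description:

It 'Accepts loosely typed and out-of-range values once constraints are removed' {
    Mock Set-DataStore { } -RemoveParameterType Data, Update -RemoveParameterValidation Update

    # A string for -Data and a pre-2000 date for -Update now bind without errors
    { Set-DataStore -Name 'AName' -Data 'NotAHashtable' -Update ([datetime]0) } |
        Should -Not -Throw
}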
The -Name parameter still retains its string type, as the mock doesn’t specify this one for
typecasting or validation removal. Note also that neither of these parameters makes mandatory
parameters optional. You must still pass non-null values for them, even with validation and
typecasting disabled.
Example 39: The Mandatory attribute isn’t affected by validation or typecasting removal
1 # Same Describe block as Example 38
2 It 'Null values always fail for mandatory parameters' {
3
4 # Name wasn't included in -RemoveParameterValidation
5 # or -RemoveParameterType
6 { Set-DataStore -Name '' -Data '' -Update '' } |
7 Should -Throw
8
9 # Update was included in both, but still can't accept $null
10 { Set-DataStore -Name 'AName' -Data '' -Update $null } |
11 Should -Throw
12
13 }
Run the tests to check that the statements do indeed throw errors as expected:
Pester v5.3.0
Find the Examples37to39.Tests.ps1³¹ file used in this example on the Extras³² repository
on GitHub.
³¹https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Mocking/
DataStoreDemo/Examples37to39.Tests.ps1
³²https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/
Note how the flag-and-value pairs common to many applications, which PowerShell would
normally bind to named parameters, arrive as individual items in the $args array. When mocking
native applications, you need to apply your own argument/parameter processing logic.
³³Microsoft. (2022, Jan. 07). About Automatic Variables (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_automatic_variables#args. [Accessed: Jul.
04, 2022].
tar is an application used to create and extract from archive files. It’s available in *nix
and in current Windows 10/11 environments.
The two mocks have parameter filters that check the array of arguments for their respective flags
(-x and -t). The mock for -x also checks for the v flag, which would print the names of extracted
files with the real dependency.
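A sketch of the idea; the exact flag strings and output lines are illustrative, and tar itself must be present for Pester to mock it:

Describe 'Mocking a native application' {
    BeforeAll {
        Mock tar { 'x DummyDataStore.xml' } -ParameterFilter { $args -contains '-xvf' }
        Mock tar { 'DummyDataStore.xml' }   -ParameterFilter { $args -contains '-tf' }
    }

    It 'Routes extraction calls to the -xvf mock' {
        tar -xvf 'TestDrive:\archive.tar' | Should -Be 'x DummyDataStore.xml'
        Should -Invoke tar -Exactly 1 -ParameterFilter { $args -contains '-xvf' }
    }
}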
As Example 42 shows, you can also use the Should -Invoke assertions with mocks of native
applications. Any filters must again handle arguments through the $args variable.
Pester v5.3.0
Find the Examples41and42.Tests.ps1³⁴ file used in this example on the Extras³⁵ repository
on GitHub.
³⁴https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/blob/main/Edition-01/Mocking/
DataStoreDemo/Examples41and42.Tests.ps1
³⁵https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/
# Example 44a:
LCID Name DisplayName
---- ---- -----------
2057 en-GB English (United Kingdom)
# Example 44b:
LCID Name DisplayName
---- ---- -----------
1931 en-mm Mocked Culture
# Example 44c:
LCID Name DisplayName
---- ---- -----------
1033 en-US English (United States)
Another feature of mocked objects is that you can retrieve the method call history for all methods.
By default, Pester stores the history as a property of the mocked object, with the same name as
the method but prefixed with an underscore _.
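Assuming Example 44 stored the mocked [cultureinfo] object in $MockCulture and already called the mocked method once with no arguments, a second call and the history lookup would look like this:

$MockCulture.GetConsoleFallbackUICulture($true)   # a second call, this time with an argument

$MockCulture._GetConsoleFallbackUICulture         # inspect the recorded calls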
Call Arguments
---- ---------
1 {}
2 {True}
The two method calls (the first made in Example 44) now show up in the
_GetConsoleFallbackUICulture member of the mocked object. Each call has a number starting
from 1 and an array of arguments that the call passed to the mocked method.
To change the prefix that Pester uses for this method call history, use the -
MethodHistoryPrefix parameter.
Example 46: Using an alternate history prefix for mocked .NET object methods
1 $MockSpan = New-MockObject -Type timespan -Methods @{
2 Test = {
3 param ($Value)
4 $Value
5 }
6 } -MethodHistoryPrefix '##'
7
8 $MockSpan.Test('Allons-y, Alonzo!')
9
10 $MockSpan.'##Test'
Allons-y, Alonzo!
Call Arguments
---- ---------
1 {Allons-y, Alonzo!}
As is apparent in Example 46, you can use type accelerators with New-MockObject. In fact, you
can pass almost all the type names you would to New-Object.
There are some exceptions. The .NET type must support uninitialized object creation by the
serialization library.³⁶ However, since mock objects are most useful for complex data types, you
shouldn’t need to mock primitives.
For objects that can’t be created without calling a constructor, you can instantiate the object
normally and pass it to New-MockObject using the -InputObject parameter. This doesn’t work
with all .NET types, so checking outside of a Pester test might be necessary.
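A sketch of the -InputObject approach, using [System.IO.FileInfo] purely as an illustration of a type with no parameterless constructor:

$Path = Join-Path ([IO.Path]::GetTempPath()) 'example.txt'
$Real = [System.IO.FileInfo]::new($Path)              # must be constructed normally
$MockFile = New-MockObject -InputObject $Real -Methods @{
    Delete = { }                                      # no-op, so tests never touch the disk
}

$MockFile.Delete()
$MockFile._Delete                                     # call history works the same way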
5. Unit Testing
5.1 Why Unit Testing?
Whether you’re writing PowerShell scripts or modules for an enterprise or simply for fun, you
should use a Git repository (repo) to store your PowerShell files. Even if it’s only you accessing
the repo, this habit is worth forming. Storing files in Git ensures that there’s a traceable history
of all changes, and it doubles as a backup strategy, giving you the power to revert any code
changes.
While people usually store their scripts in Git repos, they often overlook the need to support
unit testing. Usually, this is due to time or budget constraints in enterprise environments, or, for
personal projects, because developers don’t consider adding tests a worthwhile use of their time.
So, why should you do this? Why should you care? Spending the time and effort to create unit
tests may seem overkill and unnecessary. For small repos, this may be true. But often, over time,
scripts in repos grow in size and complexity. Adding support for unit testing early on makes
creating new tests easier; you can simply update existing ones as repo complexity increases.
If you delay the addition of unit tests until your repo is more complex, adding them is more
complicated and time-consuming. A bonus is the warm and fuzzy feeling you’ll get when your
repo tests all pass!
5.4 Pester
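To install the latest Pester release from the PowerShell Gallery, a command along these lines is typically used:

Install-Module -Name Pester -Force -SkipPublisherCheck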
The -Force and -SkipPublisherCheck switches are usually required to install the
latest version of Pester on Windows. This is because version 3.4.0 is installed by default
and is the latest version that Microsoft signed. Later versions are signed with a different
certificate, so adding these switches is necessary.⁸
If you’re struggling to install the latest version of Pester, refer to the installation documentation⁹
for alternative installation approaches and troubleshooting.
Run Get-InstalledModule to confirm a successful installation, and Get-Module to check the
version of Pester loaded into the PowerShell session.
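For example (Get-Module only reports Pester once the module has been imported into the session):

Get-InstalledModule -Name Pester
Get-Module -Name Pester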
# Example 2a:
Version Name Repository Description
------- ---- ---------- -----------
5.3.3 Pester PSGallery Pester provides a framework for running…
# Example 2b:
ModuleType Version PreRelease Name ExportedCommands
---------- ------- ---------- ---- ----------------
Script 5.3.3 Pester {Add-ShouldOperator, AfterAll, …
⁸Pester Team. (2022, Jun. 22). Installation and Update. Pester Docs. [Online]. Available: https://pester.dev/docs/introduction/
installation. [Accessed: Sep. 09, 2022].
⁹https://pester.dev/docs/introduction/installation
/src/functions/Calculator.ps1
/tests/functions/Calculator.Tests.ps1
This second approach is more useful when you have dozens of files to test.
You can use your own approach to store Pester test files, but not following one of the standard
approaches adds complexity for other maintainers who are familiar with the standard. There
can also be issues with code editors or plugins that expect particular filenames and locations for
determining how to handle and display files. For example, the PowerShell plugin within Visual
Studio Code won’t recognize Pester files correctly if the filename doesn’t end with .Tests.ps1.
¹⁰Pester Team. (2022, Jun. 19). File placement and naming. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/file-
placement-and-naming. [Accessed: Aug. 18, 2022].
Below is a sample Pester test file demonstrating simple, clear, and clean test cases. Documentation
is barely required, as the code itself reads almost like English.
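A minimal test in that spirit, assuming a Calculator.ps1 next to the test file that defines the Invoke-Addition function used later in Example 6, could look like this:

BeforeAll {
    . $PSCommandPath.Replace('.Tests.ps1', '.ps1')
}

Describe 'Invoke-Addition' -Tag 'Unit' {

    It 'Returns 15 given the numbers 1 to 5' {
        # Act
        $Result = Invoke-Addition -Numbers 1, 2, 3, 4, 5

        # Assert
        $Result | Should -Be 15
    }

}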
From here, you can define more complex tests. This chapter touches on some more common
enhancements that you can add. However, if you require more detail, refer to the Pester
documentation¹¹.
5.4.4.2.1 Tags
You can add tags to your tests as a parameter to the Describe, Context, and It blocks to provide
grouping.¹² A good example of this is when you only want to run tests tagged with “Unit” locally
and leave “Integration” tagged tests for automated deployments.
¹¹https://pester.dev/docs/quick-start
¹²Pester Team. (2022, Jun. 19). Tags. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/tags. [Accessed: Sep. 04, 2022].
5.4.4.3 -TestCases
If you need to test many scenarios, the -TestCases parameter is more suitable than creating
a separate It block for each one.¹³ Defining the -TestCases parameter with rows of data in a
single It block allows multiple tests to be executed with one code block. You can use an additional
field in the -TestCases data to display accurate test information.
Example 6: Using test cases to reuse test code with various inputs
1 Describe 'Invoke-Addition' -Tag 'Unit' {
2
3 It 'Returns a result of <Sum> given <Desc> (<Values>)' -TestCases @(
4 @{ Desc = 'numbers 1 to 5'; Values = @(1, 2, 3, 4, 5); Sum = 15 }
5 @{ Desc = 'negative numbers'; Values = @(-2, -4, -9); Sum = -15 }
6 @{ Desc = 'only two numbers'; Values = @(10, 5); Sum = 15 }
7 @{ Desc = 'only one number'; Values = @(64); Sum = 64 }
8 ) {
9 # Act
10 $Result = Invoke-Addition -Numbers $Values
11
12 # Assert
13 $Result | Should -Not -BeNullOrEmpty
14 $Result | Should -Be $Sum
15 }
16
17 }
¹³Pester Team. (2019, Jan. 09). It - TestCases. Pester Docs. [Online]. Available: https://pester.dev/docs/commands/It#-testcases. [Ac-
cessed: Apr. 26, 2022].
5.4.5 Mocking
Code that you’re unit testing can sometimes call other external methods or servers. These
external dependencies may not always be available or could have access limitations; for example,
rate limits or long processing times. Pester can mock out calls to these dependencies to test the
internal code without worrying about any external impact.
For more detail on how to mock external interfaces and even internal cmdlets, refer to the
chapters on Mocking and The AAA Approach.
Several text editors are suitable for PowerShell code, whether you’re developing on Windows,
Linux, or macOS. The most used (and best supported) PowerShell editor is Visual Studio Code¹⁴. It’s
free and open source,¹⁵ cross-platform, flexible, and robust, thanks to a vast library of extensions.
With the free PowerShell plugin¹⁶, Pester support is built into the editor itself. You can see it by
opening test files in VS Code: Run Tests | Debug Tests should appear above any Describe,
Context or It blocks. These two links will execute the tests defined in the related block when
clicked.
The Run Tests and Debug Tests links use different levels of output verbosity. Debug
Tests sets the -Output parameter to Diagnostic to provide additional logging details that assist
with debugging the tests.
The output of the tests will appear in the Terminal output pane, which is usually below the editor.
The Visual Studio Code approach is ideal for testing a single Describe block. To run all the tests
or define advanced settings like filtering options, use the command line approach.
The command to run Pester tests is Invoke-Pester. Using the example functions defined above,
you can run tests for them by passing the file path of a test file or a wildcard pattern.
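For example, from the folder that contains the test files:

Invoke-Pester -Path ./Calculator.Tests.ps1
Invoke-Pester -Path *.Tests.ps1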
This is suitable for running all the tests, but it doesn’t show much detail in the output. To get
more detail, add the -Output Detailed parameter.
Example 8: Running tests with the -Output parameter to get detailed results
Invoke-Pester -Path *.Tests.ps1 -Output Detailed
Pester v5.3.3
The output provides more details for each test in the Describe block.
To filter which tests are executed by tag, use the -TagFilter and -ExcludeTagFilter
parameters. Since all tests in this chapter are tagged with ‘Unit’, filtering on a different tag
means no tests run.
Example 9: Running only the tests in blocks with the ‘Integration’ tag
Invoke-Pester -Path *.Tests.ps1 -TagFilter Integration -Output Detailed
Pester v5.3.3
¹⁷Pester Team. (2022, Jun. 19). Configuration. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/configuration. [Accessed:
Sep. 04, 2022].
¹⁸Pester Team. (2022, Apr. 29). Invoke-Pester. Pester Docs. [Online]. Available: https://pester.dev/docs/commands/Invoke-Pester.
[Accessed: Sep. 11, 2022].
To learn more about Pester configuration, refer to the Configuration¹⁹ help article on Pester Docs.
For details on other parameters of Invoke-Pester, refer to the Invoke-Pester²⁰ command article.
The following YAML script can be added to a repo containing the PowerShell scripts used in the
example above. The code below assumes Calculator.ps1 and Calculator.Tests.ps1 files
in a scripts folder in the root folder of a repo stored in Azure DevOps.
There are two steps in this workflow. The first is a PowerShell step that runs the Invoke-
Pester command similar to how you would run it locally. The main difference to the locally
run command is the addition of the -OutputFile './Test-Pester.xml', which will store the
output of the test into the current folder in the default NUnit XML format.²⁸
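A sketch of the kind of PowerShell that first step runs, shown here with Pester 5’s configuration object rather than the legacy -OutputFile parameter, both of which produce the same NUnit XML result file:

$Config = New-PesterConfiguration
$Config.Run.Path = './scripts'
$Config.TestResult.Enabled = $true
$Config.TestResult.OutputPath = './Test-Pester.xml'   # NUnitXml is the default format
Invoke-Pester -Configuration $Config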
Setting the failOnStderr argument to false means the PowerShell step won’t fail if there’s an
error. This allows the second step to execute and store the failed test report as a build artifact
for later review.
²⁶Microsoft. (2022, Apr. 27). Template types & usage. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/azure/de-
vops/pipelines/process/templates?view=azure-devops. [Accessed: Sep. 16, 2022].
²⁷YAML Website Contributors. (2021, Nov. 17). The Official YAML Web Site. YAML.org. [Online]. Available: https://yaml.org/.
[Accessed: Sep. 11, 2022].
²⁸The NUnit Project. (2022). XML Formats. NUnit Docs. [Online]. Available: https://docs.nunit.org/articles/nunit/technical-
notes/usage/XML-Formats.html. [Accessed: Sep. 11, 2022].
The second step uses a preconfigured task and contains properties, including the test output type
(NUnit) and the location of the first step’s output file to use for the test data. This step will fail
the build if there’s a test failure in the results (failTaskOnFailedTests), as it’s not ideal to
continue with the workflow defined in the pipeline.
If you prefer not to run a raw PowerShell command in the Azure pipeline, there’s a
preconfigured task²⁹ available from the Pester project.
The pipeline doesn’t have a trigger, because a Git branch policy will trigger it. To make it
selectable in a Git branch policy, you must add it to Azure DevOps first.
• In Azure DevOps, go to Pipelines → Pipelines and click on the New Pipeline button.
• Select Azure Repos Git.
• Select the name of the repo.
• Select Existing Azure pipelines YAML file.
• Select the branch and the path and click on Continue.
• You can review the resulting YAML code before clicking Run to test the pipeline.
To configure this in Azure DevOps, navigate to: Azure DevOps Repo → Repo Name → Branches
→ main → More options (on right end of row) → Branch Policies.
• Click the Plus (+) Sign to get to the Add build policy screen and set the following options.
– Trigger: Automatic
– Policy requirement: Required
– Build Expiration: Immediately
– Display Name: Run Pester Tests
When there’s a new pull request in the repo, it’ll automatically run Pester tests and store the
results in the Tests tab of the build summary screen. The pull request can’t merge unless all tests
pass.
5.4.8.2 GitHub
GitHub uses GitHub Actions for automation,³⁰ which stores configuration in YAML files in
the .github/workflows folder in a Git repository. GitHub Actions is more of an automation
platform, meaning the list of event triggers is extensive. CI/CD events are a small subsection of
what’s available.
You can place the following YAML script in the workflows folder of a Git repo to add a GitHub
Action that runs Pester tests.
³⁰GitHub. (2022, Aug. 25). Understanding GitHub Actions. GitHub Docs. [Online]. Available: https://docs.github.com/en/actions/learn-
github-actions/understanding-github-actions. [Accessed: Sep. 11, 2022].
There are two triggers for this action. The action runs when a pull request is created or updated,
and you can start it manually via the GitHub web interface or API. It contains a PowerShell step
that calls the Invoke-Pester command and displays the result. A second step exports the test
result file.
GitHub doesn’t have native support for viewing test results, but you can publish the test file as an
artifact.³¹ You can then use a third-party or custom action to process the NUnit XML test results
file. You can also use the -PassThru parameter of Invoke-Pester and display the results as an
annotation in the workflow run log by writing the ::notice workflow command to the standard
output.³² Example 13 uses this approach to display the test results.
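For instance, a step’s script could combine -PassThru with the ::notice workflow command (the message text here is illustrative):

$Result = Invoke-Pester -Path ./scripts/*.Tests.ps1 -PassThru
Write-Host ('::notice::Pester: {0} of {1} tests passed' -f $Result.PassedCount, $Result.TotalCount)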
• On the page for your action, click the workflow run you want to view the job summary
from. Each run is given a unique number.
• On the job summary page, scroll to the Artifacts list and click the artifact you want to
download. In this example, the name is test-results.
The downloaded file is a compressed archive containing the Test-Pester.xml file generated
by Pester.
³¹GitHub. (2022, Jun. 06). Storing workflow data as artifacts. GitHub Docs. [Online]. Available: https://docs.github.com/en/actions/
using-workflows/storing-workflow-data-as-artifacts. [Accessed: Sep. 12, 2022].
³²GitHub. (2022, Sep. 02). Workflow commands for GitHub Actions. GitHub Docs. [Online]. Available: https://docs.github.com/en/
actions/using-workflows/workflow-commands-for-github-actions. [Accessed: Sep. 12, 2022].
The if: conditional with always() in the second step in Example 13 causes the step to run
regardless of any earlier ones failing. This means the test results are always published, even on
test failure. The errors that Pester writes to the error stream also show up as error annotations.
That’s it. GitHub requires nothing else to create the action or link it to a repo’s build process.
From this point of view, it’s easier to create automated workflows on GitHub than on Azure
DevOps.
Example 14: Using the job summaries feature in a GitHub Actions workflow step
1 - name: Perform all Pester Unit Tests from the Scripts folder
2 shell: pwsh
3 run: |
4 $Config = New-PesterConfiguration
5 $Config.Run.Path = 'scripts/*.Tests.ps1'
6 $Config.Filter.Tag = 'Unit'
7 $Config.Output.Verbosity = 'Detailed'
8 $Config.Run.PassThru = $true
9 $Result = Invoke-Pester -Configuration $Config
10 if ($null -eq $Result) { exit 1 }
11 $sb = [System.Text.StringBuilder]::new()
12 $sb.AppendLine('# Pester Test results')
13 $sb.AppendLine('## Summary')
14 $sb.AppendLine('|Item|Result|')
³³https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-job-summary
³⁴GitHub. (2022, May. 09). Supercharging GitHub Actions with Job Summaries. GitHub Docs. [Online]. Available: https://github
.blog/2022-05-09-supercharging-github-actions-with-job-summaries/. [Accessed: Sep. 12, 2022].
15 $sb.AppendLine('|----|----|')
16 $sb.AppendLine("|**Pester Version**|$($Result.Version)|")
17 $sb.AppendLine("|**PowerShell Version**|$($Result.PSVersion)|")
18 $sb.AppendLine("|**Executed At**|$($Result.ExecutedAt)|")
19 $sb.AppendLine("|**Overall Result**|$($Result.Result)|")
20 $sb.AppendLine("|**Total**|$($Result.TotalCount)|")
21 $sb.AppendLine("|**Passed**|$($Result.PassedCount)|")
22 $sb.AppendLine("|**Failed**|$($Result.FailedCount)|")
23 $sb.AppendLine("|**Skipped**|$($Result.SkippedCount)|")
24 $sb.AppendLine("|**Not Run**|$($Result.NotRunCount)|")
25 $sb.AppendLine()
26 $sb.AppendLine('## Details')
27 $sb.AppendLine('|Test Name|Skip|Duration (secs)|Result|')
28 $sb.AppendLine('----|----|----|----|')
29 $Result.Tests | ForEach-Object {
30 $sb.AppendLine(
31 "|$($_.ExpandedName)|$($_.Skip)|" +
32 "$($_.Duration.ToString('ss\.fff'))|$($_.Result)|"
33 )
34 }
35 $sb.ToString() | Out-File -Path $ENV:GITHUB_STEP_SUMMARY
The output in this example is relatively simple, but with the advanced formatting features in
Markdown and with Unicode support, you can create comprehensive reports this way.
This completes the comparison of Azure DevOps and GitHub for running Pester tests
automatically as part of the pull request approval process.
5.5 Conclusion
In this chapter, you’ve seen why Pester is the recommended tool of choice for creating tests for
PowerShell scripts and modules. By exploring what Pester offers, you can create complex and
flexible test code quickly. You can execute tests locally either via the command line or from
within popular text editors. You’ve also learned how to automate build pipelines with Pester for
Azure DevOps and GitHub to ensure high code quality.
³⁵https://pester.dev/docs/quick-start
³⁶https://pester.dev/docs/usage/configuration
³⁷https://docs.github.com/en/actions
³⁸https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/?view=azure-pipelines
³⁹https://resources.github.com/devops/
⁴⁰https://azure.microsoft.com/products/devops/
⁴¹https://www.guru99.com/back-box-vs-white-box-testing.html
⁴²https://github.com/features/actions
⁴³https://code.visualstudio.com/
⁴⁴https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell
6. Parameterized Testing
This chapter describes parameterized Pester tests to help you on your PowerShell journey. In
PowerShell, you declare parameters with a param block in script blocks, functions, and scripts.
Parameters define, pass, limit, or validate inputs for your script. You can learn more about advanced
function parameters¹ and cmdlet parameters² at Microsoft Docs. A parameterized test is a test
that accepts external data as input. Use parameterized tests to avoid rewriting similar tests. If the
only difference between each of your tests is its inputs, using a parameterized test reduces code
length, improves readability, increases code coverage, and allows for faster changes.
The Pester module has great documentation and everything in this chapter has been inspired by
the original Data driven tests³ article. If you aren’t familiar with the module, there is an excellent
quick start guide⁴.
Check which version of Pester is available on your machine:
$(Get-Command Invoke-Pester).Version
Use Install-Module with the -Force parameter to install the latest version of Pester side by side
with any already-installed versions:
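For example (per the Pester installation docs, -SkipPublisherCheck is typically needed on Windows, where an older, differently signed Pester 3.4.0 ships in-box):

Install-Module -Name Pester -Force -SkipPublisherCheck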
The Pester documentation also has a guide on how to uninstall the built-in version⁷ of Pester.
Create a file named Get-PowerShellDate.ps1 and add the following function to the file:
Example 1: A simple function that returns the release dates of various PowerShell versions
1 function Get-PowerShellDate {
2 $VersionList = @{
3 'Windows PowerShell 1.0' = 'Nov 2006'
4 'Windows PowerShell 2.0' = 'Jul 2009'
5 'Windows PowerShell 3.0' = 'Oct 2012'
6 'Windows PowerShell 4.0' = 'Oct 2013'
7 'Windows PowerShell 5.0' = 'Feb 2016'
8 'Windows PowerShell 5.1' = 'Aug 2016'
9 'PowerShell Core 6.0' = 'Jan 2018'
10 'PowerShell Core 6.1' = 'Sep 2018'
11 'PowerShell Core 6.2' = 'Mar 2019'
12 'PowerShell Core 7.0' = 'Mar 2020'
13 'PowerShell Core 7.1' = 'Nov 2020'
14 }
15 return $VersionList[$args]
16 }
This function contains a hash table⁸ that will return the release date of each PowerShell version.
The automatic variable⁹ $args is an array of undeclared positional parameters passed to the
function.
⁷https://pester.dev/docs/introduction/installation#removing-the-built-in-version-of-pester
⁸https://learn.microsoft.com/en-us/powershell/scripting/learn/deep-dives/everything-about-hashtable
⁹https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_automatic_variables#args
Nov 2006
Nov 2006
Jul 2009
To test your new function, create a file named Get-PowerShellDate.Tests.ps1 file and add
the following code to the file:
This is the Pester test. The first part is a BeforeAll block where you set prerequisites for your
tests.¹⁰ In this case, it dot sources the ps1 file containing your function under test. The second
part is a Describe block, which groups related tests. The third part is an It block, where your
test runs. The part of the test to the left of the pipeline is your action, and the part of the test to
the right of the pipeline is your assertion. The result of your action is compared to your assertion,
and the test passes if they match. In this case, Get-PowerShellDate 'Windows PowerShell
1.0' is expected to return Nov 2006. Because Nov 2006 matches what your assertion expects,
the test should pass.
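A minimal version of such a test file looks something like this sketch:

BeforeAll {
    . $PSCommandPath.Replace('.Tests.ps1', '.ps1')
}

Describe 'Get-PowerShellDate' {
    It 'Returns Nov 2006 for Windows PowerShell 1.0' {
        Get-PowerShellDate 'Windows PowerShell 1.0' | Should -Be 'Nov 2006'
    }
}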
From the PowerShellDate folder, run the test with Invoke-Pester:
¹⁰Pester Team. (2019, Feb. 05). BeforeAll (v4). Pester Docs. [Online]. Available: https://pester.dev/docs/v4/commands/BeforeAll.
[Accessed: Apr. 26, 2022].
Add the -Output parameter with the value Detailed, and rerun the test to see more detailed
output:
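For example, from the same PowerShellDate folder:

Invoke-Pester -Output Detailed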
The detailed output includes the result, name, and execution time of each of your It blocks.
6.2.1 -ForEach
With the -ForEach parameter (also available through its alias, -TestCases), you can decouple your
test data from your test actions and assertions.¹¹ Your actions and assertions are written once and
reused for each set of data you need to test.
To demonstrate how this improves the readability and maintainability of your tests,
first add an It block for each key/value pair in your function’s hash table. Your Get-
PowerShellDate.Tests.ps1 file should look like this:
You now have an action for all valid input being tested against an assertion for the action’s
expected output. However, your data are tightly coupled with your actions and assertions. If you
want to change your action or assertion, each It block will have to be updated with your change.
Update your tests to use a single It block with the -ForEach parameter and hash tables
containing your data:
The -ForEach parameter accepts an array of hash tables¹². The hash tables contain a key/value
pair for each variable in your test. In this case, the $Name and $Date variables in the test will be
¹²https://devblogs.microsoft.com/scripting/combine-arrays-and-hash-tables-in-powershell-for-fun-and-profit/
populated with values of the ‘Name’ and ‘Date’ keys in the hash tables. The <Name> and <Date>
templates in the It block’s description will also be populated with these values.
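Put together, the It block might look something like this sketch (only two of the data sets are shown, and the BeforeAll block stays the same):

Describe 'Get-PowerShellDate' {
    It 'Returns <Date> for <Name>' -ForEach @(
        @{ Name = 'Windows PowerShell 1.0'; Date = 'Nov 2006' }
        @{ Name = 'PowerShell Core 7.1';    Date = 'Nov 2020' }
    ) {
        Get-PowerShellDate $Name | Should -Be $Date
    }
}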
Run your tests again with detailed output:
Because -ForEach and its array of hash tables is in your It block, a separate test is executed for
each hash table.
The _ inside <_> is equivalent to $_ or $PSItem.¹⁴ It takes each value located in the array
@('Post','B2F1','B2F2','Contoso','Fabrikam') and passes it to the respective It block.
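A sketch of that pattern, with a hypothetical assertion just to show the shape:

Describe 'Building names' {
    It '<_> contains at least one character' -ForEach @(
        'Post', 'B2F1', 'B2F2', 'Contoso', 'Fabrikam'
    ) {
        $_ | Should -Not -BeNullOrEmpty
    }
}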
Another example using templates in Describe and It blocks:
¹³https://pester.dev/docs/usage/data-driven-tests#using--templates
¹⁴Microsoft. (2022, Jul. 07). About Automatic Variables (Microsoft.PowerShell.Core) - $_. Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_automatic_variables#_. [Accessed: Aug.
15, 2022].
The title variable is now showing its value and also the variables Date and Name. This is helpful
when reading detailed results. In Pester version v5, templates support dot navigation¹⁵. You can
use dot navigation to retrieve values from nested objects.
...
Describing Building
[+] Post does not host Contoso 19ms (10ms|9ms)
Tests completed in 482ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 0
The object defined as a hashtable after the -ForEach displays the third nested level correctly:
this third value, ‘Contoso’, can be seen in the detailed output.
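A sketch of dot navigation in a test name, using hypothetical data (the assertion is only illustrative):

Describe 'Building' {
    It '<Building> does not host <Tenant.Name>' -ForEach @(
        @{ Building = 'Post'; Tenant = @{ Name = 'Contoso' } }
    ) {
        $Building | Should -Not -Be $Tenant.Name
    }
}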
6.2.3 BeforeDiscovery
There are two execution phases¹⁶ in Pester v5. These two phases are named discovery and run¹⁷.
Place code required to set up your test code inside the BeforeDiscovery block.¹⁸ This code is
executed during the discovery phase of Pester execution. Your test code is executed during the
run phase of Pester execution. The results from code executed during the discovery phase are
available to your tests during the run phase.
Move the array of hash tables from the -Foreach parameter in your It block to the BeforeDis-
covery block:
Example 13: Using the BeforeDiscovery section to initialize variables that must be available in both the discovery
and test phases
1 BeforeAll {
2 . $PSCommandPath.Replace('.Tests.ps1', '.ps1')
3 }
4 BeforeDiscovery {
5 $title = 'function PowerShellDate '
6 $HList = [System.Collections.Generic.List[Hashtable]]::new()
7 $HList.Add(@{Name = 'Windows PowerShell 1.0'; Date = 'Nov 2006'})
8 $HList.Add(@{Name = 'Windows PowerShell 2.0'; Date = 'Jul 2009'})
9 $HList.Add(@{Name = 'Windows PowerShell 3.0'; Date = 'Oct 2012'})
10 $HList.Add(@{Name = 'Windows PowerShell 4.0'; Date = 'Oct 2013'})
11 $HList.Add(@{Name = 'Windows PowerShell 5.0'; Date = 'Feb 2016'})
12 $HList.Add(@{Name = 'Windows PowerShell 5.1'; Date = 'Aug 2016'})
13 $HList.Add(@{Name = 'PowerShell Core 6.0'; Date = 'Jan 2018'})
14 $HList.Add(@{Name = 'PowerShell Core 6.1'; Date = 'Sep 2018'})
15 $HList.Add(@{Name = 'PowerShell Core 6.2'; Date = 'Mar 2019'})
16 $HList.Add(@{Name = 'PowerShell Core 7.0'; Date = 'Mar 2020'})
17 $HList.Add(@{Name = 'PowerShell Core 7.1'; Date = 'Nov 2020'})
18 }
19 Describe "$title" {
20 It 'Returns <Date> for <Name>' -ForEach $HList {
21 Get-PowerShellDate $Name | Should -Be $Date
22 }
23 }
The variable title is set in the BeforeDiscovery block and displayed in the Describe block.
HList is a list of hashtables and is also available for all Describe and It blocks.
¹⁶https://pester.dev/docs/usage/data-driven-tests#execution-is-not-top-down
¹⁷https://pester.dev/docs/usage/discovery-and-run
¹⁸Pester Team. (2021, Sep. 12). BeforeDiscovery. Pester Docs. [Online]. Available: https://pester.dev/docs/commands/BeforeDiscovery.
[Accessed: Aug. 18, 2022].
6.2.3.1 BeforeAll
BeforeAll¹⁹ is where you import the file that contains your function and perform any setup needed
before the tests run.
It’s usually this:
1 BeforeAll {
2 . $PSCommandPath.Replace('.Tests.ps1', '.ps1')
3 }
Storing functions and tests in separate folders is a common pattern in PowerShell module
development.²⁰ You can add another Replace() method to $PSCommandPath to handle this
scenario:
1 BeforeAll {
2 . $PSCommandPath.Replace('\Tests\', '\Src\').Replace('.Tests.ps1', '.ps1')
3 }
6.2.3.2 AfterAll
AfterAll²¹ follows the same principle as BeforeAll, but it runs after the test. You usually perform
clean-up activities, such as deleting temporary files, in the AfterAll block.
BeforeEach²² and AfterEach²³ are similar in function to BeforeAll and AfterAll, but they run once
for every It block contained within the current Context or Describe block.²⁴
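A small sketch of the per-test setup and teardown pattern (New-TemporaryFile is only an example of a resource worth cleaning up):

Describe 'Temporary file handling' {
    BeforeEach {
        # Runs before every It block in this Describe
        $TempFile = New-TemporaryFile
    }
    AfterEach {
        # Runs after every It block; remove the file created above
        Remove-Item -Path $TempFile -ErrorAction SilentlyContinue
    }
    It 'Creates an empty file' {
        (Get-Item -Path $TempFile).Length | Should -Be 0
    }
}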
¹⁹https://pester.dev/docs/commands/BeforeAll
²⁰Pester Team. (2022, Jun. 19). File placement and naming. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/file-
placement-and-naming. [Accessed: Aug. 18, 2022].
²¹https://pester.dev/docs/commands/AfterAll
²²https://pester.dev/docs/commands/BeforeEach
²³https://pester.dev/docs/commands/AfterEach
²⁴Pester Team. (2022, Apr. 20). Setup and teardown - BeforeEach. Pester Docs. [Online]. Available: https://pester.dev/docs/usage/setup-
and-teardown#beforeeach. [Accessed: Apr. 26, 2022].
6.2.4 Param
As described earlier, you can use the full Param syntax in your Pester test files.²⁵ The Param block
will inherit the same validation properties, parameter sets, etc. available in scripts and functions.
This allows you to control the type of input and detect potential errors before you run any tests.
Create a new test file named Get-PSVersionTable.Tests.ps1 that contains the following:
The $ImportPath parameter is Mandatory, must be of type [string], and must pass the
validation script {Test-Path $_}. In the BeforeAll block, the $PSVersionTableContent
variable is populated using Import-Clixml and the value of $ImportPath. The action on the
left of the pipeline in the It block gets the PSVersion property of the $PSVersionTable automatic
variable. The assertion on the right compares the action's result with a string built from the imported
SemanticVersion object in $PSVersionTableContent.
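Based on that description, the test file might look something like this sketch:

param (
    [Parameter(Mandatory)]
    [ValidateScript({ Test-Path $_ })]
    [string] $ImportPath
)

BeforeAll {
    $PSVersionTableContent = Import-Clixml -Path $ImportPath
}

Describe 'PSVersionTable' {
    It 'Runs on the same PowerShell version as the exported data' {
        "$($PSVersionTable.PSVersion)" |
            Should -Be "$($PSVersionTableContent.PSVersion)"
    }
}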
You can use parameters to reuse the same test code with different actions and assertions. To
invoke test files with parameters, you’ll have to create a Pester container in version v5, or use
the -Script parameter of Invoke-Pester in v4.
²⁵Pester Team. (2022, Apr. 20). Data driven tests - Providing external data to tests. Pester Docs. [Online]. Available: https://pester
.dev/docs/usage/data-driven-tests#providing-external-data-to-tests. [Accessed: Apr. 26, 2022].
²⁶Pester Team. (2021, Sep. 12). New-PesterContainer. Pester Docs. [Online]. Available: https://pester.dev/docs/commands/New-
PesterContainer. [Accessed: Apr. 26, 2022].
Example 18: Running the test in PowerShell 7 with the XML file from PowerShell 5.1
1 $ContainerParams = @{
2 Path = '.\Tests\Get-PSVersionTable.Tests.ps1'
3 Data = @{ ImportPath = '.\Import\PSVersiontable5.1.xml' }
4 }
5 $container = New-PesterContainer @ContainerParams
6 Invoke-Pester -Container $container -Output Detailed
This test fails because 5.1 differs from 7.1.3. You don't need two .ps1 files to test both versions,
because all the work is done in the BeforeAll block. With New-PesterContainer you can supply
-Path and -Data as inputs; then, inside the test, use -ForEach to generate many tests. This
concept is the core of this chapter.
6.2.6 PesterConfiguration
Finally, parameterized tests can generate a lot of output, which is best managed through
[PesterConfiguration]²⁷. The Pester configuration object helps you manage all kinds of scenarios. Here
is an example running the tests from examples 17 and 18 with diagnostic verbosity.
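A sketch of such a configuration, reusing the $container variable from Example 18:

$Config = New-PesterConfiguration
$Config.Run.Container = @($container)
$Config.Output.Verbosity = 'Diagnostic'
$Config.TestResult.Enabled = $true
Invoke-Pester -Configuration $Config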
There is also the OutputFormat option that can handle NUnitXml, NUnit2.5 or JUnitXml.
²⁷https://pester.dev/docs/usage/Configuration
By default, the test result file is written to testResults.xml in the current directory.
6.3 Pester v4
If you are using Pester v3 or v4, you will need to use the -Script parameter.³⁰ It accepts a hashtable
with two keys, Path and Parameters. Here is an example for Get-PSVersionTable that works
with these Pester versions:
²⁸https://pester.dev/docs/commands/New-PesterConfiguration
²⁹Pester Team. (2021, Apr. 30). New-PesterConfiguration. Pester Docs. [Online]. Available: https://pester.dev/docs/commands/New-
PesterConfiguration. [Accessed: Apr. 26, 2022].
³⁰Pester Team. (2021, Apr. 17). Invoke-Pester - Parameters. Pester Docs. [Online]. Available: https://pester.dev/docs/v4/commands/
Invoke-Pester#-script. [Accessed: Apr. 26, 2022].
1 $InvokePesterScript = @{
2 Path = 'D:\PowerShellDate\Tests\Get-PSVersionTable.Tests.ps1'
3 Parameters = @{
4 Importpath = 'D:\PowerShellDate\Tests\Import\PSVersiontable5.1.xml'
5 }
6 }
7 Invoke-Pester -Script $InvokePesterScript
Pester v4.10.1
Executing all tests in 'D:\PowerShellDate\Tests\Get-PSVersionTable.Tests.ps1'
Pester v4.10.1
Executing all tests in 'D:\PowerShellDate\Tests\Get-PSVersionTable.Tests.ps1'
True
As you can see, the idea is the same: -Parameters provides the input variables and -Path selects
the test file. You may need to adjust the syntax based on your environment.
³¹https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_splatting
6.4 Outputs
With many data sets as input, the number of test results grows accordingly. Here, Get-
Service.Tests.ps1 tests whether each service on the machine is currently running; not very useful
in real-life scenarios, but practical for this example.
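The test file itself can be as small as this sketch (each service object becomes one data set):

BeforeDiscovery {
    $Services = Get-Service
}

Describe 'Windows services' {
    It 'Service <_.Name> is running' -ForEach $Services {
        $_.Status | Should -Be 'Running'
    }
}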
1 $PesterParams = @{
2 Path = '.\Tests\Get-Service.Tests.ps1'
3 Output = 'Detailed'
4 PassThru = $true
5 }
6 $ResultPester = Invoke-Pester @PesterParams
$ResultPester is a [Pester.Run] object and contains one test result for each service on your
machine.³² You could select only the failed tests and export them to CSV:
$ResultPester.Failed |
Select-Object ExpandedName,Result,ErrorRecord,Block,ExecutedAt,Duration
With the ImportExcel³³ module, you can create an Excel report with conditional formatting to
highlight your Pass/Fail results:
³²Pester Team. (2021, Mar. 07). Pester - Run.cs. pester/Pester on GitHub. [Online]. Available: https://github.com/pester/Pester/blob/
main/src/csharp/Pester/Run.cs. [Accessed: Apr. 26, 2022].
³³https://github.com/dfinke/ImportExcel
Example 24: Exporting Pester result data to an Excel spreadsheet with ImportExcel
1 $ResultData = $ResultPester.Tests |
2 Select-Object ExpandedName, Result, { $_.ErrorRecord },
3 Block, ExecutedAt, Duration
4
5 $ResultData | Export-Excel -Path .\test.xlsx -ConditionalText $(
6 $TextPARAMs = @{
7 Text = 'Passed'
8 Range = 'B:B'
9 BackgroundColor = 'Green'
10 ConditionalTextColor = 'White'
11 }
12 New-ConditionalText @TextPARAMs
13 $TextPARAMs2 = @{
14 Text = 'Failed'
15 Range = 'B:B'
16 BackgroundColor = 'Red'
17 ConditionalTextColor = 'White'
18 }
19 New-ConditionalText @TextPARAMs2
20 )
Example 25: A parameterized test file (Get-FileContent.Tests.ps1) that generates tests for each line of a file
1 param (
2 [Parameter(Mandatory)]
3 [string] $FullName
4 )
5
6 BeforeDiscovery {
7 $FileContent = Get-Content -Path $FullName
8 $HList = [System.Collections.Generic.List[hashtable]]::new()
9 $L = 0
10 $FileContent.ForEach{
11 $L++
12 $HList.Add(@{
13 LineNumber = $L
14 LineLength = $PSItem.Length
15 LineContent = $PSItem
16 })
17 }
18 }
19
20 Describe 'FileContent' -ForEach $HList {
21 Context 'Line <LineNumber>' {
22 It 'Line <LineNumber> Should be less than 100 char' {
23 $PSItem.LineLength | Should -BeLessThan 100
24 }
25 It "Line '<LineContent>' Should Match 'o'" {
26 $PSItem.LineContent | Should -Match 'o'
27 }
28 }
29 }
Running the following will create an Excel file with the failed/passed results for each line:
1 $ContainerParams = @{
2 Path = '.\Tests\Get-FileContent.Tests.ps1'
3 Data = @{ FullName = '.\Import\psversiontable-pwsh.xml'}
4 }
5 $Container = New-PesterContainer @ContainerParams
6
7 $ResultPester = Invoke-Pester -Container $Container -Output None -PassThru
8
9 $ResultData = $ResultPester.Tests |
10 Select-Object ExpandedName, Result, { $_.ErrorRecord },
11 Block, ExecutedAt, Duration
12
13 $ResultData | Export-Excel -Path .\test.xlsx -ConditionalText $(
14 $TextPARAMs = @{
15 Text = 'Passed'
16 Range = 'B:B'
17 BackgroundColor = 'Green'
18 ConditionalTextColor = 'White'
19 }
20 New-ConditionalText @TextPARAMs
21 $TextPARAMs2 = @{
22 Text = 'Failed'
23 Range = 'B:B'
24 BackgroundColor = 'Red'
25 ConditionalTextColor = 'White'
26 }
27 New-ConditionalText @TextPARAMs2
28 )
Modify the test to accept multiple input files. The Get-FileContent.Tests.ps1 file should
look like this:
20 }
21 }
22 }
23
24 Describe 'FileContent' -ForEach $HList {
25 Context 'Line <LineNumber> from <FileName>' {
26 It 'Line <LineNumber> Should be less than 100 char' {
27 $PSItem.LineLength | Should -BeLessThan 100
28 }
29 It "Line '<LineContent>' Should Match 'o'" {
30 $PSItem.LineContent | Should -Match 'o'
31 }
32 }
33 }
1 $ContainerParams = @{
2 Path = '.\Tests\Get-FileContent.Tests.ps1'
3 Data = @{ FullNameList = @(
4 '.\Import\psversiontable-pwsh.xml'
5 '.\Import\psversiontable5.1.xml'
6 )}
7 }
The FullName parameter is now FullNameList and is typed [string[]] instead of [string].
The extra brackets in [string[]] change the parameter's type from a single string to a string
array. In the BeforeDiscovery block, a foreach loop iterates through
each path passed to the $FullNameList parameter. The $FileName variable is populated with
the file name of the current object in the loop, and added as an additional key/value in $HList.
The FileName key is then used in the Context block’s description to distinguish between the
files under test. The rest of the test code is the same as before. You now have a test script that
can accept a list of file paths and generate tests for each line in each file.
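For reference, the top of the modified file (the param block and the BeforeDiscovery loop) might look like this sketch; the Describe block shown above stays the same:

param (
    [Parameter(Mandatory)]
    [string[]] $FullNameList
)

BeforeDiscovery {
    $HList = [System.Collections.Generic.List[hashtable]]::new()
    foreach ($FullName in $FullNameList) {
        $FileName    = Split-Path -Path $FullName -Leaf
        $FileContent = Get-Content -Path $FullName
        $L = 0
        $FileContent.ForEach{
            $L++
            $HList.Add(@{
                FileName    = $FileName
                LineNumber  = $L
                LineLength  = $PSItem.Length
                LineContent = $PSItem
            })
        }
    }
}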
6.6 Conclusions
With parameters, you can write test code that is dynamic and reusable with external data from
any source. Parameter values can be combined with code executed during test discovery and with
the -ForEach parameter of Pester blocks such as Context and It, which execute during the run
phase. Your actions and assertions are written once, and Pester generates a separate test for each
data set passed to the tests. The results can be output in a variety of formats for further automated
or interactive processing to identify and fix errors in your code.
³⁵https://pester.dev/docs/v4/quick-start
³⁶https://pester.dev/docs/usage/data-driven-tests
³⁷https://pester.dev/docs/migrations/breaking-changes-in-v5
³⁸https://pester.dev/docs/migrations/v4-to-v5
³⁹https://pester.dev/docs/migrations/v3-to-v4
⁴⁰https://pester.dev/docs/additional-resources/articles
⁴¹https://pester.dev/docs/additional-resources/courses
⁴²https://github.com/pester/pester/
III PowerShell in Depth
“Give a person PowerShell and they will do their job. Teach a person PowerShell and they will
automate their job.” — Michael Zanatta
For most administrators, PowerShell is a simple tool used to perform simple tasks quickly.
However, when used correctly, PowerShell is a powerful automation tool that can solve complex
problems with efficiency. This section will cover advanced PowerShell concepts in code design
and refactoring, progressive conditions, logging, and Infrastructure as Code (IaC). It features
deep-dive topics such as interpolation, data management, bitwise operators, and operator
precedence.
7. Refactoring PowerShell
Refactoring code is an everyday task that improves the code's design and implementation
structure, and it sharpens the coder's skills as new programming concepts are introduced, learned,
and understood. This chapter explores how to refactor your code to make it more readable and
maintainable.
¹Microsoft. (2021, Jun. 10). PowerShell 101: Chapter 4 - One-liners and the pipeline - Filtering Left. Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/scripting/learn/ps101/04-pipelines#filtering-left. [Accessed: May. 25, 2022].
The GSF (group, sort, and filter) approach reduces the data to what's required before it's processed.
In some circumstances, the Where-Object cmdlet increases processing time because it lacks loop
controls and evaluates its condition against every item in the array or list. The Where() method
is a suitable alternative; however, it lacks pipeline support.² Below is an example of some
PowerShell code that can be refactored using the GSF approach.
Example 2: Matching Windows process and service information using a traditional looping approach
1 #Requires -Version 5.1
2 #
3 # In this script, the Process and Windows Service
4 # data is formatted and joined:
5
6 $WindowsServices = Get-CimInstance -ClassName Win32_Service
7 $WindowsProcesses = Get-Process
8 $NewObj = @()
9
10 # Iterate through each of the processes and find Windows processes.
11 foreach ($WindowsProcess in $WindowsProcesses) {
12
13 $matchedService = $WindowsServices | ForEach-Object {
14 if ($WindowsProcess.Id -eq $_.ProcessId) { Write-Output $_ }
15 }
16
17 if ([Array]$matchedService.Count -eq 0) { continue }
18
19 $NewObj += $WindowsProcess | Select-Object *,
20 @{ Name = 'WindowsService'; Expression = { $matchedService } }
21
22 }
23
24 $NewObj | Where-Object { $_.WindowsService -ne $null } |
25 Export-Clixml ProcessesWithServiceInfo.clixml
²Microsoft. (2022, Mar. 17). About Methods (Microsoft.PowerShell.Core) - ForEach and Where methods. Microsoft Docs. [Online].
Available: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_methods#foreach-and-where-
methods. [Accessed: May. 25, 2022].
Example 3: Refactoring the code from example 2 using grouping, sorting, and filtering
1 # Get the Items that are needed.
2 $WindowsServices = Get-CimInstance -ClassName Win32_Service
3
4 # Let's slow down here and explain each pipeline step:
5
6 # Perform the process lookup, but since we have the Windows services
7 # we can 'filter left' and pass the process IDs directly into the cmdlet.
8 # This removes the Where-Object filtering processes for only Windows services.
9 # By filtering left, the number of objects has been reduced,
10 # improving script execution performance.
11 Get-Process -Id $WindowsServices.ProcessId |
12 # We can use select expressions to attach the matched
13 # Windows Service object to the process object
14 Select-Object *, @{
15 Name = "WindowsService"
16 Expression = {
17 # We need to declare a variable here since the pipeline
18 # token is lost when piped into where-object.
19 $processId = $_.Id
20 $WindowsServices | Where-Object { $_.ProcessId -eq $processId }
21 }
22 } |
23 # Finally Export to CLIXML
24 Export-Clixml ProcessesWithServiceInfo.clixml
Seconds : 45
Milliseconds : 232
Seconds : 15
Milliseconds : 243
Notice how the first example is significantly slower than the second (by about 30 seconds). Why?
In the first example, the final filtering is done after all the processing is complete, whereas in the
second example, the 'filter left' technique reduces the data before it's processed.
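The timings above can be reproduced by wrapping each version in Measure-Command; a sketch (the script names are placeholders for the two examples):

Measure-Command { & '.\Example2-NestedLoops.ps1' } |
    Select-Object Seconds, Milliseconds

Measure-Command { & '.\Example3-FilterLeft.ps1' } |
    Select-Object Seconds, Milliseconds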
• $Hashtable.NewKey = 'Value' to define a new key within the hashtable. Previously, the Add()
method was required; today, a key is created automatically simply by assigning to it
(Hashtable.NewKey).
• $Hashtable.Remove('Key') to remove a key from the hashtable.
• $Hashtable.ExistingKey = 'Value' to update the value of an existing key.
# Example 1:
Name Value
---- -----
Name Michael Zanatta
Occupation PowerShell Developer
Age 30
Salary $50000
Using these concepts, PowerShell code can be simplified by applying the logic to a hashtable
of parameters (a splatting table) rather than to the cmdlet call itself. The typical parameters are
defined in the hashtable first and then transformed by the logic. The example below shows some
PowerShell logic written without splatting:
Parameter1 = Value
Parameter2 = Value3
Parameter1 = Value
Parameter3 = Value3
Let’s expand with a detailed example. Below, several conditions are defined that need refactoring:
Parameter4 = Value4
Parameter3 = Value3
Parameter1 = NewValue
• Declare the hashtable with the common parameters that apply to every call of the
cmdlet. In this instance, that's Parameter1, which is present in all the conditions. The else
statement is removed, since Parameter1 is already declared (implicitly) in the hashtable.
• Condition2 changes the value of Parameter1 and adds the keys Parameter3 and Parameter4.
• Condition3 removes Parameter1 and adds Parameter3.
20 # Remove Parameter1
21 $Params.Remove('Parameter1')
22 $Params.Parameter3 = 'Value3'
23 }
24
25 Do-Something @Params
Parameter4 = Value4
Parameter3 = Value3
Parameter1 = NewValue
Much better!
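Pieced together from the fragments above, the whole pattern looks something like this sketch (Do-Something stands in for any cmdlet):

# Common parameter, shared by every condition
$Params = @{
    Parameter1 = 'Value'
}

if ($Condition2) {
    # Change Parameter1 and add Parameter3 and Parameter4
    $Params.Parameter1 = 'NewValue'
    $Params.Parameter3 = 'Value3'
    $Params.Parameter4 = 'Value4'
}
elseif ($Condition3) {
    # Remove Parameter1
    $Params.Remove('Parameter1')
    $Params.Parameter3 = 'Value3'
}

Do-Something @Params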
7.3 Interpolation
Before PowerShell, VBScript was the scripting language for Windows System Administrators.
VBScript lacked modern features, which made it difficult to use. PowerShell changed that: built
on .NET and heavily influenced by C#, it inherited much of that language's structure and syntax.
In PowerShell, string management shifted from concatenation to interpolation. Interpolation is a
technique where values are substituted into a string using different methods. This segment explores
variable substitution and string formatting.
⁵Microsoft. (2022, Mar. 19). About Operators (Microsoft.PowerShell.Core) - Subexpression operator. Microsoft Docs. [On-
line]. Available: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_operators#subexpression-
operator–. [Accessed: May. 25, 2022].
# Example 1:
This is a string!
# Example 2:
This is a string inside a property!
# Example 3:
This is a string!
The drawbacks of this method are increased complexity and limited formatting capabilities.
Three object properties are interpolated into a string in the following example. Note the
complexity of the PowerShell before and after the refactoring using the -join operator.
# Example 1:
Initial string inside a property! another string! And another!
# Example 2:
This is the initial string inside a property! another string! And another!
Variable substitution is best used for simple string interpolation. For more complex substitution,
use the -f (format) operator with composite formatting.
⁶Microsoft. (2022, Mar. 19). About Operators (Microsoft.PowerShell.Core) - Format operator. Microsoft Docs. [Online]. Avail-
able: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_operators#format-operator–f. [Ac-
cessed: May. 25, 2022].
The format operator (-f) is a relatively little-known feature that interpolates and formats strings
using .NET composite formatting.
On the left side, the string is defined with placeholders that contain array indexes to associate
them with values. These are called format items.⁷ Format items are written as curly braces within
the string, containing the composite formatting syntax {index[,alignment][:formatString]}.
Note that the delimiter separating the index and the alignment is a comma (,), and the secondary
delimiter before the format string is a colon (:). On the right side is an array of items of any root
object type (for example, String, Int, or Char).
The format operator performs simple interpolation without the Alignment and FormatString
properties. In the following example, PowerShell performs a basic interpolation by adding
String and ! to the initial string. Note the format items and the index values ({IndexNumber})
on the left side of the string:
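For instance, a format string with two format items can be as simple as this:

"This is a {0}{1}" -f 'string', '!'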
This is a string!
If a format item’s index value on the left side is out of range of the array on the right side, an
error is thrown:
⁷Microsoft. (2021, Nov. 20). Composite formatting. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/dotnet/s-
tandard/base-types/composite-formatting. [Accessed: May. 25, 2022].
It’s possible to have additional array items on the right side, but unless explicitly defined within
the format item, it won’t be interpolated into the string:
This is a string!
Format items on the left side don’t need to be in order in the string. However, it’s recommended
to order the left side for readability and maintainability:
Example 14: The position of format items in the composite format string doesn’t matter
# Note that 1 and 0 are swapped around.
"This is a {1}{0}" -f 'string', '!'
This is a !string
Format item indexes can be used multiple times within the left side string:
• In Example 1, 'value' is aligned to the right side of the 10-character field. The placeholder
value (the string 'value') is included within the alignment padding (a minimal alignment sketch follows this list).
• Example 2 is the same as Example 1; the string is wrapped with quotes to show the start
and end of the string.
• In Example 3, ‘value’ is aligned to the left side of the 10 character string. Again, note that
the placeholder value is included within the alignment padding.
• In Example 4, quotes are wrapped around the string from Example 3 to show the start and
end of the string.
• In Example 5, note the difference between the left-hand and right-hand formatting when
used with multiple format items. Notice how the padding is relative to its own format item.
• In Example 6, note how the padding is again relative to its own format item, making it
impossible to overlap text with each other.
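As mentioned above, a minimal alignment sketch (a positive alignment right-aligns, a negative alignment left-aligns; the outer quotes only make the padding visible):

"'{0,10}'" -f 'value'     # right-aligned inside a 10-character field
"'{0,-10}'" -f 'value'    # left-aligned inside a 10-character field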
In the example above, the placeholder string length was smaller than the alignment value, so
what happens when it’s larger? The padding is ignored when the placeholder value string length
is equal to or larger than the alignment value.
Example 17: Placeholder values aren’t truncated when they’re longer than the alignment value
1 #
2 # Example 1
3 # The string length 'value' is larger then three characters when
4 # using the left-hand alignment.
5 $String = "'{0,3}'" -f 'value'
6 $String
7 #
8 # Example 2
9 # Let's swap sides and now try the right-hand
10 # alignment.
11 $String = "'{0,-3}'" -f 'value'
12 $String
# Example 1
'value'
# Example 2
'value'
The format string component defines what composite formatting type is to be performed on the
format item. The following table lists some custom formatting types (with examples):
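A few commonly used format strings, as a quick sketch (exact output depends on the current culture):

'{0:N2}' -f 1234.5678                              # number with two decimals, e.g. 1,234.57
'{0:C}'  -f 42                                     # currency, e.g. $42.00
'{0:P0}' -f 0.42                                   # percentage, e.g. 42%
'{0:X}'  -f 255                                    # hexadecimal: FF
'{0:yyyy-MM-dd}' -f (Get-Date -Date '2022-02-20')  # custom date format: 2022-02-20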
You can find out more about composite formatting at Microsoft Docs⁸.
When formatting dates, multiple date formats can be grouped into the same format string:
⁸https://learn.microsoft.com/en-us/dotnet/standard/base-types/composite-formatting
SundayFebruary2022
This can be expanded further by adding literal text within the format string component. In this
example, spaces are added between the format specifiers:
Example 19: Spaces are treated literally inside format string components
'{0:dddd d MMMM yyyy}' -f (Get-Date)
Literal text can be embedded within the format string component by wrapping it in quotation
marks. Characters inside the quoted section are treated literally rather than as format specifiers.
If the quotation mark matches the one used for the surrounding PowerShell string, escape it (for
example, '' inside a single-quoted string or "" inside a double-quoted string). In the example below,
'Day:', 'Month:', and 'Year:' are added to the format string; because they're wrapped in quotation
marks, they're emitted as-is:
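A sketch of one such format string (again with a fixed example date so the output is reproducible):

'{0:"Day:" dddd d", Month:" MMMM", Year:" yyyy}' -f (Get-Date -Date '2022-02-20')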
# Example 1:
Day: Sunday 20, Month: February, Year: 2022
# Example 2:
Day: Sunday 20, Month: February, Year: 2022
# Example 3:
Day: Sunday 20, Month: February, Year: 2022
# Example 4:
Day: Sunday 20, Month: February, Year: 2022
#
# Example 2: Formatting a number
(0.42).ToString('p0')
# Example 1:
Sunday 20 February 2022
# Example 2:
42%
If two format specifiers would be ambiguous when placed next to each other, separate them with
an empty literal span:
# Example 1:
Sunday
# Example 2:
Sunday20
The format operator's advantages extend beyond interpolation to full string formatting, which can
simplify scripts. One drawback is that both variable substitution and the format operator work best
with static template strings, which makes it harder to construct and interpolate many strings dynamically.
While the code is relatively succinct, it’s not maintainable or testable. The function is performing
several tasks, making it difficult to test. Process-File can be refactored further, splitting out the
file extraction, the validation, and the file transformation, wrapped with an additional function
to join them together:
Example 24: Refactoring the function from the previous example yields discrete functions for each task
1 #
2 # The First Function to Expand the File
3 function Extract-File {
4 param ($Filename)
5
6 $File = Get-Item -LiteralPath $Filename
7 $ExpandPath = Join-Path ([System.IO.Path]::GetTempPath()) (New-Guid)
8 $File | Expand-Archive -DestinationPath $ExpandPath
9
10 return $ExpandPath
11 }
12
13 #
14 # The Second Function to Test Exported Files
15 function Test-ExportedFiles {
16 param($LiteralPath, $VerifyHashes)
17
18 $GetChildItemParams = @{
19 LiteralPath = $LiteralPath
20 File = $true
21 Recurse = $true
22 }
23
24 $CompareObjectParams = @{
25 ReferenceObject = $(Get-ChildItem @GetChildItemParams |
26 Get-FileHash).Hash
27 DifferenceObject = $VerifyHashes
28 }
29
30 $Difference = Compare-Object @CompareObjectParams
31
32 if ($Difference.InputObject.Count -ne 0) { return $false }
33
34 $true
35
36 }
37
38 #
39 # The third to Append the string to the end of the file.
40 function Add-StringToEndOfFile {
41 param($FilePath, $Value)
42
43 Get-ChildItem $FilePath -File -Recurse | Add-Content -Value $Value
44
45 }
46
47 #
48 # The Final to join them together.
49 function Process-File {
50 param($Filename, $VerifyHashes)
51
52 # Extract the file to a temporary folder
53 $ExpandPath = Extract-File -Filename $Filename
54
55 # Test the files
56 $Params = @{
57 LiteralPath = $ExpandPath
58 VerifyHash = $VerifyHashes
59 }
59 }
60
61 if (-not (Test-ExportedFiles @Params)) {
62
63 $MailMessageParams = @{
64 To = 'Helpdesk'
65 From = 'NoReply'
66 Subject = 'File Transfer Failed'
67 }
68
69 Send-MailMessage @MailMessageParams
70 throw "Issue with the hash"
71 }
72
73 #
74 # Add Test to the end of every file.
75 Add-StringToEndOfFile -FilePath $ExpandPath -Value 'Test'
76
77 return $ExpandPath
78 }
To make the code more maintainable and testable, constrain the parameters to explicit types so
that only the intended kinds of input are accepted. In the following example, the parameters
$FileName and $ExtractPath are protected by adding the [String] type:
¹⁰Microsoft. (2021, Jul. 30). PowerShell Language Specification: Chapter 6 - Conversions. Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/scripting/lang-spec/chapter-06. [Accessed: May. 26, 2022].
¹¹PowerShell Team. (2007, Oct. 29). Dynamic Casting. Microsoft Dev Blogs. [Online]. Available: https://devblogs.microsoft.com/
powershell/dynamic-casting/. [Accessed: May. 26, 2022].
Example 26: The Extract-File function, with typecast parameters
1 function Extract-File {
2 param ([String]$FileName, [String]$ExtractPath)
3
4 $File = Get-Item -LiteralPath $FileName
5 $File | Expand-Archive -DestinationPath $ExtractPath
6
7 }
¹²Microsoft. (2021, Jun. 10). About Functions Advanced (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://
learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_functions_advanced. [Accessed: May. 26, 2022].
# Example 1:
P1
# Example 2:
P1
# Example 3:
P2
# Example 4:
P1
# Example 5:
Invoke-Something: Parameter set cannot be resolved using the specified
named parameters. One or more parameters issued cannot be used together
or an insufficient number of parameters were provided.
• In Example 1, the test shows that the default parameter set is ‘P1’ as parameters weren’t
included.
• In Example 2, the test shows the parameter set ‘P1’ is applied when -Parameter1 is
specified.
• In Example 3, the test shows the parameter set ‘P2’ is applied when -Parameter2 is
specified.
• In Example 4, -Parameter3 is passed; it's present in both parameter sets 'P1' and 'P2'.
Since no other parameter from either set is supplied, PowerShell falls back to the
DefaultParameterSetName in the CmdletBinding attribute, which is 'P1'.
• In Example 5, parameters -Parameter1 and -Parameter2 (from different parameter sets)
are passed to the function, raising an error. The error demonstrates that parameters
from different parameter sets can't be mixed (a sketch of such a function follows below).
Adding mandatory parameters ensures that required inputs are supplied, simplifying the internal
function code. An important thing to remember when using parameter sets is to set the
DefaultParameterSetName argument within the CmdletBinding attribute.
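A sketch of a function along those lines, consistent with the behaviour described above:

function Invoke-Something {
    [CmdletBinding(DefaultParameterSetName = 'P1')]
    param(
        [Parameter(ParameterSetName = 'P1')]
        $Parameter1,

        [Parameter(ParameterSetName = 'P2')]
        $Parameter2,

        [Parameter(ParameterSetName = 'P1')]
        [Parameter(ParameterSetName = 'P2')]
        $Parameter3
    )
    # Output which parameter set PowerShell resolved
    $PSCmdlet.ParameterSetName
}

Invoke-Something                   # P1 (the default set)
Invoke-Something -Parameter1 'a'   # P1
Invoke-Something -Parameter2 'b'   # P2
Invoke-Something -Parameter3 'c'   # P1 (falls back to the default set)
# Invoke-Something -Parameter1 'a' -Parameter2 'b'   # error: parameter set cannot be resolved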
When the Mandatory attribute is set, and no parameter input is found, PowerShell prompts the
user to supply values for the parameter, as seen in the example below:
In the following example, the Mandatory attribute is used to require input for the parameter.
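A sketch of such a function, with the two calls that produce the output below:

function Invoke-Something {
    param(
        [Parameter(Mandatory)]
        [String]$Parameter1
    )
    'Complete!'
}

# Example 1: no argument is supplied, so PowerShell prompts for Parameter1;
# pressing Enter at the prompt leaves it empty and binding fails.
Invoke-Something

# Example 2: the mandatory value is supplied directly.
Invoke-Something -Parameter1 'Value'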
# Example 1:
Supply values for the following parameters:
Parameter1:
Invoke-Something: Cannot bind argument to parameter 'Parameter1'
because it is an empty string.
# Example 2:
Complete!
Notice the use of the Mandatory attribute in the example above. The error in Example 1 occurs
when the input prompt for a mandatory parameter is left empty.
Another method for simplifying the input parameters is using the Validate* Attributes to test the
input itself. This chapter focuses on the following attributes:
• ValidateSet
• ValidatePattern
• ValidateLength
• ValidateCount
• ValidateNotNull
• ValidateNotNullOrEmpty
• ValidateScript
A notable feature of the Validate* attributes is that they stack on each parameter value, further
narrowing the input requirements. In the following example, ValidateScript and ValidateNotNul-
lOrEmpty are added. Pay attention to the script block within the ValidateScript Attribute:
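The sketch below stacks both attributes on a single parameter; an empty value trips ValidateNotNullOrEmpty, while a syntactically invalid path trips the ValidateScript check.

function Invoke-Something {
    param(
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [ValidateScript({
            # Note! There's PowerShell Code in here!
            Test-Path -Path $_ -IsValid
        })]
        [string]$Path
    )
    $Path
}

Invoke-Something -Path $PSHOME        # passes both validations
Invoke-Something -Path '\:BadPath?'   # fails the ValidateScript validation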
# Example 1:
Invoke-Something : Cannot validate argument on parameter 'Path'.
The argument is null or empty. Provide an argument that is not null or empty,
and then try the command again.
# Example 2:
C:\Program Files\PowerShell\7
# Example 3:
Invoke-Something: Cannot validate argument on parameter 'Path'. The "
# Note! There's PowerShell Code in here!
Test-Path -Path $_ -IsValid
" validation script for the argument with value "\:BadPath?" did not
return a result of True. Determine why the validation script failed,
and then try the command again.
Note from the previous example the different errors that were raised when the string validation
failed. In the following example, the same attribute ValidateScript is used. However, note the
order of execution:
Example 32: The execution order of multiple ValidateScript attributes is last to first
1 #
2 # Example Function
3 function Invoke-Something {
4 param(
5 # First Script Block
6 [ValidateScript({
7 Write-Host 'First'
8 $_ -ne 'D:\Temp'
9 })]
10 # Second Script Block
11 [ValidateScript({
12 Write-Host 'Second'
13 $_ -ne 'Z:\Temp'
14 })]
15 # Third Script Block
16 [ValidateScript({
17 Write-Host 'Third'
18 Test-Path -Path $_ -IsValid
19 })]
20 [string]$Path
21 )
22 $Path
23 }
24 #
25 # Parse a valid path
26 Invoke-Something -Path $PSHOME
Third
Second
First
C:\Program Files\PowerShell\7
Note how the execution order is from the bottom-up, not top-down.
Other validation attributes provide different ways to simplify your logic. The first is ValidateSet, which restricts input to a fixed set of values.
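A sketch of a ValidateSet parameter, consistent with the output shown below:

function Invoke-Something {
    param(
        [Parameter(Mandatory)]
        [ValidateSet('Value1', 'Value2')]
        [String]$Parameter
    )
    $Parameter
}

# Example 1: a value from the set
Invoke-Something -Parameter 'Value1'

# Example 2: a value outside the set raises an error
Invoke-Something -Parameter 'Unknown'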
# Example 1:
Value1
# Example 2:
Invoke-Something: Cannot validate argument on parameter 'Parameter'.
The argument "Unknown" does not belong to the set "Value1,Value2"
specified by the ValidateSet attribute. Supply an argument that is
in the set and then try the command again.
Example 34: Limiting input to strings that match a regex pattern with the ValidatePattern attribute
1 #
2 # ValidatePattern Function
3 function Invoke-Something {
4 param(
5 # Add the first attribute
6 [Parameter(Mandatory)]
7 [ValidatePattern('^(Value)[0-9]$')]
8 [String]$Parameter
9 )
10 $Parameter
11 }
12
13 #
14 # Example 1: ValidatePattern
15 Invoke-Something -Parameter 'Value9'
16
17 #
18 # Example 2: ValidatePattern Bad Input
19 Invoke-Something -Parameter 'SomeOtherValue'
# Example 1:
Value9
# Example 2:
Invoke-Something: Cannot validate argument on parameter 'Parameter'.
The argument "SomeOtherValue" does not match the "^(Value)[0-9]$" pattern.
Supply an argument that matches "^(Value)[0-9]$" and try the command again.
¹³Microsoft. (2022, Mar. 18). About Regular Expressions (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_regular_expressions. [Accessed: May. 27,
2022].
Example 35: Controlling input string length with the ValidateLength attribute
1 #
2 # ValidateLength Function
3 function Invoke-Something {
4 param(
5 # Add the first attribute
6 [Parameter(Mandatory)]
7 # Minimum Length is 5
8 # Maximum Length is 7
9 [ValidateLength(5, 7)]
10 [String]$Parameter
11 )
12 $Parameter
13 }
14
15 #
16 # Example 1: ValidateLength
17 Invoke-Something -Parameter 'Value'
18
19 #
20 # Example 2: ValidateLength. Too Short
21 Invoke-Something -Parameter 'Two'
22
23 #
24 # Example 3: ValidateLength. Too Long
25 Invoke-Something -Parameter 'StringisTooLong'
# Example 1:
Value
# Example 2:
Invoke-Something: Cannot validate argument on parameter 'Parameter'.
The character length (3) of the argument is too short.
Specify an argument with a length that is greater than or equal to "5",
and then try the command again.
# Example 3:
Invoke-Something: Cannot validate argument on parameter 'Parameter'.
The character length of the 15 argument is too long.
Shorten the character length of the argument so it is fewer than or
equal to "7" characters, and then try the command again.
Example 36: Limiting the input array size with the ValidateCount attribute
1 #
2 # ValidateCount Function
3 function Invoke-Something {
4 param(
5 # Add the first attribute
6 [Parameter(Mandatory)]
7 # Minimum count is 5
8 # Maximum count is 7
9 [ValidateCount(5, 7)]
10 # Note the array type of the parameter
11 [String[]]$Parameter
12 )
13 $Parameter -join ' '
14 }
15
16 #
17 # Example 1: Count
18 Invoke-Something -Parameter 1, 2, 3, 4, 5
19
20 #
21 # Example 2: ValidateCount. Too few values
22 Invoke-Something -Parameter 1
23
24 #
25 # Example 3: ValidateCount. Too many values
26 Invoke-Something -Parameter 1, 2, 3, 4, 5, 6, 7, 8
# Example 1:
1 2 3 4 5
# Example 2:
Invoke-Something: Cannot validate argument on parameter 'Parameter'.
The parameter requires at least 5 value(s) and no more than 7 value(s) -
1 value(s) were provided.
# Example 3:
Invoke-Something: Cannot validate argument on parameter 'Parameter'.
The parameter requires at least 5 value(s) and no more than 7 value(s) -
8 value(s) were provided.
• [ValidateScript({ #ScriptBlock })]. Use a PowerShell script block to test the input:
The validation process is similar to the script block you use with Where-Object, where
the $_ token denotes the current parameter, and the script block needs to return a Boolean
value. Other parameter values aren’t accessible inside the script block. It’s tempting to write
comprehensive code but keep it as simple as possible since it can be challenging to test:
Example 37: Validating an input using a script block with the ValidateScript attribute
1 #
2 # ValidateScript Function
3
4 function Invoke-Something {
5 param(
6 # Add the first attribute
7 [Parameter(Mandatory)]
8 # In this example, we test the input
9 # for a valid filepath
10 [ValidateScript({
11 # Access the current parameter by
12 # using the $_ token:
13 Test-Path -LiteralPath $_
14 })]
15 [String]$Parameter
16 )
17 $Parameter
18 }
19
20 #
21 # Example 1: Using a known file path.
22 Invoke-Something -Parameter "$PSHOME"
23
24 #
25 # Example 2: Using an unknown file path.
26 Invoke-Something -Parameter 'D:\Bad\FilePath'
#
# Example 1:
C:\Program Files\PowerShell\7
#
# Example 2:
Invoke-Something: Cannot validate argument on parameter 'Parameter'. The "
# Access the current parameter by
# using the $_ token:
Test-Path -LiteralPath $_
" validation script for the argument with value "D:\Bad\FilePath"
did not return a result of True. Determine why the validation script
failed, and then try the command again.
Example 38: This function shouldn’t output file content since it’s named with the ‘Test’ verb
1 function Test-FileContents {
2 param([String]$LiteralPath)
3
4 $Content = Get-Content -LiteralPath $LiteralPath -Raw
5 if ($Content.Length -eq 0) { return $false }
6 $Content
7 }
A function with the 'Test' verb shouldn't return the contents of the file; it should return a Boolean.
The code is refactored to return a Boolean result in the following example:
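One possible refactoring, returning only a Boolean:

function Test-FileContents {
    param([String]$LiteralPath)

    $Content = Get-Content -LiteralPath $LiteralPath -Raw
    return ($Content.Length -gt 0)
}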
If you’re returning the file content, the ‘Get’ verb is more appropriate.
¹⁴Microsoft. (2022, Jan. 02). Approved Verbs for PowerShell Commands. Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/powershell/scripting/developer/cmdlet/approved-verbs-for-windows-powershell-commands. [Accessed: May. 26,
2022].
¹⁵https://learn.microsoft.com/en-us/powershell/scripting/developer/cmdlet/approved-verbs-for-windows-powershell-commands
Example 40: A function that outputs different data types based on file contents
1 function Import-FileasXML {
2 param([String]$LiteralPath)
3
4 $Content = Get-Content -LiteralPath $LiteralPath
5
6 try {
7 $Content = [XML]$Content
8 } catch {}
9
10 $Content
11 }
This function can be refactored by splitting out the object functionality into two separate
properties (Content and XMLContent):
Example 41: The refactored function always returns a custom object with two properties
1 function Import-FileasXML {
2 param([String]$LiteralPath)
3
4 $Content = Get-Content -LiteralPath $LiteralPath
5
6 [PSCustomObject]@{
7 Content = $Content
8 XMLContent = try { [XML]$Content } catch { $null }
9 }
10 }
The previous example shows that the file content was loaded initially and then added to
a [PSCustomObject] with two properties: Content and XMLContent. Within the property
XMLContent, the file’s contents are tentatively parsed as an XML file. If successful, the XML
object is added to the XMLContent property; otherwise, $null is returned.
if ($Condition1) {
if ($Condition2) {
if ($Condition3) {
# Do Something
}
}
}
Suppose the conditions are singular, meaning that there aren't other actions or dependencies on
each condition. In that case, the conditions can be grouped by using parentheses and the
-and operator.
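Grouped, the nested if statements collapse to a single test:

if ($Condition1 -and $Condition2 -and $Condition3) {
    # Do Something
}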
If the condition isn’t singular, this makes it a lot trickier; however, it can be refactored with the
use of the elseif statement. The example below demonstrates the use case where each condition
isn’t singular and contains an alternate execution path after each condition:
Example 43: The same nested logic as above, using a more linear structure
1 if (($Condition1) -and ($Condition2) -and ($Condition3)) {
2 # Do Something
3 }
4 elseif (-not ($Condition1)) {
5 # [First Condition] Do Something Else
6 }
7 elseif (($Condition1) -and (-not ($Condition2))) {
8 # [Second Condition] Do Something Else
9 }
10 elseif (($Condition1) -and ($Condition2) -and (-not ($Condition3))) {
11 # [Third Condition] Do Something Else
12 }
The nested code can be flattened by grouping the conditions, but notice (from the previous
example) that it’s not readable or maintainable. So how can this be taken further? First, questions
need to be asked about the design of the script:
In this example, the code has been optimized, and the conditions depend on each other. This
presents two options:
In the example, all redundant condition checks are removed. In this case, the repeated checks of
$Condition1 and $Condition2 are removed. Why? Within an elseif chain, every condition that
an earlier branch already ruled out is implicitly known in the later branches. The first condition
tests for overall success; if it fails, at least one of $Condition1, $Condition2, or $Condition3 is
$false. The subsequent elseif conditions are then ordered to match the nesting in the original
example ($Condition1 → $Condition2 → $Condition3).
...
}
elseif (-not ($Condition1)) {
# [First Condition] Do Something Else
}
elseif (($Condition1) -and (-not ($Condition2))) {
# [Second Condition] Do Something Else
}
In the first elseif statement, the code tests whether $Condition1 is $false. In the second, it tests
whether $Condition1 is $true and $Condition2 is $false. However, an assumption can be made:
if the first elseif branch isn't taken, $Condition1 must be $true for every subsequent branch.
This removes the requirement to test $Condition1 again in the later branches.
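Applying that reasoning, the chain can collapse to something like this sketch:

if ($Condition1 -and $Condition2 -and $Condition3) {
    # Do Something
}
elseif (-not $Condition1) {
    # [First Condition] Do Something Else
}
elseif (-not $Condition2) {
    # [Second Condition] Do Something Else ($Condition1 is implicitly $true here)
}
elseif (-not $Condition3) {
    # [Third Condition] Do Something Else ($Condition1 and $Condition2 are implicitly $true here)
}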
There is a risk here that the outcomes are affected since they depend on the ordering within
the elseif statement. To mitigate this, write unit tests to test for variations and document the
ordering within the code.
PowerShell loop statements can become unwieldy when multiple statements are nested, increasing
the complexity of the code and of the associated unit tests. Loop statements shouldn't be nested
more than three levels deep without clear loop controls (continue and break).¹⁶ ¹⁷ ¹⁸
Each additional nested loop multiplies the number of iterations.
Consider the following example:
Example 45: Nested looping constructs increase the number of operations exponentially
1 $Counter = 0
2 for ($i = 0; $i -ne 10; $i++) {
3 for ($x = 0; $x -ne 10; $x++) {
4 $Counter++
5 Write-Host "$Counter"
6 }
7 }
¹⁶Microsoft. (2014, May. 08). PowerShell Looping: Advanced Break. Microsoft Dev Blogs. [Online]. Available: https://devblogs
.microsoft.com/scripting/powershell-looping-advanced-break/. [Accessed: May. 26, 2022].
¹⁷Microsoft. (2022, Mar. 03). About Break (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/about/about_break. [Accessed: May. 26, 2022].
¹⁸Microsoft. (2022, Mar. 19). About Continue (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/about/about_continue. [Accessed: May. 26, 2022].
1
2
3
...
98
99
100
The previous example contains two nested loops that each count to 10, for 100 iterations. If an
additional nested loop is added, the total is now 10 x 10 x 10 = 1,000:
Example 46: Nested looping constructs increase the number of operations exponentially
1 $Counter = 0
2 for ($i = 0; $i -ne 10; $i++) {
3 for ($x = 0; $x -ne 10; $x++) {
4 for ($z = 0; $z -ne 10; $z++) {
5 $Counter++
6 Write-Host "$Counter"
7 }
8 }
9 }
1
2
3
...
998
999
1000
Sometimes nested loop statements are unavoidable, a necessary evil; however, they can still be
refactored. Apply the following rules:
1. Cmdlet parameters. Check the cmdlet parameter's object type to see whether it accepts an
array being passed. For example, the following loop statement runs Invoke-Command individually
against a number of remote computers:
Example 47: Running Invoke-Command many times inside a loop is inefficient
1 $Computers = Get-ADComputer -Filter *
2 foreach ($Computer in $Computers) {
3 $params = @{
4 ComputerName = $Computer
5 ScriptBlock = { Write-Host "Hello World" }
6 }
7 Invoke-Command @params
8 }
This code can be refactored since the -ComputerName parameter accepts arrays and the
-AsJob parameter is also available. The -AsJob parameter (in this use case) defers the
execution for each computer into PowerShell jobs.¹⁹ This simplifies the code and improves
performance at the same time: you can pass the entire array, removing the loop statement,
while deferring the execution for each computer into a PowerShell job:
¹⁹Microsoft. (2022, Mar. 18). About Jobs (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/about/about_jobs. [Accessed: May. 27, 2022].
Example 48: Running Invoke-Command once with an array of computers is more efficient
1 $Computers = Get-ADComputer -Filter *
2 $Params = @{
3 ComputerName = $Computers
4 ScriptBlock = {
5 Write-Host 'Hello-World'
6 }
7 AsJob = $true
8 }
9 Invoke-Command @Params | Wait-Job
From the example above, you can use Wait-Job to pause execution until all the PowerShell
jobs have been completed.
2. Use Where-Object to filter content. Where-Object can perform the final condition,
removing the need for a nested statement. In the following example, a list of users is
returned if the distinguished name of any group they're a member of contains
'test':
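The original listing appears to be missing here. A minimal sketch of the kind of nested loop being
described (assuming the ActiveDirectory module and an illustrative filter) might look like this:

$Users = Get-ADUser -Filter * -Properties MemberOf
$Results = foreach ($User in $Users) {
    foreach ($Group in $User.MemberOf) {
        # MemberOf holds the distinguished names of the user's groups
        if ($Group -like '*test*') {
            $User
            break
        }
    }
}
$Results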
This can be refactored by reducing the number of nested statements with the use of Where-
Object:
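The refactored listing also appears to be missing; a sketch of the Where-Object approach, under
the same assumptions, is:

$Users = Get-ADUser -Filter * -Properties MemberOf
# -like against the MemberOf collection returns the matching distinguished names;
# a non-empty result is treated as $true by Where-Object
$Results = $Users | Where-Object { $_.MemberOf -like '*test*' }
$Results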
Notice how the nested loop is removed? It’s not, really. Where-Object has just simplified
it.
3. Consider the use case for the nested loop. Either remove the loop or abstract it away
into a cmdlet. Nested statements can be filtered using Where-Object. However, when a
left-hand and right-hand comparison is needed, simplify the lists into arrays and use the
Compare-Object cmdlet. For example, consider the following code comparing
two byte arrays:
Example 51: Manually comparing two byte arrays from both directions
1 # Strings
2 $ByteArray1 = [System.Text.Encoding]::UTF8.GetBytes('Hello World1')
3 $ByteArray2 = [System.Text.Encoding]::UTF8.GetBytes('Hello World2')
4 # Note that $ByteArray1 and $ByteArray2 are arrays.
5 # See: Output1 in 'Output'
6 #
7 # Output 1:
8 $ByteArray1 -join ' '
9 $ByteArray2 -join ' '
10 # While we can compare strings, for this demo
11 # we're assuming we're comparing byte strings.
12
13 # Compare the left
14 $LeftDifference = @()
15 for ($i = 0; $i -ne $ByteArray1.Count; $i++) {
16 if ($ByteArray1[$i] -ne $ByteArray2[$i]) {
17 $LeftDifference += [pscustomobject]@{
18 Index = $i
19 Pos = 'left'
20 ValueLeft = $ByteArray1[$i]
21 ValueRight = $ByteArray2[$i]
22 }
23 }
24 }
25
26 # Compare the right
27 $RightDifference = @()
28 for ($i = 0; $i -ne $ByteArray2.Count; $i++) {
29 if ($ByteArray1[$i] -ne $ByteArray2[$i]) {
30 $RightDifference += [pscustomobject]@{
31 Index = $i
32 Pos = 'right'
33 ValueLeft = $ByteArray1[$i]
34 ValueRight = $ByteArray2[$i]
35 }
36 }
37 }
38
39 #
40 # Output 2:
41 $LeftDifference
42 $RightDifference
#
# Output 1
72 101 108 108 111 32 87 111 114 108 100 49
72 101 108 108 111 32 87 111 114 108 100 50
#
# Output 2
Index Pos ValueLeft ValueRight
----- --- --------- ----------
11 left 49 50
11 right 49 50
InputObject SideIndicator
----------- -------------
50 =>
49 <=
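The refactored listing for this comparison appears to be missing. A minimal sketch using
Compare-Object, which produces the SideIndicator output shown above, is:

# Compare both byte arrays in a single call; the loops are no longer needed
Compare-Object -ReferenceObject $ByteArray1 -DifferenceObject $ByteArray2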
24 }
25 # Send an Email Confirming the Changes
26 $Params = @{
27 To = '[email protected]'
28 From = '[email protected]'
29 Subject = 'Report'
30 }
31 Send-MailMessage @Params
Below are guidelines describing the order of grouping that’s used to refactor the code to be more
readable and maintainable:
1. Group top-level code together. Top-level code means top-level script items such as parame-
ters and global variables, classes, functions, pre/post execution, and the main code segment.
Group these items in the order just listed.
2. Group related code by inserting blank (empty) lines. Adding blank lines to separate code
provides ‘physical’ separation of code, similar to how paragraphs group common points or
ideas.
3. Group common actionable items together. Common actionable items are things that
share the same objective or have the same or similar logic. It's essential when grouping not
to break the logic flow. This approach works well with the Grouping, Sorting, and Filtering
approach mentioned in the Expanding on the Pipeline section.
By applying these guidelines, the code becomes more readable and maintainable.
#
# Header 1:
#
#
# Header 2:
# Comment:
The heading structure follows the same waterfall heading design as Microsoft Word. Three hash
lines (a lone hash line above and below the header text) denote top-level headers. Secondary
headers are denoted by two hash lines, with the lone hash line above the header text, and finally,
the text body is a standard comment.
Consider the following PowerShell code:
Example 55: Code comments with a single heading level can be confusing as the importance of each comment is
unknown
1 # Random Function
2 function Get-Something {
3 Get-Random
4 }
5
6 # Importing CSV
7 $CSVContent = Import-Csv -LiteralPath 'Example.csv'
8 $CSVContent2 = Import-Csv -LiteralPath 'Example2.csv'
9
10 # Creating temporary directories
11 $Temp = New-Item -ItemType Directory -Name (New-Guid)
12 $Temp2 = New-Item -ItemType Directory -Name (New-Guid)
13
14 # Filtering CSV with another Object
15 $Filtered = $CSVContent | Where-Object { $_.Destination -eq 'Temp' }
16 $Filtered2 = $CSVContent2 | Where-Object { $_.Destination -eq 'Temp' }
17
18 # Process some changes
19 $Filtered | ForEach-Object {
20 Copy-Item -Path ($_.Source) -Destination $Temp
21 }
22 # Process some other changes
23 $Filtered2 | ForEach-Object {
24 Copy-Item -Path ($_.Source) -Destination $Temp2
25 }
26
27 # Send an Email Confirming the Changes
28 $Params = @{
29 To = '[email protected]'
30 From = '[email protected]'
31 Subject = 'Report'
32 }
33 Send-MailMessage @Params
The PowerShell code is well-documented and grouped; however, the relative importance of each
comment is unclear. You can segment the groupings by adding multiple heading levels to the code:
Example 56: Using comments with three heading levels clarifies the priority of each comment
1 #
2 # Functions
3 #
4
5 # Random Function
6 function Get-Something {
7 Get-Random
8 }
9
10 #
11 # Main: Process CSV Files and Send an Email
12 #
13
14 #
15 # Importing CSV
16 $CSVContent = Import-Csv -LiteralPath 'Example.csv'
17 $CSVContent2 = Import-Csv -LiteralPath 'Example2.csv'
18
19 #
20 # Creating temporary directories
21 $Temp = New-Item -ItemType Directory -Name (New-Guid)
22 $Temp2 = New-Item -ItemType Directory -Name (New-Guid)
23
24 #
25 # Filtering CSV with another Object
26 $Filtered = $CSVContent | Where-Object { $_.Destination -eq 'Temp' }
27 $Filtered2 = $CSVContent2 | Where-Object { $_.Destination -eq 'Temp' }
28
29 #
30 # Process Changes:
31
32 # Process some changes
33 $Filtered | ForEach-Object {
34 Copy-Item -Path ($_.Source) -Destination $Temp
35 }
36 # Process some other changes
37 $Filtered2 | ForEach-Object {
38 Copy-Item -Path ($_.Source) -Destination $Temp2
39 }
40
41 #
42 # Send an Email Confirming the Changes
43 $Params = @{
44 To = '[email protected]'
45 From = '[email protected]'
46 Subject = 'Report'
47 }
48 Send-MailMessage @Params
Notice how much easier the code is to follow with headers describing the actions. Headers also
apply to multiline comments, where the body text doesn't require a hash.
It's essential to provide end-user documentation for a script and/or public function using
PowerShell's comment-based help. Within complex scripts, it's best to have this documentation
at the top of the script or function, since it also assists code review and code readability.
More information on PowerShell’s comment-based help and Get-Help can be found at Microsoft
Docs²⁰.
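As a minimal sketch (the synopsis and description text are illustrative), comment-based help
placed at the top of a function looks like this:

function Get-Something {
    <#
    .SYNOPSIS
        Returns a random number.
    .DESCRIPTION
        A placeholder function used to demonstrate comment-based help.
    .EXAMPLE
        Get-Something
    #>
    Get-Random
}

# Get-Help reads the comment-based help block
Get-Help -Name Get-Something -Full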
#region <name>
Get-Random
#endregion <name>
Code regions (region folding) are groupings of code that can be folded or collapsed within an IDE
editor.²¹ They provide a means of folding/collapsing groupings of code, aiding the readability
and maintainability of the code. Regions are denoted by the #region comment, some code, and the
#endregion comment; the name after #endregion is optional. Regions can also be nested, enabling
finer control over groupings.
²⁰https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comment_based_help
²¹Microsoft. (2014, Nov. 12). Use Regions for PowerShell Comments. Microsoft Dev Blogs. [Online]. Available: https://devblogs
.microsoft.com/scripting/use-regions-for-powershell-comments/. [Accessed: May. 26, 2022].
It’s always recommended to be explicit and define region names to prevent possible IDE
issues.
Example 59: Regions work alongside comment heading levels and code groupings to aid reading
1 #
2 # Functions
3 #
4 #region Functions
5
6 # Random Function
7 #region Function-Something
8 function Get-Something {
9 Get-Random
10 }
11 #endregion Function-Something
12
13 #endregion Functions
14 #
15 # Main: Process CSV Files and Send an Email
16 #
17 #region Main
18
19 #
20 # Importing CSV
21 $CSVContent = Import-Csv -LiteralPath 'Example.csv'
22 $CSVContent2 = Import-Csv -LiteralPath 'Example2.csv'
23
24 #
25 # Creating temporary directories
26 $Temp = New-Item -ItemType Directory -Name (New-Guid)
27 $Temp2 = New-Item -ItemType Directory -Name (New-Guid)
28
29 #
30 # Filtering CSV with another Object
31 $Filtered = $CSVContent | Where-Object { $_.Destination -eq 'Temp' }
32 $Filtered2 = $CSVContent2 | Where-Object { $_.Destination -eq 'Temp' }
33
34 #
35 # Process Changes:
36
37 # Process some changes
38 $Filtered | ForEach-Object {
39 Copy-Item -Path ($_.Source) -Destination $Temp
40 }
41 # Process some other changes
42 $Filtered2 | ForEach-Object {
43 Copy-Item -Path ($_.Source) -Destination $Temp2
44 }
45
46 #
47 # Send an Email Confirming the Changes
48 $Params = @{
49 To = '[email protected]'
50 From = '[email protected]'
51 Subject = 'Report'
52 }
53 Send-MailMessage @Params
54 #endregion Main
Within a compatible IDE, these regions can be collapsed, hiding the blocks of code they contain:
Example 60: The code from Example 59 with collapsed regions as seen in an IDE
#
# Functions
#
#region Functions ···
#endregion Functions
#
# Main: Process CSV Files and Send an Email
#
#region Main ···
#endregion Main
While regions are handy, there can be too much of a good thing. As a rule of thumb, use regions
to group ‘top-level’ items, such as functions, classes and large code-block groupings. Low-level
regions don’t add anything and can add complexity to the code and comment structure.
Example 61: Code without a clear execution and logic flow is difficult to maintain
1 # Test the files to make sure that they exist
2 if ('SomeCSVPath', 'SomeCLIXMLPath', 'SomeOtherCLIXMLPath' | Test-Path) {
3 # Import the File
4 $CSVFile = Import-Csv -LiteralPath 'SomeCSVPath'
5 # Import the CLIXML
6 if ($someCondition) {
7 if ($anotherCondition) {
8 $CLIXML = Import-Clixml -LiteralPath 'SomeCLIXMLPath'
9 }
10 else {
11 $CLIXML = Import-Clixml -LiteralPath 'SomeOtherCLIXMLPath'
12 }
13 Write-Host "Completed!"
14 Write-Output $CLIXML
15 }
16 else {
17 throw "Condition Failed. Stopping"
18 }
19 }
The PowerShell code is functional; however, its logic flow is difficult to follow. The
code doesn't follow a natural flow; the success output is buried inside a nested if statement. While
this is a simple point, lots of developers make this mistake, and the effect is immediate, producing
black-box code that everyone (including the author) will struggle to reread in a few months. Logic
should be structured like a waterfall, where inputs are passed in via parameters at the top,
and the process executes and falls to the bottom. The advantage of this design is that the code
is still readable and maintainable three months after deployment. Why? People generally
read a page from top to bottom when reading a book; that's how you should write code. It's that
simple.
To do this, you need to use some techniques to refactor the code. These are:
1. Return early: test the failure conditions first and return to the caller (or throw) as soon as one is met.
2. Invert conditions with -not so the disqualifying cases exit at the top rather than nesting the success path.
3. Use splatting²² to build the parameters once and select values conditionally, instead of duplicating cmdlet calls.
Using these techniques, the code can be refactored to follow a waterfall design:
Example 62: The code from Example 61 with clearer execution and logic flows
1 # Test if the Paths Exist. If not, return to the caller
2 if (-not ('SomeCSVPath', 'SomeCLIXMLPath', 'SomeOtherCLIXMLPath' | Test-Path)) {
3 return
4 }
5
6 # Import the CSV File
7 $CSVFile = Import-Csv -LiteralPath 'SomeCSVPath'
8
9 # Invert $someCondition
10 if (-not ($someCondition)) {
11 throw "Condition Failed. Stopping"
12 }
13
14 <#
15 We can use splatting to simplify the output here.
16 Multiple examples are used to demonstrate different
17 means of refactoring the code. These are:
18
19 1. Standard Execution (used in the example)
20 2. Subexpression (shown below)
21 3. Ternary operator (shown below)
22
23 #>
24 $Params = @{
25 LiteralPath = 'SomeCLIXMLPath'
26 }
27
28 # Standard Execution
29 if (-not ($anotherCondition)) { $Params.LiteralPath = 'SomeOtherCLIXMLPath' }
30
31 # Splat it in!
32 Import-Clixml @Params
You can use subexpressions within hashtable constructs to select values dynamically:
²²Microsoft. (2022, Mar. 19). About Splatting (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/about/about_splatting. [Accessed: May. 25, 2022].
$Params = @{
LiteralPath = $(
if ($anotherCondition) { 'SomeCLIXMLPath' }
else { 'SomeOtherCLIXMLPath' }
)
}
$Params = @{
LiteralPath = ($anotherCondition) ? 'SomeCLIXMLPath' : 'SomeOtherCLIXMLPath'
}
The ternary operator is preferred here since a single value is returned from the condition;
however, the overall logic is easier to follow in the refactored example above. This waterfall
design is implicitly 'true': the conditions that aren't required are filtered out first, and the
code cascades towards the result that qualifies as 'true'.
Function Execution
²³Microsoft. (2022, Mar. 19). About Operators (Microsoft.PowerShell.Core) - Ternary operator. Microsoft Docs. [Online]. Avail-
able: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_operators#ternary-operator–if-true–
if-false. [Accessed: May. 26, 2022].
²⁴Microsoft. (2022, Mar. 18). About If (Microsoft.PowerShell.Core) - Using the ternary operator syntax. Microsoft Docs. [On-
line]. Available: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_if#using-the-ternary-op-
erator-syntax. [Accessed: Jun. 04, 2022].
The waterfall approach also applies to implicitly 'false' function flows. These flows invert the
logic: the filtering conditions at the top qualify a 'true' result, and the bottom of the function
is 'false'. An example would be a function that performs a number of file checks, each of which
can qualify a 'true' result; once the cascade ends without a match, the implicit result is 'false'.
A real-world comparison for the implicitly 'false' approach is a network IP firewall. As a packet
is evaluated against the IP tables (firewall rules), it falls down through all the source/destination
rules. If no rule matches, the packet falls into a 'global block' at the bottom, where it's dropped.
This approach makes issues much easier to trace since only a limited set of inputs is allowlisted;
everything else is dropped.
7.6.1 JSON
JSON (JavaScript Object Notation) is an open standard format that uses human-readable text to
store and transmit object information. JSON natively encodes base object types such as strings
and integers, as well as lists, arrays and associative arrays (dictionary or hashtable). JSON is
commonly used as the HTTP body content type for REST APIs and also for application configuration files.
To serialize and deserialize JSON, use the ConvertTo-Json and ConvertFrom-Json cmdlets.
Please note these cmdlets don’t directly serialize or deserialize to a file, so they need to be piped
from Get-Content or to Set-Content.
Example 64: Data can only be deserialized from or serialized to JSON strings natively
1 #
2 # Read from a File
3 Get-Content -Path 'demo.json' | ConvertFrom-Json
4 #
5 # Write to a File
6 $PSObject | ConvertTo-Json | Set-Content -Path 'demo.json'
It's crucial to remember that when using Invoke-RestMethod, the request body needs to be
serialized into JSON. The response is automatically deserialized into a [PSCustomObject] when
the JSON Content-Type header is present (by default, the content type is set implicitly).
Invoke-WebRequest doesn't deserialize the response body since it's used to interact with
web pages or web services rather than explicit REST services. ConvertTo-Json can serialize all
object types, while ConvertFrom-Json only deserializes JSON into a [PSCustomObject].
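As a brief sketch (the URI and payload are placeholders), the request body is serialized with
ConvertTo-Json before being sent, and the JSON response comes back as a [PSCustomObject]:

$Body = @{ Name = 'Demo'; Enabled = $true } | ConvertTo-Json

$Params = @{
    Uri         = 'https://example.com/api/items'
    Method      = 'POST'
    ContentType = 'application/json'
    Body        = $Body
}
# The deserialized response can be used like any other object
$Response = Invoke-RestMethod @Params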
Example 65: Serializing and deserializing JSON data, and data type limitations
1 #
2 # Example 1: Serialize as JSON
3 $Object = [PSCustomObject]@{
4 Property = 'Value'
5 AnotherProperty = 'AnotherValue'
6 }
7 # Store the output into a variable
8 $JSONString = $Object | ConvertTo-Json
9 $JSONString
10 #
11 # Example 2: Deserialize JSON
12 $JSONString | ConvertFrom-Json
13
14 # Example 3: Explore Object Type
15 $Process = Get-Process | Select-Object -First 1
16 $Process.GetType() | Select-Object Name, BaseType
17 #
18 # Example 4: Using the Process Variable, let's serialize it
19
20 $JSONString = $Process | ConvertTo-Json
21 $JSONString.Length
22 #
23 # Example 5: Deserialize $JSONString and check the Object Type
24 ($JSONString | ConvertFrom-Json).GetType() | Select-Object Name, BaseType
# Example 1:
{
"Property": "Value",
"AnotherProperty": "AnotherValue"
}
# Example 2:
Property AnotherProperty
-------- ---------------
Value AnotherValue
# Example 3:
Name BaseType
---- --------
Process System.ComponentModel.Component
# Example 4:
5722
# Example 5:
Name BaseType
---- --------
PSCustomObject System.Object
7.6.2 YAML
YAML (YAML Ain't Markup Language) is a markup language that's used to store configuration
for applications. It's used by configuration management solutions (such as Ansible and Datum for
DSC) to store complex machine configurations in a human-readable syntax. YAML natively encodes
base object types such as strings and integers, as well as lists, arrays and associative arrays
(dictionary or hashtable). It's simpler than JSON, using Python-style indentation to represent
nesting, and it also supports a more compact flow style that uses square brackets for lists and
curly braces for associative arrays.
PowerShell has no native YAML serialization and deserialization (compared to JSON), so third-
party modules are used. The module powershell-yaml²⁵ from the PowerShell Gallery is used in
the following examples.
²⁵https://www.powershellgallery.com/packages/powershell-yaml
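The listing for this section appears to be missing; a minimal sketch using the powershell-yaml
module's ConvertTo-Yaml and ConvertFrom-Yaml cmdlets, matching the output below, is:

# Install-Module powershell-yaml -Scope CurrentUser   # one-time setup
Import-Module powershell-yaml

# Example 1: Serialize an object to YAML
$Object = [PSCustomObject]@{
    Property        = 'Value'
    AnotherProperty = 'AnotherValue'
}
$YamlString = $Object | ConvertTo-Yaml
$YamlString

# Example 2: Deserialize the YAML string
$Deserialized = $YamlString | ConvertFrom-Yaml
$Deserialized

# Example 3: Deserialization produces a hashtable
$Deserialized.GetType() | Select-Object Name, BaseType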
# Example 1:
Property: Value
AnotherProperty: AnotherValue
# Example 2:
Name Value
---- -----
AnotherProperty AnotherValue
Property Value
# Example 3:
Name BaseType
---- --------
Hashtable System.Object
7.6.3 XML
XML (eXtensible Markup Language) is a markup language format for storing and transmitting
object data. XML is designed to be ‘semi-human’ and machine-readable. XML has largely been
superseded by newer markup languages (such as JSON and YAML). XML can be parsed natively
within PowerShell. XML is loaded by reading the XML content as a string and then typecasting
the string to the [XML] class.
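The listing appears to be missing here. A minimal sketch (the file name and a root element of
<resources> are assumptions based on the output below) is:

# Read the file content as a single string and typecast it to [XML]
[xml]$Document = Get-Content -LiteralPath 'demo.xml' -Raw
$Document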
xml resources
--- ---------
version="1.0" encoding="utf-8" resources
You can access the raw serialized string using the OuterXml property:
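The listing appears to be missing; in a sketch, it amounts to:

# Returns the document as its raw XML string
$Document.OuterXml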
XML’s key advantage over newer markup languages is its native ability to use XPATH to search
the XML dataset. There are two methods of searching XML, using the object methods and using
Select-Xml:
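The listings appear to be missing here. A sketch of both approaches, assuming a simple note-style
document such as the one below, is:

[xml]$Document = @'
<note>
  <to>Michael</to>
  <from>George</from>
  <type>Event</type>
  <body>Meet you at the park.</body>
</note>
'@

# Method 1: Object (dot) notation
$Document.note
$Document.note.to

# Method 2: Select-Xml with an XPath pattern
$Result = Select-Xml -Xml $Document -XPath '//to'
$Result
$Result.Node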
# Example 1:
to from type body
-- ---- ---- ----
Michael George Event Meet you at the park.
# Example 2:
#text
-----
Michael
# Example 1:
Node Path Pattern
---- ---- -------
to InputStream //to
# Example 2:
#text
-----
Michael
7.6.4 CSV
CSV (Comma-Separated Values) is a standard for storing simple data structures (containing rows
and columns) using the comma ‘,’ as a delimiter. Other delimiter-separated values (DSV) are
also common, and in PowerShell, the delimiter is adjustable. A drawback of CSV is that objects
with nested objects can’t be serialized, limiting the scope of the data. For more complex data
storage, CLIXML is preferred. In PowerShell, files (Import-Csv and Export-Csv) and strings
(ConvertTo-Csv and ConvertFrom-Csv) are used to serialize and deserialize CSV and DSV data.
39 [PSCustomObject]@{
40 Property = 'Second Value'
41 AnotherProperty = 'Second Another Value'
42 }
43 )
44 $ObjectList | ConvertTo-Csv -Delimiter '>'
45 #
46 # Example 7: De-Serializing Using a Different Delimiter
47 $String = @'
48 "Property">"AnotherProperty"
49 "Value">"Another Value"
50 "Second Value">"Second Another Value"
51 '@
52 $String | ConvertFrom-Csv -Delimiter '>'
# Example 1:
"Property","AnotherProperty"
"Value","Another Value"
"Second Value","Second Another Value"
# Example 2:
"ExitCode","ExitTime","EnableRaisingEvents","Handles","Handle","HasExited", ...
"HandleCount","Id","Name"
,,"False","88",,,"88","6996","AggregatorHost"
,,"False","550","1908","False","550","33056","ApplicationFrameHost"
# Example 3:
Id Name Handles
-- ---- -------
6996 AggregatorHost 88
33056 ApplicationFrameHost 550
# Example 4:
Property AnotherProperty
-------- ---------------
Value Another Value
Second Value Second Another Value
# Example 5:
"Property">"AnotherProperty"
"Value">"Another Value"
"Second Value">"Second Another Value"
# Example 6:
Property AnotherProperty
-------- ---------------
Value Another Value
Second Value Second Another Value
7.6.5 CLIXML
Did you know? CLIXML is the serialization format used to transmit PowerShell session
content when using PowerShell Remoting.
For complex objects that aren't a base type, PowerShell won't attempt to rehydrate the original
type when deserializing; instead, they're placed in a PSObject (a property bag).
Serializing PowerShell secure strings²⁶ as CLIXML can be problematic since it uses the
Windows DPAPI (user-based) to encrypt the secure string. If another user or machine
deserializes the CLIXML, the process fails.
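The listings for this section appear to be missing. A sketch of the CLIXML round trip that lines
up with the output below (the file name is illustrative) is:

# Export a PSCustomObject to a CLIXML file (produces no output)
$Object = [PSCustomObject]@{
    Property        = 'Value'
    AnotherProperty = 'AnotherValue'
}
$Object | Export-Clixml -LiteralPath 'demo.clixml'

# Import it again and inspect the type name
$Imported = Import-Clixml -LiteralPath 'demo.clixml'
$Imported.GetType().Name

# The property values are preserved
$Imported

# View the raw serialized CLIXML string
Get-Content -LiteralPath 'demo.clixml' -Raw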
# Example 1:
(no output)
# Example 2:
PSCustomObject
# Example 3:
Property AnotherProperty
-------- ---------------
Value AnotherValue
# Example 4:
PSCustomObject
²⁶https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.security/convertto-securestring
# Example 5:
<Objs Version="1.1.0.1"
xmlns="http://schemas.microsoft.com/powershell/2004/04">
<Obj RefId="0">
<TN RefId="0">
<T>System.Management.Automation.PSCustomObject</T>
<T>System.Object</T>
</TN>
<MS>
<S N="Property">Value</S>
<S N="AnotherProperty">AnotherValue</S>
</MS>
</Obj>
</Objs>
In the example listed, a PSCustomObject was serialized and deserialized, preserving the object
type. In the following example, a complex object is used:
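The listing appears to be missing; a sketch that matches the output below (the file name is
illustrative) is:

# Export a live process object (produces no output)
$Process = Get-Process | Select-Object -First 1
$Process | Export-Clixml -LiteralPath 'process.clixml'

# The original object type
$Process.GetType().Name

# Import the serialized object
$Imported = Import-Clixml -LiteralPath 'process.clixml'
$Imported

# The imported object is now a property bag
$Imported.GetType().Name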
# Example 1:
(no output)
# Example 2:
Process
# Example 3:
NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
------ ----- ----- ------ -- -- -----------
0 0.00 39.09 0.29 89 78 node
# Example 4:
PSObject
In the example, the initial object type was [Process]; however, after exporting and importing
from CLIXML, it’s now a [PSObject].
Remember, complex objects won't retain their object types after serialization/deserialization.
• Use known data structures such as XML, JSON, YAML, CSV, and CLIXML. Never create a
markup language unless it’s necessary.
• Always try to serialize/deserialize using out-of-the-box cmdlets or native modules.
• Never create or adjust data structures by changing the serialized string.
• Use JSON and YAML to store simple human-readable data. Try not to use XML; JSON and
YAML are better suited.
• Use CSV, CLIXML, and SQL for non-human-readable data structures. Use CSV for simple
data sheets, CLIXML for complex nested objects, and SQL for large, complex object
structures.
• Use PowerShell Secrets Management to store passwords and avoid exporting secure strings
in CLIXML.
²⁷https://techcommunity.microsoft.com/t5/microsoft-365-pnp-blog/introduction-to-json/ba-p/2049369
²⁸https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_quoting_rules
²⁹https://devblogs.microsoft.com/scripting/understanding-advanced-functions-in-powershell/
³⁰https://devblogs.microsoft.com/scripting/introduction-to-advanced-powershell-functions/
³¹https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_functions_advanced_parameters
³²https://learn.microsoft.com/en-us/powershell/scripting/developer/cmdlet/cmdlet-attributes
³³https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/export-clixml
³⁴https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/import-clixml
³⁵https://learn.microsoft.com/en-us/dotnet/standard/base-types/composite-formatting
³⁶https://learn.microsoft.com/en-us/powershell/scripting/lang-spec/chapter-06
8. Advanced Conditions
PowerShell has several advanced conditions available to you for refactoring your code. You’ll
use familiar conditional operators in most of your scripts, but there are more advanced features
you may not have explored in depth. This chapter will explore the following topics:
# Example 1:
True
# Example 2:
True
PowerShell also features case-sensitive operators with names having a ‘c’ prefix.
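The listing appears to be missing; a sketch that matches the output below is:

# Example 1: -ceq is case-sensitive, so this returns False
'value' -ceq 'Value'

# Example 2: -clike is also case-sensitive
'value' -clike 'V*'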
# Example 1:
False
# Example 2:
False
The following table lists each case-sensitive operator, its description, and an example:¹
¹Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators. [Accessed: Mar. 30,
2022].
Explicit case-insensitive operators are rarely used since PowerShell comparisons are case-
insensitive by default. They have names prefixed with an 'i', similar to the 'c' prefix for case-
sensitive operators.²
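A sketch matching the output below:

# The 'i'-prefixed operators behave like the default (case-insensitive) operators
'value' -ieq 'Value'
'value' -ilike 'V*'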
# Example 1:
True
True
The switch statement tests a value against multiple conditions.³
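The listing appears to be missing; a minimal sketch that matches the output below is:

$value = 3
switch ($value) {
    1 { 'one' }
    2 { 'two' }
    3 { 'three' }
}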
three
The switch statement accepts optional parameters (-Regex, -Wildcard, -Exact and
-CaseSensitive) that are used across two parameter sets. The -Regex, -Wildcard and -Exact
parameters are mutually exclusive, controlling how [string] values are matched. The
optional -CaseSensitive parameter enables case-sensitive matching.
²Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators. [Accessed: Mar. 30,
2022].
³Microsoft. (2022, Mar. 19). About Switch (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/about/about_switch. [Accessed: Mar. 30, 2022].
Match
This matches
This also matches
Not skipped
Also not Skipped
Match
15
16 # Example 2: With -CaseSensitive:
17 $value = 'value'
18 switch -CaseSensitive ($value) {
19 'vAlue' {
20 'Wrong case'
21 }
22 'Value' {
23 'Also wrong case'
24 }
25 'value' {
26 'Match'
27 }
28 }
# Example 1:
Wrong case
Also wrong case
Match
# Example 2:
Match
23 { $null } {
24 'This does not match'
25 }
26 }
This matches
This also matches
8.2.6 Default
The default statement is an optional reserved statement that runs when no other condition is
met.⁶
⁶Microsoft. (2022, Mar. 19). About Switch (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/about/about_switch. [Accessed: Mar. 30, 2022].
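The listing appears to be missing; a sketch matching the output below is:

# Example 1: No condition matches, so the default block runs
switch ('other') {
    'success' { 'success' }
    default   { 'default' }
}

# Example 2: A condition matches, so the default block is skipped
switch ('success') {
    'success' { 'success' }
    default   { 'default' }
}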
# Example 1:
default
# Example 2:
success
# Example 1:
this is one
this is two
this is three
default
# Example 2:
this is one
# Example 3:
this is one
this is three
default
Here we replace the switch statement with equivalent logic using foreach and if statements:
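The listing appears to be missing; a minimal sketch of that equivalent logic (the values and
condition are illustrative) is:

$values = 'one', 'two', 'three'
foreach ($value in $values) {
    if ($value -eq 'two') {
        'matched'
    }
}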
matched
Loop control statements are applied to control the outcome of the execution. Since switch
functions in the same way as a loop statement, break and continue change its behavior. The
break statement exits the program loop (the switch statement in this case), skipping all future
items passed into the statement.⁷ The continue statement stops the current item and jumps to the
top of the innermost loop, processing the next item passed into the switch statement.
Example 13: Using the break and continue statements with switch
1 # Example 1: Use of a break statement in a singular array
2 $string = 'value'
3 switch ($string) {
4 'value' {
5 'This matches'
6 # Add the break statement
7 break
8 }
9 # Note that the matching condition is the same as
10 # the previous condition. Without the break statement,
11 # this wouldn't be skipped.
12 'value' {
13 'This also matches'
14 }
15 'values' {
16 'This does not match'
17 }
18 }
19
20 # Example 2: Use of a continue statement in a singular array
⁷Microsoft. (2022, Mar. 19). PowerShell 101: Chapter 6 - Flow control - Break, Continue, and Return. Microsoft Docs. [Online]. Avail-
able: https://learn.microsoft.com/en-us/powershell/scripting/learn/ps101/06-flow-control#break-continue-and-return. [Accessed: Mar. 31,
2022].
85 'values' {
86 'This does not match'
87 }
88 }
# Example 1:
This matches
# Example 2:
This matches
# Example 3:
This matches
# Example 4:
This matches
This matches
The switch statement can match on different object-types. In the following example, different
object-types are tested from an array:
Example 14: Using expressions with a switch statement to change behavior based on type
1 # Parsing an Array
2
3 $values = @(
4 # HashTable
5 @{
6 Key = 'HashTable Value'
7 },
8 # PSCustomObject
9 [PSCustomObject]@{
10 FirstProperty = 'Property Value'
11 SecondProperty = 'Some other property value'
12 }
13 # DateTime
14 ([datetime]::Now)
15 )
16
17 switch ($values) {
18 { $_ -is [Hashtable] } {
19 $_.Key
20 }
21 { $_ -is [PSCustomObject] } {
22 $value = $_
23 $_ | Get-Member -MemberType NoteProperty | ForEach-Object {
24 $value."$($_.Name)"
25 }
26 }
27 { $_ -is [DateTime] } {
28 $_.Date.ToString()
29 }
30 }
HashTable Value
Property Value
Some other property value
11/10/2021 12:00:00 AM
36 )
37 $arr -is [string[]]
The -isnot operator is the inverse of -is. It's used to test that the object instance is not of the
specified type.⁹
⁹Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core) - Type comparison. Microsoft Docs.
[Online]. Available: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_opera-
tors#type-comparison. [Accessed: Mar. 31, 2022].
In both examples below, the output is the same whether using explicit typecasting or the -as
operator:
In the examples below, "String" can’t be typecast to a [DateTime], but using -as suppresses
the error.
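The listings appear to be missing; a sketch of both cases (the date string is illustrative) is:

# Explicit typecasting and -as give the same result for a valid conversion
[datetime]'2022-01-01'
'2022-01-01' -as [datetime]

# An invalid conversion throws an error when typecast:
# [datetime]'String'   # Cannot convert value "String" to type "System.DateTime"

# ...whereas -as suppresses the error and returns $null
'String' -as [datetime]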
¹⁰Microsoft. (2022, Mar. 19). About Type Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_type_operators. [Accessed: Mar. 31, 2022].
# Example 1:
Red
Green
Blue
Yellow
Black
White
# Example 2:
0 Red
1 Green
2 Blue
3 Yellow
4 Black
5 White
# Example 3:
Green
Red
Red
Basic enums can only represent a single label at a time (for example, [Colors]::Green), which is
fine for constant use cases. Ideally, from the example above, multiple colors should be allowed to
be selected. We can change the functionality by using a 'flag enum' (or bitflag). Within a flag
enum, the labels must use powers of two (for example 1, 2, 4, 8, 16, …) to work with bitwise
operators.
0 0
1 Red
2 Green
3 Red, Green
4 Blue
5 Red, Blue
6 Green, Blue
7 Red, Green, Blue
8 Yellow
9 Red, Yellow
10 Green, Yellow
11 Red, Green, Yellow
12 Blue, Yellow
13 Red, Blue, Yellow
14 Green, Blue, Yellow
15 Red, Green, Blue, Yellow
16 Black
Since version 5.0, PowerShell supports the enum statement, enabling enums to be declared
natively without Add-Type.¹⁴ Syntax:
1 enum <enum-name> {
2 <label> [= <int-value>]
3 }
The enum-name follows directly after the enum statement, describing the enum type, followed
by the labels. Use the [Flags()] attribute to create flag enums, as with Add-Type. Note in the
example below how no commas are required at the end of each label. In the example
below, the Colors enum used previously is redefined natively:
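The listing appears to be missing; a sketch of the Colors flag enum declared with the enum
statement is:

[Flags()] enum Colors {
    Red    = 1
    Green  = 2
    Blue   = 4
    Yellow = 8
    Black  = 16
    White  = 32
}

# Labels can be combined because the values are powers of two
[Colors]3        # Red, Green
[Colors]::White  # White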
Unlike enums created with Add-Type, you can update native enums with new values.
Enum Reference
For more information, refer to the about Enum¹⁵ page at Microsoft Docs.
¹⁴Microsoft. (2021, Sep. 28). About Enum. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/powershell/mod-
ule/microsoft.powershell.core/about/about_enum. [Accessed: Mar. 31, 2022].
¹⁵https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_enum
• Base-10 [1,10,100,1000]: Each position in this block increments the power of 10 (10⁰, 10¹, 10²,
and 10³ in this case). Note: Any number raised to the zeroth power equals one.
• Base-2 [1,2,4,8,16,32,64,128,256,512,1024]: Each position in this block increments the power
of 2 (2⁰, 2¹, 2², 2³, … 2¹⁰). One disadvantage of base-2 is that the sequence of digits needed to
represent a number is usually longer than in base-10. Within base-2, each digit is called a
‘bit’, and a grouping of eight bits is called a ‘byte’. Binary numbers are typically formatted
into groups of 4 digits called ‘nibbles’, making them easier to read. Bit groups having less
than 4 bits are prefix-padded with zeros.
For example: The 7-digit binary number 1000001 is formatted as 2 nibbles: 0100 0001.
The Least Significant Bit (LSB) is the position that represents the binary one’s place of an integer.
The Most Significant Bit (MSB) is the “left-most” binary digit available for an integer type. For
example:
MSB LSB
0 1 0 0 0 0 0 1
Below are two tables representing the 6 least significant places for base-10 and base-2 (in
decimal):
base-10:
100000 10000 1000 100 10 1
base-2:
32 16 8 4 2 1
For example, the decimal number “One-Thousand and Forty-Two” is represented in base-10 as:
1000 100 10 1
1 0 4 2
This same value is represented: 1042 decimal = 0100 0001 0010 binary.
(Figure: AND logic gate)
The AND logic gate requires both inputs to be ‘1’ ($true) to output ‘1’. All other combinations
will be ‘0’ ($false).
For example:
1001 - 9
0101 - 5 (AND)
----
0001 - 1
(Figure: OR logic gate)
The OR logic gate requires only one input to be ‘1’ ($true) to output ‘1’.
For example:
1001 - 9
0101 - 5 (OR)
----
1101 - 13
(Figure: NOT logic gate)
The NOT logic gate inverts its single input: an input of '1' outputs '0', and an input of '0' outputs '1'.
Input 1 Output
1 0
0 1
For example:
1001 - 9 (NOT)
----
0110 - 6
(Figure: XOR logic gate)
The XOR (exclusive-OR) logic gate is similar to an OR gate; however, if both inputs are ‘1’, the
output is ‘0’.
For example:
1001 - 9
0101 - 5 (XOR)
----
1100 - 12
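The listings that produced the following results appear to be missing. A sketch of the bitwise
operations being described, using $Value1 = 72 and $Value2 = 101 (the second value is inferred
from the results shown), is:

$Value1 = 72    # 0100 1000
$Value2 = 101   # 0110 0101

$Value1 -band $Value2   # 64
$Value1 -bor  $Value2   # 109
$Value1 -bxor $Value2   # 45
-bnot $Value1           # -73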
64
The following table shows the same operation using binary (base-2):
Value 1 Value 2 Output
0 0 0
1 1 1
0 1 0
0 0 0
1 0 0
0 1 0
0 0 0
0 1 0
109
The following table shows the same operation using binary (base-2):
Value 1 Value 2 Output
0 0 0
1 1 1
0 1 1
0 0 0
1 0 1
0 1 1
0 0 0
0 1 1
45
The following example will perform the same calculation in binary (base-2):
Value 1 Value 2 Output
0 0 0
1 1 0
0 1 1
0 0 0
1 0 1
0 1 1
0 0 0
0 1 1
-73
The following example will perform the same calculation in binary (base-2):
Value 1 Output
0 1
1 0
0 1
0 1
1 0
0 1
0 1
0 1
• The output is 1011 0111 (base-2), which is 183 (base-10), rather than the expected result
of –73 (base-10).
Why? Technically, the output is correct. The table expresses an unsigned binary NOT operation,
while PowerShell's -bnot operator returns a signed integer (in this case, a 32-bit [Int]/[Int32])
for [Int]$Value1.
Signing describes an integer type's ability to represent negative numbers. [Int] values represent
a number in 32 bits (or 4 bytes) using two's complement, where the leading bit (1000 0000 ...)
denotes a positive or negative number. For 8 bits, the leading bit is effectively equivalent to -128.
Signed:
-128 64 32 16 8 4 2 1
1 0 1 1 0 1 1 1
−128 + 32 + 16 + 4 + 2 + 1 = −73
Unsigned:
128 64 32 16 8 4 2 1
1 0 1 1 0 1 1 1
128 + 32 + 16 + 4 + 2 + 1 = 183
So, repeating the same example as before, extended to a signed 32-bit integer:
• Value 1 = 0000 0000 0000 0000 0000 0000 0100 1000 (base-2) [72 (base-10) represented in
binary (base-2) as a signed integer]
• Perform a NOT inversion with the input:
Value 1 Output
0 1
0 1
0 1
0 1
… …
0 1
0 1
0 1
0 1
0 1
1 0
0 1
0 1
1 0
0 1
0 1
0 1
• Output is 1111 1111 1111 1111 1111 1111 1011 0111 (base-2), which is –73 (base-10).
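The bit-shift listings appear to be missing; a sketch matching the results below is:

$Value1 = 72      # 0100 1000

$Value1 -shl 2    # shift left twice:  288
$Value1 -shr 2    # shift right twice: 18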
288
The following example will perform the same bit-shift in binary (base-2):
• Value 1 = 0000 0000 0000 0000 0000 0000 0100 1000 (base-2) [72 (base-10) represented in
binary (base-2) as a signed integer]
• Perform a bit shift to the left 2 times.
• Output will be: 0000 0000 0000 0000 0000 0001 0010 0000 (base-2) [288 (base-10) represented
in binary (base-2) as a signed integer]. Note how the bits have been shifted to the left.
18
Note: Integer types can’t represent floating-point numbers. Bits shifted past the least significant
bit (LSB) or binary one’s place are lost.
The following example will perform the same bit-shift in binary (base-2):
• Value 1 = 0000 0000 0000 0000 0000 0000 0100 1000 (base-2) [72 (base-10) represented in
binary (base-2) as a signed integer]
• Perform a bit shift to the right 2 times.
• Output will be: 0000 0000 0000 0000 0000 0000 0001 0010 (base-2) [18 (base-10) represented
in binary (base-2) as a signed integer]. Note how the bits have been shifted to the right this
time.
# Example 1:
Red, Blue
# Example 1a:
5
Note the integer output is 5. Red is equal to ‘1’, and Blue is equal to ‘4’. The -bor operator
combined the two integers:
In the following example, the -band operator will be used to search the value for ‘Red’ and
‘Yellow’.
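The listings appear to be missing; a sketch of the -bor combination and the -band search
(assuming the Colors flag enum from earlier) is:

# Combine two flags with -bor
$value = [Colors]::Red -bor [Colors]::Blue
$value          # Red, Blue
[int]$value     # 5

# Search the combined value for Red or Yellow with -band
$value -band ([Colors]::Red -bor [Colors]::Yellow)          # Red
[int]($value -band ([Colors]::Red -bor [Colors]::Yellow))   # 1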
# Example 1:
Red
# Example 1a:
1
Note the integer output is 1 ('Red'): only the flag actually present in the value is returned. In the
following table, the -band calculation is performed:
True
In this example, the comparison will return $false since the string ‘hello’ is not present:
¹⁶Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Avail-
able: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators#matching-op-
erators. [Accessed: Mar. 31, 2022].
False
Wildcard characters are used to represent one or more characters within a string. These are:
* (zero or more characters), ? (exactly one character), and [ ] (a set or range of characters).
To match a wildcard character literally, escape it with a single backtick (`).
# Example 1:
True
Different configurations can be used to fit different scenarios. The table below demonstrates the
different configurations of the wildcards:¹⁷
¹⁷Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Avail-
able: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators#matching-op-
erators. [Accessed: Mar. 31, 2022].
The -like operator can also be applied to lists and arrays, where the array is defined on the left
side. Instead of a boolean, the operator will return any matching values in the array.
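The listing appears to be missing; a sketch (the wildcard pattern is illustrative) matching the
output below is:

$arr = 'string', 'bees', 'timber'
$arr -like '*b*'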
bees
timber
If the array or list doesn’t contain a value, -like will return an empty array, which can be tested
using -not ($arr -like '*is not in list*').
# Example 1
True
# Example 2
0
# Example 3
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array
The -notlike operator is the inverse of the -like operator.¹⁸ This also applies to matching
against lists and arrays, where -notlike will return an array of strings that don’t match.
Example 35: -notlike provides the inverse results for scalars and arrays
1 $arr = 'string','bees','timber'
2 # Output 1
3 $arr -notlike '*is not in list*'
4 # Output 2
5 ($arr -notlike '*is not in list*').Count
# Output 1
string
bees
timber
# Output 2
3
¹⁸Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Avail-
able: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators#matching-op-
erators. [Accessed: Mar. 31, 2022].
The -match and -notmatch operators use regular expressions to search for a pattern within a
string.¹⁹ The left side of the expression contains the string, whereas the right side contains the
regular expression. When an array is passed on the left side, -match functions similarly to -like,
outputting each matching value in the array.
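The listings appear to be missing; a sketch (the strings and pattern are illustrative) is:

# Example 1: A scalar string returns a boolean
'this is a string' -match 'is a string'

# Example 2: An array returns the matching elements instead
'this is a string', 'this also is a string', 'unrelated' -match 'is a string'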
# Example 1:
True
# Example 2:
this is a string
this also is a string
(Figure: the -in operator)
True
Conversely, with the -contains operator the list or array sits on the left side, and the operator
tests whether it contains the value on the right.
(Figure: the -contains operator)
True
True
True
True
True
True
True
True
The -in and -contains operators simplify code within conditions by removing the need to use
a Where-Object within the logic.
These operators are limited to comparison with members of a list or array. When interrogating
a collection of objects, use an expression to store the desired property values from each object
into an array.
In the following example, the Name property array is returned from Get-Process, allowing the
use of -contains.
Example 41: Checking an array of process names for a value with -contains
1 # Fetch the Processes and Select the Name property
2 if ((Get-Process).Name -contains 'pwsh') {
3 # Do something
4 }
The -notin and -notcontains operators test the inverse of -in and -contains. They are used
to test whether an object doesn't exist in a list or array.
8.9 -replace
Syntax: <input> -replace <regular-expression>, <substitute>
The -replace operator is a regex-enabled string operator, similar to the String.Replace()
method.²¹
For example, the -replace operator can perform a basic string replacement:
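The listing appears to be missing; based on the description that follows, it amounts to:

$Name = 'Hello! My name is: Ben!'
$Name -replace 'Ben', 'Michael'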
Reviewing the example, the variable $Name is declared as: 'Hello! My name is: Ben!'. Using
the -replace operator, Ben is changed to Michael.
Unlike String.Replace(), the -replace operator uses regex to match patterns to replace. In
this example, the first two words ‘X marks’ are replaced in ‘Lets find the spot!’
²¹Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core) - Replacement operator. Microsoft
Docs. [Online]. Available: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_op-
erators#replacement-operator. [Accessed: Mar. 31, 2022].
PowerShell is cool!
When passing a string that contains regex special characters as the pattern, use the
[Regex]::Escape() method to escape those characters.
Syntax: <condition> ? <if-true> : <if-false>
1. The condition (condition). The condition is wrapped in parentheses () (like the if-
statement) and followed by a question mark.
2. The true expression (precedes the colon). The true expression denotes the output if the
condition evaluates to $true.
3. The false expression (follows the colon). The false expression denotes the output if the
condition evaluates to $false.
²²Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators. [Accessed: Mar. 30,
2022].
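The listings appear to be missing; a sketch of the ternary operator (the tested paths are
placeholders) is:

# Example 1: The condition is $true, so the first expression is returned
(Test-Path -LiteralPath $PSHOME) ? 'isPresent' : 'notPresent'

# Example 2: The condition is $false, so the second expression is returned
(Test-Path -LiteralPath 'C:\DoesNotExist') ? 'isPresent' : 'notPresent'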
# Example 1:
isPresent
# Example 2:
notPresent
# Example 3:
isPresent
Syntax: <expression> ?? <alternative-expression>
The null-coalescing operator ?? tests a value or expression and returns an alternative value if
$null.²³
²³Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators. [Accessed: Mar. 30,
2022].
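The listings appear to be missing; a minimal sketch matching the output below is:

# Example 1: The left side isn't $null, so it's returned
'not-null' ?? 'is null'

# Example 2: The left side is $null, so the alternative is returned
$null ?? 'is null'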
# Example 1:
'not-null'
# Example 2:
'is null'
Example 49: The traditional approach for handling null cmdlet results
1 $params = @{
2 URI = 'https://example.com/'
3 ErrorAction = 'SilentlyContinue'
4 Method = $(
5 if ($method -ne $null) { $method }
6 else { 'GET' }
7 )
8 }
9
10 $result = Invoke-WebRequest @params
11 if ($null -eq $result) {
12 $result = @{error = 'no response'}
13 }
Example 50: Handling null cmdlet results concisely with the null-coalescing operator
1 $params = @{
2 URI = 'https://example.com/'
3 ErrorAction = 'SilentlyContinue'
4 Method = $method ?? 'GET'
5 }
6
7 $result = Invoke-WebRequest @params ?? @{error = 'no response'}
It’s important to remember that empty [String] objects and those with only white space
characters are not $null and the right side will not be evaluated in these cases. This is also
the case for values that evaluate to false.
Example 51: Empty strings and false values don’t count as null conditions
1 # Example 1:
2 '' ?? 'is null'
3
4 # Example 2:
5 ' ' ?? 'is null'
6
7 # Example 3:
8 $false ?? 'is null'
# Example 1:
# Example 2:
# Example 3:
False
The null-coalescing assignment operator is used to assign an expression to a variable only if the
variable value is $null.²⁴
²⁴Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators. [Accessed: Mar. 30,
2022].
Example 53: Assigning values only to null variables with the null-coalescing assignment operator
1 # Example 1: $var will not be $null
2 $var = 'not-null'
3 $var ??= 'is null'
4 $var
5
6 # Example 2: $var will be $null
7 $var = $null
8 $var ??= 'is null'
9 $var
10
11 # Example 3: $var will be Empty String
12 $var = ''
13 $var ??= 'is null'
14 $var
# Example 1:
not-null
# Example 2:
is null
# Example 3:
Example 54: The traditional approach for assigning alternative values to null variables
1 $var = Do-Something
2 if ($null -eq $var) {
3 $var = 'something else'
4 }
Example 55: Assigning a null-conditional alternative value with the null-coalescing assignment operator
1 $var = Do-Something
2 $var ??= 'something else'
Null-conditional member access operators allow you to select object properties (?.)
or array items (?[]) conditionally, returning $null if the object or item is $null.²⁵
Because the character '?' is permitted within variable names, the variable must be wrapped in
curly braces, for example ${var}?.Property. Subexpressions are also permitted: $($var)?.Method().
²⁵Microsoft. (2022, Mar. 19). About Comparison Operators (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators. [Accessed: Mar. 30,
2022].
# Example 1:
test
# Example 2:
strings
# Example 3:
toast
# Example 4:
8.13.1 Examples of ?.
The following examples demonstrate the use of: ‘?.’:
Example 57: Accessing a valid object property with and without the null-conditional operator
1 # Example 1: Create an Object and Select a Property
2 # Nothing is changed here.
3
4 $obj = [PSCustomObject]@{
5 Property = 'Value'
6 }
7 $obj.Property
8
9 # Example 2: Create an Object and Select the Property
10 # using a Null Conditional Operator
11
12 $obj = [PSCustomObject]@{
13 Property = 'Value'
14 }
15 # Note that the variable needs to be wrapped in curly braces:
16 ${obj}?.Property
# Example 1:
Value
# Example 2:
Value
Example 58: Trying to access an invalid property with the null-conditional operator
1 # Example 1: Create an Object. However, the property is $null
2 $obj = [PSCustomObject]@{
3 Property = $null
4 }
5 # Note that the property returns nothing
6 ${obj}?.Property
7 # Let's test to make sure it's null
8 ${obj}?.Property -eq $null
9
10 # Example 2: Create an Object and select a different property:
11 $obj = [PSCustomObject]@{
12 Property = 'value'
13 }
14 # Note that the property returns nothing
15 ${obj}?.AnotherProperty
16 # Let's test to make sure it's null
17 ${obj}?.AnotherProperty -eq $null
# Example 1:
True
# Example 2:
True
The use cases for this operator are unclear since PowerShell 5.1 already casts empty object
properties to $null (null-soaking). However, attempting to access null properties in strict mode
throws a reference error.²⁶ This operator therefore has some uses in strict mode.
Example 59: The null-conditional member access operator is useful in strict mode
1 # Example 1: Null-soaking in default mode
2 $obj = [PSCustomObject]@{
3 Property = $null
4 }
5 $obj.Property.SecondProperty.ThirdProperty -eq $null
6
7 # Example 2: Reference errors in strict mode
8 Set-StrictMode -Version 2
9 $obj.Property.SecondProperty -eq $null
10
11 # Example 3: Safe access to null properties with ?.
12 $obj.Property?.SecondProperty -eq $null
13
14 # Example 4: The ?. operator doesn't protect against nonexistent properties
²⁶Microsoft. (2022, Mar. 08). Set-StrictMode (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/set-strictmode. [Accessed: Apr. 03, 2022].
15 # in strict mode
16 ${obj}?.AnotherProperty -eq $null
17
18 Set-StrictMode -Off
# Example 1:
True
# Example 2:
PropertyNotFoundException: The property 'SecondProperty' cannot be found
on this object. Verify that the property exists.
# Example 3:
True
# Example 4:
PropertyNotFoundException: The property 'AnotherProperty' cannot be found
on this object. Verify that the property exists.
# Example 1:
2
# Example 2:
2
# Example 3:
InvalidOperation: Cannot index into a null array.
# Example 4:
True
# Example 5:
True
PowerShell loop labeling adds a label to a loop construct, which is used to control the flow of
the loop.²⁷ ²⁸ Labels are defined at the start of a loop construct, prefixing the label with a colon
:.
Once the loop label has been defined, break/continue statements can reference the label to exit
at that loop level. This is useful when managing nested loop statements, since there is less need
for complex branching logic. In the following example, the nested loop statement will exit the
parent loop once the counter reaches 3:
²⁷Microsoft. (2014, May. 08). PowerShell Looping: Advanced Break. Microsoft Dev Blogs. [Online]. Available: https://devblogs.microsoft
.com/scripting/powershell-looping-advanced-break/. [Accessed: Mar. 31, 2022].
²⁸Microsoft. (2022, Mar. 19). About Break (Microsoft.PowerShell.Core) - Using a labeled break in a loop. Microsoft
Docs. [Online]. Available: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_break#using-a-
labeled-break-in-a-loop. [Accessed: Mar. 31, 2022].
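The listing appears to be missing; a sketch of a labeled break that matches the output below is:

:parent for ($item = 0; $item -lt 3; $item++) {
    for ($i = 0; $i -lt 10; $i++) {
        if ($i -eq 3) {
            # Exit the labeled (parent) loop, not just the inner loop
            break parent
        }
        Write-Host "Nested Counter: $i"
    }
}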
Nested Counter: 0
Nested Counter: 1
Nested Counter: 2
In the following example, the Continue statement will be used to increment the top-level
counter:
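The listing appears to be missing; a sketch of a labeled continue is:

:parent for ($item = 0; $item -lt 3; $item++) {
    for ($i = 0; $i -lt 10; $i++) {
        if ($i -eq 3) {
            # Jump to the next iteration of the labeled (parent) loop
            continue parent
        }
        Write-Host "Top Level Counter: $item Nested Counter: $i"
    }
}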
Loop labels aren't supported by the ForEach-Object cmdlet or the $List.ForEach() method, since
these aren't loop statements. For example:
12 }
13
14 Write-Host "Top Level Counter: $item Nested Counter: $i"
15
16 }
17 }
• If the operator is an assignment operator (=, +=, -=, *=, /=, %=, ++, and --), a cast operator
([type]$val), or a negation operator (!, -not, and -bnot), the expression
is evaluated from right to left.
• Otherwise, the precedence order is applied from the list (see table below):
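The listing the walkthrough below refers to appears to be missing; it amounts to a single
assignment such as:

$var = 'Value' -eq 'Value'
$var   # True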
In this example, a variable is assigned the result of the expression 'Value' -eq 'Value'. The
process is as follows:
1. PowerShell reads the statement and identifies the operators: the assignment operator (=)
and the comparison operator (-eq).
2. The comparison operator (-eq) is evaluated in 'Value' -eq 'Value'. It has a higher
precedence than the assignment operator, so it's evaluated first, producing $true.
3. The assignment operator (=) comes last and assigns the output to the variable $var
(reading from right to left).
The following table describes the order of precedence; items in the same precedence group follow
the same precedence rules:
Example 66: Array index operators have a higher precedence than commas
1 1,2,'string'[0]
1
2
s
string
Name Value
---- -----
index 1
False
3
7. The comma operator has the next highest precedence, and the items are cast as an array.
8. The -not expression is evaluated:
Example 69: Left-to-right processing is used for operators with equal precedence
1 1 -eq 2 -eq 2 -ne 2
True
DateTime Processes
-------- ---------
1/1/2021 12:00:00 AM 605261729300122641423220776299883124035616
BinaryOperation AnotherValue
--------------- ------------
-32 4
• Once the expression is evaluated, the remaining operators (., -join and =) are evalu-
ated. The member access operator (.) has the next highest precedence over the -join
operator and the assignment operator (=). PowerShell will evaluate (expression).id
first. The output of (expression).id:
6052
6172
9300
12264
14232
20776
29988
31240
35616
• Two operators remain (-join and =). The -join operator has the higher precedence
and will be evaluated first. The output of (expression).id -join '':
605261729300122641423220776299883124035616
³⁰https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_switch
³¹https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_if
³²https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_assignment_operators
³³https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_operator_precedence
³⁴https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_operators
³⁵https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/engine/parser/tokenizer.cs
³⁶https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_wildcards#long-description
9. Logging
On the surface, the subject of logging seems simple. It could be as straightforward as “saving
output for later use or review.” It is, however, much more nuanced—and important—than many
realize.
This chapter covers the basics of logging in PowerShell. It lays the groundwork for establishing
some best practices that most developers and engineers should implement. The chapter aims to
combine the latest information available for a range of logging options and present it to you in
a single resource.
At the time of writing, Windows PowerShell is at version 5.1, and PowerShell is at version 7.2.
These version numbers aren’t referenced here. Rather, the edition names are used when there
are differences in functionality between editions.
There are several well-known logging options for Windows PowerShell.¹ However, with PowerShell now available as a cross-platform solution, there are more differences that need to be addressed. Methods commonly employed in the past may not be the best solution going forward. You may even find that approaches you’ve taken in the past with Windows PowerShell are no longer available in PowerShell.
There are now technical considerations for the different versions and platforms.²
There are two categories of logging: system-level and on-demand. Both provide a way to track the actions of your code. Both options are presented ahead, and you can use either, or both, based on your needs.
The chapter presents simple use cases as examples. Note that these are simplistic and intended
to show the ways you can use logging. These aren’t scripts that you would use in production.
¹Microsoft. (2022, Mar. 19). About Logging (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/about/about_logging. [Accessed: Aug. 03, 2022].
²Microsoft. (2022, May. 16). Differences between Windows PowerShell 5.1 and PowerShell 7.x. Microsoft Docs. [Online].
Available: https://learn.microsoft.com/en-us/powershell/scripting/whats-new/differences-from-windows-powershell?view=powershell-
7.2. [Accessed: Aug. 03, 2022].
Logging all code run on a system is a common security requirement. You may also
need to have JEA session logs available for review. You can learn more about JEA in the Just
Enough Administration chapter.
For high-impact changes, such as writing code to change users’ access in AD (Active Directory)
or Azure, your output should be comprehensive enough that a person or system could identify
and undo all the changes.
When working with live data that affects the ability of others to work, taking this extra effort
and time to create detailed output can reduce the impact if something goes wrong. You should
also test the output to make sure you could use it to revert a change in practice.
Never write sensitive data to your logs. This includes:
• Credentials
• Passwords
• Access tokens
• API keys
• Encryption keys
• Personally identifiable information
• Social Security or national identity numbers
• Credit card numbers
• Any other data protected by laws such as GDPR (General Data Protection Regulation)³
System-level logging may be configured in such a way that it will capture all on-screen input
and output. No matter what logging options you select, it’s important to keep secrets out of your
code, and know when user input may expose secrets within your logs.
In most cases, system-level logging prevents properly identified secrets from being captured.
However, if your code is written such that the system doesn’t recognize sensitive data, secrets
may be stored in plain text in your logs. Ensure you understand how to identify sensitive data
and prevent its accidental capture.⁴
The chapter discusses ways to protect sensitive data stored in logs later. However, the best solution is to follow best practices and prevent that data from being logged in the first place.
While outside the scope of logging, the use of the SecretManagement⁵ Module and Azure
KeyVault⁶ would help ensure this data is stored and protected properly. There’s an informative
Microsoft DevBlogs article⁷ for getting up to speed on this module.
³Proton AG. (2022, May. 26). What is GDPR, the EU’s new data protection law?. GDPR.EU. [Online]. Available: https://gdpr.eu/what-
is-gdpr/. [Accessed: Aug. 29, 2022].
⁴Microsoft. (2022, Mar. 18). About_Logging (Microsoft.PowerShell.Core) - Protected Event Logging. Microsoft Docs. [Online].
Available: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_logging?view=powershell-
5.1#protected-event-logging. [Accessed: Aug. 03, 2022].
⁵https://www.powershellgallery.com/packages/Microsoft.PowerShell.SecretManagement
⁶https://learn.microsoft.com/en-us/azure/key-vault/general/basic-concepts
⁷https://devblogs.microsoft.com/powershell/secretmanagement-and-secretstore-are-generally-available/
9.5.1 Windows
There are a few options to enable logging in Windows.¹¹ For enterprise systems, enablement should be handled by your IT or security department, which has historically deployed these settings via Group Policy or Intune. For a standalone system, you can use the Local Group Policy Editor¹²
(gpedit.msc) to access the same settings. Once the Local Group Policy Editor is open, find the
following configuration path:
Computer Configuration\
Administrative Templates\
Windows Components\
Windows PowerShell
• Turn on Module Logging: Enabled. Use * in the Module Names selection to apply logging
to all modules, or add each module you wish to log.
• Turn on PowerShell Script Block Logging: Enabled.
⁸https://devblogs.microsoft.com/powershell/powershell-the-blue-team/
⁹SecTor. (2019, Oct. 09). Powershell is Dead. Long Live C# - Lee Kagan. SecTor 2019. [Online]. Available: https://sector.ca/sessions/
powershell-is-dead-long-live-c/. [Accessed: Aug. 29, 2022].
¹⁰BSides Scotland. (2019, Apr. 23). Powershell Is DEAD – Epic Learnings! - Ben Turner and Doug McLeod - BSides Scotland 2019.
YouTube. [Online]. Available: https://www.youtube.com/watch?v=PPDUU3ObX88. [Accessed: Aug. 29, 2022].
¹¹Microsoft. (2022, Mar. 18). About Group Policy Settings (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_group_policy_settings. [Accessed: Aug. 29,
2022].
¹²https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn789185(v=ws.11)
– Check the box for Log script block invocation start/stop events.
• Turn on Script Execution: Enabled.
– Set Execution Policy to Allow local scripts and remote signed scripts. This corresponds to the RemoteSigned execution policy, the default on Windows Server 2016+ (Windows clients default to Restricted).
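On a standalone system, you can also enable these policies by setting the registry values they map to. The following is a minimal sketch, assuming the standard policy registry paths and an elevated session:

## Enable script block logging
$Sbl = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
if (-not (Test-Path $Sbl)) { New-Item -Path $Sbl -Force | Out-Null }
Set-ItemProperty -Path $Sbl -Name 'EnableScriptBlockLogging' -Value 1

## Enable module logging for all modules
$Ml = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging'
if (-not (Test-Path "$Ml\ModuleNames")) { New-Item -Path "$Ml\ModuleNames" -Force | Out-Null }
Set-ItemProperty -Path $Ml -Name 'EnableModuleLogging' -Value 1
New-ItemProperty -Path "$Ml\ModuleNames" -Name '*' -Value '*' -PropertyType String -Force | Out-Null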
Another setting only used by enterprises is found in the Local Group Policy Editor, at the
following configuration path:
Computer Configuration\
Administrative Templates\
Windows Components\
Event Logging\
Enable Protected Event Logging
This setting requires PKI (Public Key Infrastructure) certificates to encrypt and decrypt sensitive
data being written to the Windows Event Logs. You wouldn’t use this for standalone systems, as
the private key shouldn’t be available on the computer where the logs are created and encrypted.
Instead, the private key would live only in the location where the logs are being collected and
decrypted.
An event log consolidation solution like this is highly encouraged in enterprises, but is outside
the scope of this chapter.
Windows PowerShell and PowerShell each write these events to their own Windows event logs.
¹³Microsoft. (2022, Mar. 18). About Eventlogs (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/about/about_eventlogs?view=powershell-5.1. [Accessed: Aug. 29, 2022].
These are viewable in the Windows Event Viewer¹⁴ or can be queried from PowerShell or other
tools.
If the configuration file doesn’t exist, create it using your preferred text editor, and copy the
sample configuration. Be sure to change the paths to suit your needs.
The configuration options available within the JSON files are the same as those covered for
Windows above:
25 "EnableProtectedEventLogging": false,
26 "EncryptionCertificate": [""]
27 },
28 // Equivalent: Turn on PowerShell Transcription
29 "Transcription": {
30 "EnableTranscripting": true,
31 "EnableInvocationHeader": true,
32 "OutputDirectory": "\\tmp\\new"
33 },
34 // Other settings
35 "UpdatableHelp": {
36 "DefaultSourcePath": "\\temp"
37 },
38 "ConsoleSessionConfiguration": {
39 "EnableConsoleSessionConfiguration": false,
40 "ConsoleSessionConfigurationName": "name"
41 }
42 },
43 "LogLevel": "verbose"
44 }
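As a quick check that your edits are valid JSON and in effect, you can read the file back into objects. A minimal sketch, assuming the system-wide file in $PSHOME is the one being edited:

Get-Content -Path (Join-Path $PSHOME 'powershell.config.json') | ConvertFrom-Json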
As you’re writing code, a common troubleshooting method is to write a status message to the
console to see results using Write-Host or Write-Output. This is a quick-and-dirty option;
however, there are better ways to perform this simple task. As you probably know, PowerShell
has multiple output streams available.¹⁶ ¹⁷ The following write cmdlets are available and provide
better solutions than simply sending your output to the console during each run:
• Write-Verbose¹⁸: Shows messages only when running with the -Verbose parameter.
¹⁶Microsoft. (2022, Mar. 18). About Output Streams (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_output_streams. [Accessed: Aug. 27, 2022].
¹⁷Microsoft. (2022, Mar. 18). About Redirection (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_redirection. [Accessed: Aug. 29, 2022].
¹⁸https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/write-verbose
• Write-Debug¹⁹: Shows messages only when running with the -Debug parameter.
• Write-Information²⁰: Adds informational messages to your output.
• Write-Warning²¹: Adds a warning message to your output.
• Write-Error²²: Declares a non-terminating error and adds it to the error stream.
• Write-Progress²³: Displays a progress bar in the PowerShell console.
• Write-Host²⁴: Writes customized output directly to the host; the output bypasses the success stream (and no longer kills a puppy²⁵).
By default, these cmdlets only write to the console and provide no permanent storage of the
output.
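Although these cmdlets don’t persist anything on their own, the streams they write to can be redirected to files. A minimal sketch (the script and file names are illustrative):

## Capture the verbose (4) and warning (3) streams in log files,
## while normal output still flows down the pipeline
.\Invoke-Maintenance.ps1 -Verbose 4>> '.\verbose.log' 3>> '.\warnings.log'

## Or capture every stream in a single file
.\Invoke-Maintenance.ps1 *>> '.\all-streams.log'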
You can read more about transcription on the Microsoft Docs page for Start-Transcript²⁶.
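A minimal transcription sketch (the path is illustrative); everything displayed between the two calls is written to the transcript file:

Start-Transcript -Path "C:\Logs\Session-$(Get-Date -Format 'yyyyMMddTHHmm').txt"

Get-Service | Where-Object { $_.Status -eq 'Stopped' }

Stop-Transcript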
¹⁹https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/write-debug
²⁰https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/write-information
²¹https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/write-warning
²²https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/write-error
²³https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/write-progress
²⁴https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/write-host
²⁵The phrase “Every time you use Write-Host, you kill a puppy…” was coined by PowerShell MVP Don Jones. Around 2013, both
he and the father of PowerShell, Jeffrey Snover, advocated for limited use of this cmdlet as it negatively impacted automation. Many
people were using Write-Host to convey results or information to the user. Write-Host had some limited use cases, but was considered
the wrong tool in the toolbelt for many situations because it didn’t write to any of the available output streams. Starting with PowerShell
5.0, Write-Host is now just a wrapper for Write-Information. The etymology of this phrase could be linked back to a famous line from
the 1946 classic movie It’s a Wonderful Life when the character Zuzu Bailey, daughter of the protagonist, George Bailey, proclaims at the
end of the movie, “Look, Daddy! Teacher says ‘every time a bell rings, an angel gets his wings.’”
²⁶https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.host/start-transcript
If you want to add data to existing log files without overwriting them, don’t forget to use the -Append switch where noted below. The cmdlets available are Out-File²⁷, Export-Csv²⁸, Add-Content²⁹, and Set-Content³⁰.
When using Export-Csv, it’s best to create PSCustomObjects³¹ before attempting to write the
data.
You could use an approach similar to the following to collect the required data and keep a running
log in a single file:
²⁷https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/out-file
²⁸https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/export-csv
²⁹https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/add-content
³⁰https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/set-content
³¹https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_pscustomobject
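A minimal sketch of that approach (the path and properties are illustrative); it produces entries like the sample below:

$Entry = [PSCustomObject]@{
    DateTime = Get-Date -Format 'yyyyMMddTHHmm'
    Count    = (Get-Service | Where-Object { $_.Status -eq 'Stopped' }).Count
}

$Entry | Export-Csv -Path 'C:\temp\test.csv' -Append -NoTypeInformation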
"DateTime","Count"
"20210504T1933","8"
Depending on your use case, there are other options available. You can use Set-Content to
replace all the content of an existing file. You can use Add-Content to append content to a file,
without removing its existing content.
Both cmdlets create the file if it doesn’t exist. You can use Set-Content to empty an existing
file and then Add-Content to append additional data to it. This is useful in situations where old
log data can be safely cleared to reduce filesystem space utilization.
Your use case and personal preferences will determine which cmdlets you use.
In the example, a log file is overwritten if $ClearLog is $true. Otherwise, a new entry is
appended to the end of the file.
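A sketch of that pattern (the path, entry format, and $ClearLog value are illustrative):

$LogPath  = 'C:\temp\service.log'
$ClearLog = $false
$Entry    = '{0},{1}' -f (Get-Date -Format 'yyyyMMddTHHmm'), (Get-Service).Count

if ($ClearLog) {
    ## Replace the file contents (creates the file if it doesn't exist)
    Set-Content -Path $LogPath -Value $Entry
}
else {
    ## Append to the file (also creates it if it doesn't exist)
    Add-Content -Path $LogPath -Value $Entry
}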
Example 6: Using Tee-Object to write information to the console and a file simultaneously
1 $FormattedDate = Get-Date -Format 'yyyyMMddTHHmm'
2 $Count = (
3 Get-Service | Where-Object {
4 ($_.Status -eq 'Stopped') -and ($_.StartType -eq 'Automatic')
5 }
6 ).Count
7 "$FormattedDate,$Count" | Tee-Object -Path 'C:\temp\test.csv' -Append
This is useful when you want to monitor the work being done, but you also want to keep a
persistent log.
9.9 History
PowerShell has two native history providers that record the list of commands run, but not their
output.³³
The built-in history is only available within the current PowerShell session. It isn’t persistent and isn’t available from other open sessions.
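Output like the samples below comes from calls of this shape (the Id value is illustrative):

Get-History            ## Every command from the current session
Get-History -Id 5      ## A single entry, selected by its Id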
# Example 7a:
Id Duration CommandLine
-- -------- -----------
1 0.145 Start-Transcript…
2 0.368 $FormattedDate = Get-Date -Format 'yyyyMMddTHHmm'…
3 0.276 $FormattedDate = Get-Date -Format 'yyyyMMddTHHmm'…
4 0.283 $FormattedDate = Get-Date -Format 'yyyyMMddTHHmm'…
5 0.258 $FormattedDate = Get-Date -Format yyyyMMddTHHmm…
# Example 7b:
Id Duration CommandLine
-- -------- -----------
5 0.258 $FormattedDate = Get-Date -Format yyyyMMddTHHmm…
³³Microsoft. (2022, Mar. 18). About History (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/module/microsoft.powershell.core/about/about_history. [Accessed: Aug. 29, 2022].
³⁴https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/get-history
6 0.015 Get-History
# Example 7c:
Id Duration CommandLine
-- -------- -----------
1 0.145 Start-Transcript…
You can use standard PowerShell parsing to find specific commands that have run and use
Export-Csv to store the history if you desire.
For example, to find all commands in your history that include ‘UserName’, you could run:
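A minimal sketch of that kind of search:

Get-History | Where-Object { $_.CommandLine -like '*UserName*' }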
C:\Users\User\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadLine\
ConsoleHost_history.txt
/home/<USERNAME>/.local/share/powershell/PSReadLine/ConsoleHost_history.txt
C:\Users\<USERNAME>\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadLine\
ConsoleHost_history.txt
These files provide logging of the commands run, but not their output.
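Rather than memorizing these paths, you can ask PSReadLine where the file lives on the current system:

(Get-PSReadLineOption).HistorySavePath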
It’s possible to clean the history file. You would do this to remove typos or mistakes so the
autocompleted items in your history are correct, and don’t cause future confusion or problems.
If you accidentally enter secrets in plain text as part of a command, you’ll have to edit this file to
remove them from history. Many engineers and developers have accidentally typed passwords
into username fields or started typing a second command, not realizing there was already text
in the console. Mistakes like these can be fixed quickly and easily.
If you’re using VS Code (Visual Studio Code), this command opens the history file for review
and editing:
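A sketch of such a command, assuming code is available on your path:

code (Get-PSReadLineOption).HistorySavePath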
Once in the history, this will be the first highlighted option shown by Predictive IntelliSense
when you type cls. To fix issues like this, open the history file, use the find command to locate
the typo, delete the row containing the typo, and save the file. Once those steps are completed,
and you start a new session, the IntelliSense suggestion will no longer be presented.
Be sure to validate the event log retention settings if you’re going to rely on this solution.
Ahead is a simple example function to show how you can write data to Windows Event logs.
While you can create custom event logs, using the existing Application log and registering
a custom provider allows data to be handled by existing event collection engines without
additional modifications.
Creating your own message and category resource files for your event IDs and categories will
also simplify validation and alerting.³⁷
The function first checks if an event source with the name ‘MyPS5Log’ exists and, if not, creates
it. This step requires elevation and fails otherwise. It then writes an event to the Application
event log using the source name.
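A minimal sketch of such a function, using the Windows PowerShell New-EventLog and Write-EventLog cmdlets (the function name, event ID, and message are illustrative):

function Write-MyAppEvent {
    param (
        [String]$Message = 'Maintenance run completed.',
        [Int]$EventId = 1000
    )

    $Source = 'MyPS5Log'

    ## Register the event source against the Application log if it's missing.
    ## This step requires elevation and fails otherwise.
    if (-not [System.Diagnostics.EventLog]::SourceExists($Source)) {
        New-EventLog -LogName 'Application' -Source $Source
    }

    ## Write an informational event using the registered source
    $EventParams = @{
        LogName   = 'Application'
        Source    = $Source
        EventId   = $EventId
        EntryType = 'Information'
        Message   = $Message
    }
    Write-EventLog @EventParams
}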
Cloud Shell configuration is beyond the scope of this chapter, but you can read more in the
Persisting Shell Storage⁴⁰ article on Microsoft Docs.
You can use this storage in the same way as any local storage. For example, to list all Azure
Resource Group names in your subscription, you can run:
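A sketch of such a command:

Get-AzResourceGroup | Select-Object ResourceGroupName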
To create a list of all Azure AD users, and write the output to a file, you could run the following
in Cloud Shell:
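A sketch, with an illustrative output file name:

Get-AzADUser |
    Select-Object UserPrincipalName, DisplayName |
    Out-File ./AzureADUsers.txt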
To see the output, you can access it as you would any other file. To do so from the shell itself,
use Get-Content to read the file into the console:
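For example, using the illustrative file name from the previous sketch:

Get-Content ./AzureADUsers.txt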
1 # Example 13a:
2 ResourceGroupName
3 -----------------
4 MyTestApp
5
6 # Example 13b:
7 UserPrincipalName DisplayName
8 ----------------- -----------
9 [email protected] Jane Doe
You could also open it in the web version of VS Code if you need to change it as well:
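Again using the illustrative file name, that could look like:

code ./AzureADUsers.txt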
⁴⁰https://learn.microsoft.com/en-us/azure/cloud-shell/persisting-shell-storage
These examples show that you can specify a relative path in Cloud Shell. Without a path, file names are resolved relative to the current directory, as in other shells.
If you desire, you can also enable transcription logs for your Cloud Shell sessions. While you can
do this manually as you would in any local session, you can also create a profile that will start
transcription with the launch of any future session.
From within a Cloud Shell session, edit your profile by opening the current user all hosts profile
file with VS Code for the Web.
Example 15: Opening the PowerShell profile for modification in Cloud Shell
code $PROFILE.CurrentUserAllHosts
Add the following two lines to your profile to create a new transcription log file each time you
launch Cloud Shell.
Example 16: Code for the Cloud Shell profile to start transcription
$Now = Get-Date -Format 'yyyyMMddTHHmm'
Start-Transcript "./Transcripts/Transcript-$Now.txt"
If the path you enter doesn’t exist, Start-Transcript creates intermediate folders
automatically.
9.10 Summary
This has been a relatively brief chapter, but you should now have a better grasp of the logging
options available to you in PowerShell. Start with the questions on why you’re logging the
data—and who’ll consume it—to help you choose the options you need to meet your logging
requirements.
Understand that several options have changed from Windows PowerShell to PowerShell. Con-
versely, several options work in the same way regardless of the edition and platform on which
they’re run.
Finally, realize that there are exceptional third-party modules available that may have already
solved the problem in front of you.
⁴³https://learn.microsoft.com/en-us/powershell/scripting/windows-powershell/wmf/whats-new/script-logging
⁴⁴https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_eventlogs?view=powershell-5.1
⁴⁵https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_logging?view=powershell-5.1
⁴⁶https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_logging_windows
⁴⁷https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_logging_non-windows
⁴⁸https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_history
⁴⁹https://learn.microsoft.com/en-us/powershell/module/psreadline/about/about_psreadline
⁵⁰https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_powershell_config
⁵¹https://www.powershellgallery.com/packages/PSReadLine
⁵²https://www.powershellgallery.com/packages/PSFramework
⁵³https://www.powershellgallery.com/packages/Microsoft.PowerShell.SecretManagement
10. Infrastructure as Code (IaC)
This chapter covers the concepts, technical elements, and benefits of IaC. It also provides common
guidance for developing your IaC artifacts. It introduces the concept of Configuration as Code
(CaC) with PowerShell Desired State Configuration (DSC) and its use cases.
10.1 Overview
Historically, datacenters’ infrastructure was built by manual processes and the use of configura-
tion tools requiring human interaction.³ However, with the rise of virtualization and the advent
of cloud computing, scalability became an issue. Large infrastructure deployments were labor-
intensive, inefficient, and plagued with configuration drift and human error. For this reason, the
concept of Infrastructure as Code was born as a method to solve these problems and automate
the deployment of infrastructure resources.
Imagine the size of the Microsoft, Google, and Amazon cloud infrastructures and the challenges those companies would face if they had to provision and configure all of it manually. It simply wouldn’t scale. IaC is one of many concepts engineers envisioned to solve those problems and make large-scale deployments feasible.
It’s important to note that IaC isn’t only for cloud computing. You can apply the same concept
to virtual servers or physical hardware hosted on-premises.
• Imperative IaC: You make use of scripts—such as PowerShell and Bash scripts—to define
a series of steps to provision the infrastructure.
¹Microsoft. (2021, Jun. 29). What is Infrastructure as Code?. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/devops/deliver/what-is-infrastructure-as-code. [Accessed: Jul. 09, 2022].
²https://azure.microsoft.com/en-us/free/
³Rendón, D. (2022, Jan.). Why Infrastructure as Code?. In: Building Applications with Azure Resource Manager (ARM). Berkeley, CA:
Apress. ISBN: 978-1-4842-7747-8.
⁴Microsoft. (2021, Jun. 29). What is Infrastructure as Code?. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/devops/deliver/what-is-infrastructure-as-code. [Accessed: Jul. 09, 2022].
• Declarative IaC: You declare how the infrastructure should be and tools—such as Azure
Resource Manager (ARM), Amazon Web Services (AWS) CloudFormation, and Terraform—
take care of the outcome.
This chapter focuses on Imperative IaC as it relates to PowerShell and Declarative IaC as it
relates to PowerShell DSC.
It’s also worth defining the concepts of idempotency and immutability, as both are central to IaC.
Declarative IaC is the most common implementation of IaC. In the examples used in this chapter, the infrastructure would most likely be deployed using tools such as Azure ARM templates, Azure Bicep, or Terraform.
• Scalability: IaC introduces a high level of automation which allows you to provision entire
infrastructure stacks as code.
• Cost: IaC reduces the effort required to deploy, maintain and troubleshoot infrastructure
resources, thus decreasing the operational cost.
• Consistency: You can provision entire environments—such as non-production and
production—and be confident they’re the same because they’re deployed using the same
code.
• Speed: You can deploy infrastructure resources in minutes if not seconds as opposed to
doing it manually, which could take hours or even days.
• CI/CD: You can integrate your scripts and configuration files with your continuous
integration/continuous delivery pipelines to deploy entire applications.
• Source Control: All your configuration files can be stored in source control, which gives
you the ability to version your infrastructure, providing easy roll-back capabilities.
10.4.2 Modular
You shouldn’t define all your resources in a single script or configuration file because, in the long
term, they’ll be hard to maintain. Instead, break your code into modules. Modularizing your code
makes it easier to maintain, increases readability, and allows it to be independently deployed.
Once you have modules defined, you can build blueprints that are a combination of modules for
a specific deployment pattern. Because of the modular approach, each module can be updated
to a newer version without impacting the blueprint or deployments that use previous versions
of the module.
For example, say you create a new pattern that comprises a load-balanced two-tier web
application. You can then bring to life that pattern by creating a blueprint with the required
modules. You would create a blueprint and include the modules for the web application and the
load balancer. You can use those same modules in other patterns.
10.4.3 Versioning
The versioning and modular principles are aligned. Every time you update a module, you should
also update its version. When you version your modules, you can identify changes between
versions that may create unexpected issues in your infrastructure and roll back to an older version
if required. By versioning your modules, when you update them to a newer version, you won’t
impact other teams that may already have utilized the previous version. Versioning also allows
you to track what code changed and who made the changes.
10.4.4 Repeatable
The replacement of manual deployments with IaC and the abstraction of the infrastructure as
code make the process less prone to human error and, most importantly, repeatable. This principle
aligns with the concept of idempotency. This means you should be able to deploy a piece of
code multiple times and always get the same results. Because it’s repeatable, IaC gives you the
confidence that what you’ve deployed in one environment—such as development—will be the
same when deployed in production.
10.4.5 Disposable
This principle aligns with the concept of immutability. IaC enables you to create, replace, and
destroy resources. A classic example is the time wasted by systems admins troubleshooting issues
with servers. When using IaC, if you encounter issues with servers, you can simply destroy and
recreate them. Therefore, they’re considered disposable.
10.4.6 Self-Documented
Infrastructure documentation is often outdated, either because people forget to update it or because unauthorized changes were never documented. When you write your infrastructure as code, the code itself becomes a baseline set of documentation. You can refer to the code and understand what was deployed and how it was configured. In addition, when all your changes come from code modifications rather than manual infrastructure changes, the documentation is always kept up to date.
For brevity, the code for connecting to an Azure subscription isn’t displayed. Azure
Resource Manager modules are also required to execute the commands covered in
this example. It’s recommended you execute the commands in Azure Cloud Shell⁷,
as modules are pre-installed and connectivity to your Azure subscription is already
established.
Following the principle of modularization, you’ll build four modules and two scripts. One
is a DSC configuration script and the other imports the modules and executes the required
commands.
⁵Pester Team. (2022, Jun. 22). Quick Start - Pester. Pester Docs. [Online]. Available: https://pester.dev/docs/quick-start. [Accessed: Jul.
10, 2022].
⁶Microsoft. (2021, Jun. 29). What is Infrastructure as Code?. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/devops/deliver/what-is-infrastructure-as-code. [Accessed: Jul. 09, 2022].
⁷https://learn.microsoft.com/en-us/azure/cloud-shell/overview
Each module ahead contains a single function. To simplify the provided examples, all function
parameters in the modules are optional and include default values. Comments summarize what
each function is doing.
Some examples ahead are very long, and they may be difficult to read across pages. You
can find the scripts and modules⁸ from this chapter in the Extras repository⁹ for this
book.
10.5.1 Azure-SQL-Server.psm1
Example 1: The Azure-SQL-Server module contains a function that creates a new Azure SQL server instance and
database, and configures the firewall for access
1 function New-AzureSQLServer {
2 param (
3 [Parameter(Mandatory = $false)]
4 [String]$RGName = 'MyApp',
5
6 [Parameter(Mandatory = $false)]
7 [String]$ServerName = 'myuniquesqlserver956x',
8
9 [Parameter(Mandatory = $false)]
10 [String]$DbName = 'mydb',
11
12 [Parameter(Mandatory = $false)]
13 [String]$Location = 'AustraliaEast',
14
15 [Parameter(Mandatory = $false)]
16 [String]$StartIP = '0.0.0.0',
17
18 [Parameter(Mandatory = $false)]
19 [String]$EndIP = '0.0.0.0',
20
21 [Parameter(Mandatory = $false)]
22 [String]$User = 'sqladmin',
23
⁸https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/IaC/Scripts/
⁹https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/
24 [Parameter(Mandatory = $false)]
25 [String]$Password = 'MyC0mplexP@ssWord!'
26 )
27
28 ## Create Resource Group if it doesn't exist
29 if (-not (Get-AzResourceGroup -Name $RGName -ea:si)) {
30 $Rg = New-AzResourceGroup -Name $RGName -Location $Location
31 }
32
33 ## Create a username and password for the SQL server.
34 ## You wouldn't have credentials in your code.
35 ## This is for demonstration purposes only.
36 $Pw = ConvertTo-SecureString $Password -AsPlainText -Force
37 $Cred = New-Object PSCredential $User, $Pw
38
39 ## Create SQL Server
40 $SqlParams = @{
41 ResourceGroupName = $RGName
42 ServerName = $ServerName
43 Location = $Location
44 SqlAdministratorCredentials = $Cred
45 }
46
47 $Server = New-AzSqlServer @SqlParams
48
49 ## Create firewall rule allowing access from the specified IP range
50 $FwRuleParams = @{
51 ResourceGroupName = $RGName
52 ServerName = $ServerName
53 FirewallRuleName = 'AllowedIPs'
54 StartIpAddress = $StartIP
55 EndIpAddress = $EndIP
56 }
57
58 $ServerFirewallRule = New-AzSqlServerFirewallRule @FwRuleParams
59
60 ## Create an AdventureWorksLT sample database at the S0 performance level
61 $DbParams = @{
62 ResourceGroupName = $RGName
63 ServerName = $ServerName
64 DatabaseName = $DbName
65 RequestedServiceObjectiveName = 'S0'
66 SampleName = 'AdventureWorksLT'
67 }
68
69 $Database = New-AzSqlDatabase @DbParams
70
71 ## Return database connection string
72 $ConnectionString = @(
73 "Server=tcp:$ServerName.database.windows.net,1433;",
74 "Database=$DbName;",
75 "User ID=$User;",
76 "Password=$Password;",
77 "Trusted_Connection=False;",
78 "Encrypt=True;"
79 )
80
81 return $ConnectionString
82
83 }
10.5.2 Azure-Storage-Account.psm1
Example 2: The Azure-Storage-Account module contains a function that creates a new Azure storage account
1 function New-AzureStorageAccount {
2 param (
3 [Parameter(Mandatory = $false)]
4 [String]$RGName = 'MyApp',
5
6 [Parameter(Mandatory = $false)]
7 [String]$StorageAccountName = 'myuniquestorage593x',
8
9 [Parameter(Mandatory = $false)]
10 [String]$Location = 'AustraliaEast'
11 )
12
13 ## Create Resource Group if it doesn't exist
14 if (-not (Get-AzResourceGroup -Name $RGName -ea:si)) {
15 $Rg = New-AzResourceGroup -Name $RGName -Location $Location
16 }
17
18 ## Create Storage Account
19 $StorageAccountParams = @{
20 Name = $StorageAccountName
21 ResourceGroupName = $RGName
22 Location = $Location
23 SkuName = 'Standard_LRS'
24 Kind = 'StorageV2'
25 }
26
27 $StorageAccount = New-AzStorageAccount @StorageAccountParams
28
29 return $StorageAccount.StorageAccountName
30 }
10.5.3 Azure-Load-Balancer.psm1
Example 3: The Azure-Load-Balancer module contains a function that creates a new Azure load balancer and
configures the necessary resources
1 function New-AzureLoadBalancer {
2 param (
3 [Parameter(Mandatory = $false)]
4 [String]$RGName = 'MyApp',
5
6 [Parameter(Mandatory = $false)]
7 [String]$LbName = 'MyLb',
8
9 [Parameter(Mandatory = $false)]
10 [String]$Location = 'AustraliaEast'
11 )
12
13 ## Create Resource Group if it doesn't exist
14 if (-not (Get-AzResourceGroup -Name $RGName -ea:si)) {
15 $Rg = New-AzResourceGroup -Name $RGName -Location $Location
16 }
17
18 ## Create public ip and place in variable
19 $PublicIPParams = @{
20 Name = "$LbName-pip"
21 ResourceGroupName = $RGName
22 Location = $Location
23 Sku = 'Basic'
24 AllocationMethod = 'static'
25 }
26
27 $Pip = New-AzPublicIpAddress @PublicIPParams
28
29 ## Create load balancer frontend configuration and place in variable
30 $FeIPParams = @{
31 Name = 'fePool'
32 PublicIpAddress = $Pip
33 }
34
35 $FeIP = New-AzLoadBalancerFrontendIpConfig @FeIPParams
36
37 ## Create Backend address pool configuration and place in variable
38 $BePool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'bePool'
39
40 ## Create the health probe and place in variable
41 $ProbeParams = @{
42 Name = 'healthProbe'
43 Protocol = 'http'
44 Port = '80'
45 IntervalInSeconds = '360'
46 ProbeCount = '5'
47 RequestPath = '/'
48 }
49
50 $HealthProbe = New-AzLoadBalancerProbeConfig @ProbeParams
51
52 ## Create the load balancer rule and place in variable
53 $LbRuleParams = @{
54 Name = 'HTTPRule'
55 Protocol = 'tcp'
56 FrontendPort = '80'
57 BackendPort = '80'
58 IdleTimeoutInMinutes = '15'
59 FrontendIpConfiguration = $FeIP
60 BackendAddressPool = $BePool
61 }
62
63 $Rule = New-AzLoadBalancerRuleConfig @LbRuleParams
64
65 ## Create the load balancer resource
66 $LoadBalancerParams = @{
67 ResourceGroupName = $RGName
68 Name = $LbName
69 Location = $Location
70 Sku = 'Basic'
71 FrontendIpConfiguration = $FeIP
72 BackendAddressPool = $BePool
73 LoadBalancingRule = $Rule
74 Probe = $HealthProbe
75 }
76
77 $Lb = New-AzLoadBalancer @LoadBalancerParams
78
79 return $Pip.IpAddress
80
81 }
10.5.4 Azure-Virtual-Machine.psm1
Example 4: The Azure-Virtual-Machine module contains a function that creates and configures a new Azure
Windows Server VM
1 function New-AzureVirtualMachine {
2 param (
3 [Parameter(Mandatory = $false)]
4 [String]$RGName = 'MyApp',
5
6 [Parameter(Mandatory = $false)]
7 [String]$VmBaseName = 'MyVM',
8
9 [Parameter(Mandatory = $false)]
10 [int]$VMInstances = 2,
11
12 [Parameter(Mandatory = $false)]
13 [String]$LbName = 'MyLb',
14
15 [Parameter(Mandatory = $false)]
16 [String]$PoolName = 'BEPool',
17
18 [Parameter(Mandatory = $false)]
19 [String]$VnetName = 'VNet-01',
20
21 [Parameter(Mandatory = $false)]
22 [String]$VnetRGName = 'Connectivity',
23
24 [Parameter(Mandatory = $false)]
25 [String]$SubnetName = 'default',
26
27 [Parameter(Mandatory = $false)]
28 [String]$Location = 'AustraliaEast',
29
30 [Parameter(Mandatory = $false)]
31 [String]$User = 'iacadmin',
32
33 [Parameter(Mandatory = $false)]
34 [String]$Password = 'MyC0mplexP@ssWord!'
35 )
36
37 ## Create Resource Group if it doesn't exist
38 if (-not (Get-AzResourceGroup -Name $RGName -ea:si)) {
39 $Rg = New-AzResourceGroup -Name $RGName -Location $Location
40 }
41
42 ## Get vnet, subnet and Backend pool objects
43 $VnetParams = @{
44 Name = $VnetName
45 ResourceGroupName = $VnetRGName
46 }
47
48 $Vnet = Get-AzVirtualNetwork @VnetParams
49 $Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $Vnet
50
51 $BePoolParams = @{
52 ResourceGroupName = $RGName
53 LoadBalancerName = $LbName
54 Name = $PoolName
55 }
56
57 $BePool = Get-AzLoadBalancerBackendAddressPool @BePoolParams
58
59 ## Deploy Availability Set
60 $AvSetParams = @{
61 Location = $Location
62 Name = "$VMBaseName-avset"
63 ResourceGroupName = $RGName
64 Sku = 'Aligned'
65 PlatformFaultDomainCount = 2
66 PlatformUpdateDomainCount = 2
67 }
68
69 $AvSet = New-AzAvailabilitySet @AvSetParams
70
71 for ($i = 1; $i -lt $VMInstances + 1; $i++) {
72 $Id = '{0:d3}' -f $i
73
74 ## Create network interface
75 $NicParams = @{
76 ResourceGroupName = $RGName
77 Location = $Location
78 Name = "$VmBaseName$id-nic"
79 LoadBalancerBackendAddressPool = $BePool
80 Subnet = $Subnet
81 }
82
83 $Nic = New-AzNetworkInterface @NicParams
84
85 ## Create a username and password for the virtual machine.
86 ## You wouldn't have credentials in your code.
87 ## This is for demonstration purposes only.
88 $Pw = ConvertTo-SecureString $Password -AsPlainText -Force
89 $Cred = New-Object PSCredential $User, $Pw
90
91 ## Create a virtual machine configuration
92 $VmSize = 'Standard_DS1_v2'
93 $Pub = 'MicrosoftWindowsServer'
94 $Offer = 'WindowsServer'
95 $Sku = '2019-Datacenter'
96
97 $VmConfigParams = @{
98 VMName = "$VmBaseName$Id"
99 VMSize = $VmSize
100 AvailabilitySetId = $($AvSet.Id)
101 }
102
103 $VMOSParams = @{
104 Windows = $true
105 ComputerName = "$VmBaseName$Id"
106 Credential = $Cred
107 }
108
109 $VMSourceImageParams = @{
110 PublisherName = $Pub
111 Offer = $Offer
112 Skus = $Sku
113 Version = 'latest'
114 }
115
116 $VMNicParams = @{
117 Id = $Nic.Id
118 }
119
120 $VmConfig = New-AzVMConfig @VmConfigParams
121 $VmConfig = $VmConfig | Set-AzVMOperatingSystem @VMOSParams
122 $VmConfig = $VmConfig | Set-AzVMSourceImage @VMSourceImageParams
123 $VmConfig = $VmConfig | Add-AzVMNetworkInterface @VMNicParams
124
125 ## Create a virtual machine using the configuration
126 $VmParams = @{
127 ResourceGroupName = $RGName
128 Location = $Location
129 VM = $VmConfig
130 }
131
132 New-AzVM @VmParams
133 }
134 }
At this point, your resources should be deployed. However, you still don’t have your desired state
because no configuration was performed. You still need to configure the servers as web servers
and set the connection string to point to the SQL server. This is where Configuration as Code
complements IaC.
Because you aren’t deploying a full application with source code, you’ll just display the
connection string on the home page so you understand how file manipulation works in
DSC.
• Configurations: This is the outermost part of the script defined by the Configuration
keyword.¹¹ It can contain one or more Node blocks and one or more Resource blocks.
• Nodes: These are the targets of the configuration.¹² You can define multiple computer
names in a Node block.
¹⁰Microsoft. (2021, Dec. 15). PowerShell Desired State Configuration (DSC) Overview. Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/scripting/dsc/overview. [Accessed: Jul. 10, 2022].
¹¹Microsoft. (2022, Jun. 06). DSC Configurations. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/powershell/d-
sc/configurations/configurations. [Accessed: Jul. 10, 2022].
¹²Microsoft. (2021, Dec. 13). Apply, Get, and Test Configurations on a Node. Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/powershell/dsc/managing-nodes/apply-get-test. [Accessed: Jul. 10, 2022].
• Resources: This is where you define the properties of a Resource that sets the desired state
of your configuration.¹³
The Local Configuration Manager (LCM) is the DSC engine that ensures the state you’ve defined
in Configurations is achieved and maintained.¹⁴
PowerShell DSC scripts have the same .ps1 extension as PowerShell scripts. The code snippet below is an example of a Configuration that declares that a Resource of type WindowsFeature¹⁵ and name Web-Server should be present in the specified Node. It contains a second resource, of type Script,¹⁶ called EditStartPage, that modifies the IIS start page to display a custom message.
Example 5: Deploy-WebServer.ps1 is a DSC script that installs IIS and adds some custom data to the default start
page
1 Configuration Configure-IISServer
2 {
3 param (
4 [String[]]$ComputerName = 'localhost',
5 [String]$ConnectionString
6 )
7 Node $ComputerName
8 {
9 ## Install IIS
10 WindowsFeature AddIIS {
11 Ensure = 'Present'
12 Name = 'Web-Server' # Internal name for IIS
13 }
14 ## Manipulate home page to display custom strings
15 Script EditStartPage {
16 GetScript = { @{ Result = { "" } } }
17 SetScript = {
18 $GetContentParams = @{
19 Path = "$env:SystemDrive\inetpub\wwwroot\iisstart.htm"
20 }
21 $File = Get-Content @GetContentParams
22 $NewFile = $File | ForEach-Object {
23 $_
24 if ($_ -match 'IIS Windows Server') {
25 '<h1>IaC and CaC: Better Together!</h1>'
26 "<h2>Connection String is $using:ConnectionString</h2>"
27 }
28 }
29 $SetContentParams = @{
30 Value = $NewFile
31 Path = "$env:SystemDrive\inetpub\wwwroot\iisstart.htm"
32 }
33 Set-Content @SetContentParams
34 }
35 TestScript = {
36 $GetContentParams = @{
37 Path = "$env:SystemDrive\inetpub\wwwroot\iisstart.htm"
38 }
39 $Content = Get-Content @GetContentParams
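40             ## Sketch of the remaining logic: report $true when the custom
41             ## content is already present, so SetScript only runs once
42             return ($Content -match 'IaC and CaC')
43         }
44     }
45     }
46 }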
¹³Microsoft. (2021, Dec. 13). DSC Resources. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/powershell/dsc/re-
sources/resources. [Accessed: Jul. 10, 2022].
¹⁴Microsoft. (2021, Dec. 15). Configuring the Local Configuration Manager. Microsoft Docs. [Online]. Available: https://learn.microsoft
.com/en-us/powershell/dsc/managing-nodes/metaconfig. [Accessed: Jul. 10, 2022].
¹⁵Microsoft. (2021, Dec. 13). DSC WindowsFeature Resource. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/powershell/dsc/reference/resources/windows/windowsfeatureresource. [Accessed: Jul. 10, 2022].
¹⁶Microsoft. (2021, Dec. 13). DSC Script Resource. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/powershel-
l/dsc/reference/resources/windows/scriptresource. [Accessed: Jul. 10, 2022].
Use the $using: scope modifier¹⁷ to access variables in the DSC script file from within
a script resource block.
The code in the example behaves as a PowerShell function. You can test this configuration by
adding the configuration name Configure-IISServer to the end of the script and running it,
or by “dot-sourcing” it and calling it as a function from within PowerShell.
The following is an example of using the “dot-sourcing” method and calling the function with
no arguments.
Example 6: Load and run DSC configurations by dot-sourcing and running them as functions
1 . .\Deploy-WebServer.ps1
2 Configure-IISServer
Directory: C:\DSC\Configure-IISServer
The following is how you can pass multiple computer names as arguments.
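A sketch with illustrative names:

Configure-IISServer -ComputerName 'WebServer001', 'WebServer002'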
¹⁷Microsoft. (2022, Mar. 18). About Remote Variables. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/power-
shell/module/microsoft.powershell.core/about/about_remote_variables. [Accessed: Jul. 12, 2022].
Directory: C:\DSC\Configure-IISServer
Per the results in the example, two Managed Object Format (MOF) files are created inside a new
folder that has the same name as the Configuration block. These MOF files contain what you
defined in your Configuration block. The data in the MOF files is formatted so that it can be
applied using Windows Management Instrumentation (WMI). PowerShell generates one MOF
file per node.¹⁸
You can apply those configurations by running the following PowerShell cmdlet from the same
folder where you executed the previous script.
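Typically this is Start-DscConfiguration pointed at the folder of MOF files (a sketch):

Start-DscConfiguration -Path .\Configure-IISServer -Wait -Verbose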
From the results, you can see the configuration was applied successfully to two resources (AddIIS
and EditStartPage) and no reboot is required. The Type is Initial since you’re applying a new
configuration. Consistency checks made by the LCM show up as the Consistency type.
You can pass the -All switch parameter to display all configuration status history. The output
also contains a property named Mode that represents the LCM Refresh Mode, which can be
Disabled, Push, or Pull.²⁰
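A sketch of checking the status described above:

Get-DscConfigurationStatus          ## Most recent configuration run
Get-DscConfigurationStatus -All     ## Full configuration status history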
¹⁸Microsoft. (2021, Dec. 13). Apply, Get, and Test Configurations on a Node. Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/powershell/dsc/managing-nodes/apply-get-test. [Accessed: Jul. 10, 2022].
¹⁹Microsoft. (2022, Apr. 11). Get-DscConfigurationStatus. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/powershell/module/PSDesiredStateConfiguration/Get-DscConfigurationStatus. [Accessed: Jul. 12, 2022].
²⁰Microsoft. (2021, Dec. 13). Enacting configurations. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/power-
shell/dsc/pull-server/enactingconfigurations. [Accessed: Jul. 10, 2022].
To list the DSC configurations and associated resources on the current node, use Get-DscConfiguration.²¹
ConfigurationName ResourceId
----------------- ----------
Configure_IISServer [WindowsFeature]InstallIIS
Configure_IISServer [Script]EditStartPage
The LCM has many other settings that you can change. Refer to Configuring the Local Configu-
ration Manager²² for more information on these.
For more information about DSC in general, refer to The DSC Book²³ by Don Jones and Missy
Januszko.
²¹Microsoft. (2022, Apr. 11). Get-DscConfiguration. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/powershell/module/PSDesiredStateConfiguration/Get-DscConfiguration. [Accessed: Jul. 12, 2022].
²²https://learn.microsoft.com/en-us/powershell/scripting/dsc/managing-nodes/metaconfig
²³https://leanpub.com/the-dsc-book
Example 11: These two commands publish DSC configurations to Azure storage and apply them to VMs
1 $ResourceGroup = 'MyApp'
2 $VmName = 'MyVM001'
3 $StorageName = 'myuniquestorage593x'
4
5 ## Publish the configuration script to user storage
6 $DSCConfigurationParams = @{
7 ConfigurationPath = '.\Deploy-WebServer.ps1'
8 ResourceGroupName = $ResourceGroup
9 StorageAccountName = $StorageName
10 Force = $true
11 }
12
13 Publish-AzVMDscConfiguration @DSCConfigurationParams
14
15 ## Set the VM to run the DSC configuration
16 $DSCExtensionParams = @{
17 Version = '2.76'
18 ResourceGroupName = $ResourceGroup
19 VMName = $VmName
20 ArchiveStorageAccountName = $StorageName
21 ArchiveBlobName = 'Deploy-WebServer.ps1.zip'
22 AutoUpdate = $true
23 ConfigurationName = 'Configure-IISServer'
24 }
25
26 Set-AzVMDscExtension @DSCExtensionParams
The next logical step is to incorporate the code snippet into the Azure-Virtual-Machine module.
Add this to the bottom of the New-AzureVirtualMachine function of your Azure-Virtual-
Machine.psm1 module file:
Example 12: Add the Azure DSC configuration code to the Azure-Virtual-Machine module
1 for ($i = 1; $i -lt $VMInstances + 1; $i++) {
2
3 $Instance = '{0:d3}' -f $i
4
5 ## Publish the configuration script to user storage
6 $DSCConfigurationParams = @{
7 ConfigurationPath = "$PSScriptRoot\Deploy-WebServer.ps1"
8 ResourceGroupName = $RGName
9 StorageAccountName = $StorageAccountName
10 Force = $true
11 }
12
13 Publish-AzVMDscConfiguration @DSCConfigurationParams
14
15 ## Set the VM to run the DSC configuration
16 $DSCExtensionParams = @{
17 Version = '2.76'
18 ResourceGroupName = $RGName
19 VMName = "$VmBaseName$Instance"
20 ArchiveStorageAccountName = $StorageAccountName
21 ArchiveBlobName = 'Deploy-WebServer.ps1.zip'
22 AutoUpdate = $true
23 ConfigurationName = 'Configure-IISServer'
24 ConfigurationArgument = @{
25 ConnectionString = $ConnectionString
26 }
27 }
28
29 Set-AzVMDscExtension @DSCExtensionParams
30
31 }
Example 13: Add the necessary additional parameters to the New-AzureVirtualMachine function
1 [Parameter(Mandatory = $false)]
2 [String]$StorageAccountName,
3
4 [Parameter(Mandatory = $false)]
5 [String]$ConnectionString
You can find the completed Azure-Virtual-Machine.psm1 module file in the IaC Scripts²⁴
folder of the Extras²⁵ repository.
This script uses the default parameter values for the four module functions New-Azure*.
Example 14: The Two-Tier-App-Blueprint.ps1 script ties together all the code to deploy the Azure resources
and apply the DSC configuration
1 ## Import all modules in current directory
2 foreach ($Module in Get-ChildItem $PSScriptRoot -Filter '*.psm1') {
3 Import-Module -Name $Module.FullName
4 }
5
6 ## Create SQL Server and store connection string
7 $ConnectionString = New-AzureSQLServer
8 $ConnectionString = -join $ConnectionString
9
10 ## Create Storage Account
11 $StorageAccountName = New-AzureStorageAccount
12
13 ## Create Load Balancer and store public ip address
14 $Pip = New-AzureLoadBalancer
²⁴https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/IaC/Scripts/
²⁵https://github.com/devops-collective-inc/Modern-IT-Automation-with-PowerShellExtras/tree/main/Edition-01/
15
16 ## Create Virtual Machine(s)
17 $VMParams = @{
18 ConnectionString = $ConnectionString
19 StorageAccountName = $StorageAccountName
20 }
21 New-AzureVirtualMachine @VMParams
22
23 ## Load webpage on default browser
24 Start-Process "http://$Pip"
The blueprint script deploys all resources. At the end, it opens your default browser and displays
a customized IIS start page. This customized page is based on the string manipulation you did
with PowerShell DSC in Deploy-WebServer.ps1.
10.8 Conclusion
Infrastructure as Code is at the core of DevOps practices, and it solves real problems. It helps you
deploy consistently across different environments at scale while avoiding configuration drift.
In this chapter, you learned about the key concepts and principles of Infrastructure as Code.
You also learned how to use IaC and CaC together to deploy and configure your infrastructure
resources.
All the examples of IaC are based on PowerShell scripts. You’re encouraged to create a free
trial account in Microsoft Azure and experiment with the scripts provided in previous sections.
Replicate what was explained in this chapter and get comfortable with the basics. Once you
understand the basics, start making changes of your own and re-deploy. There’s nothing like
trial and error to get a deeper understanding of a subject.
²⁹https://learn.microsoft.com/en-us/powershell/dsc/overview?view=dsc-2.0
³⁰https://learn.microsoft.com/en-us/powershell/scripting/dsc/managing-nodes/metaconfig
³¹https://learn.microsoft.com/en-us/powershell/dsc/getting-started/wingettingstarted
³²https://learn.microsoft.com/en-us/powershell/dsc/getting-started/lnxgettingstarted
³³https://learn.microsoft.com/en-us/powershell/dsc/managing-nodes/apply-get-test
³⁴https://learn.microsoft.com/en-us/powershell/dsc/pull-server/enactingconfigurations
³⁵https://leanpub.com/the-dsc-book
³⁶https://azure.microsoft.com/en-us/free/
³⁷https://learn.microsoft.com/en-us/azure/cloud-shell/overview
³⁸https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview
IV Using Regexes
Administrators must often work with external data, so pattern matching is a valuable tool.
Regex patterns play an important role in transforming and bringing data into the PowerShell
environment. This role becomes even more significant now that PowerShell is cross-platform.
This section takes you right from the beginning, through to advanced uses for regexes, including:
• Regex 101 recaps the fundamental syntax and structure of regex patterns and explains how to use these in the PowerShell environment.
• Accessing Regexes introduces more complex constructs, solving relevant examples to explain these and demonstrate the power of regexes.
• Regex Deep Dive discusses deconstructing and debugging your patterns, looking from the perspective of the regex engine, and getting familiar with the machinery within. It also covers the remaining syntax to complete your PowerShell regex toolkit.
• Finally, Regex Best Practices rounds off with some best practices, design strategies, and where to go for more on regexes in PowerShell and in general.
There is some debate about whether modern regexes can still be considered regular expressions.
The regex chapters use the terms regex and regexes to avoid confusion.
11. Regex 101
Regexes are, in essence, instructions that tell a regex engine how to read some text for you. They define patterns to match in a text string and can capture substrings. You can use these captures
¹Though regexes is the correct plural, the uncountable plural form regex is also common when referring to the topic as a whole, as
is regex to describe the underlying theory.
²S. C. Kleene. (1956). Representation of Events in Nerve Nets and Finite Automata. In: Automata Studies, (AM-34), pp. 3–42. C. E.
Shannon and J. McCarthy (eds.). Princeton University Press. DOI: 10.1515/9781400882618-002.
³K. Thompson. (1968). Programming Techniques: Regular expression search algorithm. Commun. ACM, vol. 11, no. 6, pp. 419–422.
DOI: 10.1145/363347.363387.
as backreferences later in the pattern, and they’re also returned by the engine back into the
programming environment. Captures are also used to substitute a replacement into the input
string.
Regexes have limitations, of course. The idea of Garbage In, Garbage Out (GIGO) is relevant
here. The regex engine interprets the provided pattern sequentially, reading through the input
text and backtracking as necessary until no more matches are possible. It can’t observe the input
text as a whole or make decisions from a big picture perspective, as humans can. It’s therefore
prudent to make use of the programming environment to supplement your regex patterns. You
can find out more about this later.
PowerShell regex uses the .NET regex engine, so the examples in this book also apply in other .NET languages (C#, VB.NET, F#) and frameworks such as ASP.NET.
Supported:
⁴Microsoft. (2020, Jun. 30). .NET regular expressions. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/dotnet/-
standard/base-types/regular-expressions. [Accessed: Jun. 12, 2021].
⁵J. E. F. Friedl. (2006). Mastering Regular Expressions. 3rd ed. Beijing [u.a.]: O’Reilly. ISBN: 978-0-596-52812-6.
This list is primarily for reference, and you can learn more about the topics described later.
This chapter covers the basics of regexes. If you already have an intermediate under-
standing of regexes, you can consider skipping ahead to the Accessing Regexes chapter.
However, there are lots of useful pointers, so reading through will help set you up for
later.
To introduce the fundamentals of regexes, consider this example. The days of the week in the
English language all end in ‘day’, with three to six preceding letters. You could simply match
each day individually.
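The listing for this first example isn't reproduced here, so the following is a minimal sketch; the input string is an assumption chosen to produce the output shown below.

$MyString = 'It rained on Friday.'

$MyString -match 'Monday'
$MyString -match 'Tuesday'
$MyString -match 'Wednesday'
$MyString -match 'Thursday'
$MyString -match 'Friday'
$MyString -match 'Saturday'
$MyString -match 'Sunday'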
False
False
False
False
True
False
False
This works because the regex engine interprets the letters of Friday and Saturday literally. To
avoid using the -match operator many times, you can use alternation. Think of this as a logical
OR. Regexes use the pipe | character for alternation.
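A minimal sketch of alternation with the -match operator; the input string is an assumption.

$MyPattern = 'Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday'
'It rained on Friday, but Monday will be clear.' -match $MyPattern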
True
True
False
True
Notice the backslash \ used with the w here. Backslashes are metacharacters in regexes—they
have a special meaning. They denote special tokens, including character classes. They can also
denote escape sequences.
There are also three inverse character classes matching any character that isn’t in that class.
Each uses the same letter but capitalized: \W \D \S.
False
True
True
The first pattern is looking for one not-space and one space, but the string contains no spaces.
The second pattern is looking for two not-spaces, which match ‘ab’. The third pattern is looking
for two word characters, a space, one not-decimal, and one decimal. This matches ‘ab c1’, as ‘c’
isn’t a decimal.
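A sketch of these character classes and their inverses; the input strings are assumptions chosen to reproduce the results explained above.

'abc1'  -match '\S\s'        # False: one not-space then one space, but no spaces exist
'abc1'  -match '\S\S'        # True: two not-spaces match 'ab'
'ab c1' -match '\w\w\s\D\d'  # True: matches 'ab c1', as 'c' isn't a decimal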
There are also escape sequences that match single characters, such as a newline. You can
find out about these later in the chapter.
False
True
True
Note the two single quotation marks '', which insert a single mark ' into a literal string. Observe
also that a period . inside a custom character class matches only a period, not any character.
Custom character classes are also capable of inversion. By inserting a caret ^ after the opening
square bracket [, the class matches any character not in it.
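A sketch of custom character classes and inversion, using assumed input strings:

# Lowercase letters, an apostrophe (two single quotation marks), a period, and a space
"it's ok." -match '^[a-z''. ]+$'   # True
'3.14'     -match '^[a-z''. ]+$'   # False: digits aren't in the class

# Inverted class: any character that isn't a lowercase letter or a space
'hello world'  -match '[^a-z ]'    # False
'hello world!' -match '[^a-z ]'    # True: the exclamation mark matches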
The following example uses the $Matches automatic variable. The next chapter, Access-
ing Regexes, discusses this further. For now, know that PowerShell sets $Matches each
time you call -match and it succeeds. The entry $Matches[0] reveals what the entire
pattern matched in the input string.
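The listing for Example 6 isn't shown above; this sketch, with an assumed input sentence, produces the output below.

'Find the "quoted" word.' -match '"[^"]+"'
$Matches[0]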
True
"quoted"
Example 6 matches one or more characters that aren't quotation marks, surrounded by quotation marks. The plus + is a quantifier that means: match one or more instances of the preceding token.
⁶https://learn.microsoft.com/en-us/dotnet/standard/base-types/character-classes-in-regular-expressions
11.5 Quantifiers
Repeating literal characters or character classes for large match spans would take up a lot of space.
Regexes make use of quantifiers to control the number of allowed repetitions of the preceding
token in the pattern. You can rewrite the pattern from earlier, \w\w\w\wday, as \w{4}day. What
about a range of repeats?
False
True
This results in the general formula {min,max}. Don’t use a space character on either side of the
comma. You can omit max, leaving the comma (,) in place, to match min or more.
False
True
True
You can use the zero or one quantifier to make tokens optional.
True
True
True
absolutely
True
ably
True
absolutely lovely
The first match understandably succeeds, with .* matching ‘solute’. The second proves that
* can match zero times. What’s going on with number three, then? Wouldn’t it just match
‘absolutely’?
Going back to the idea of sequential processing, the engine performs the following steps:
1. The literal tokens at the start of the pattern match the beginning of 'absolutely'.
2. The greedy .* then consumes every remaining character, right up to the end of the input string.
3. The remaining tokens in the pattern can't match at the end of the string, so the engine backtracks, with .* giving back one character at a time.
4. The first position where the rest of the pattern can match again is the final 'ly' of 'lovely', so the match stands, and the whole match is 'absolutely lovely'.
Matching as much as possible, then giving back as needed, is greedy matching. This is the
default in PowerShell regexes. The opposite of greedy is lazy matching, where the quantifier
matches as little as possible and takes more if needed. You can make any quantifier lazy by
adding a question mark ? after it:
With this in mind, the lazy .*? fixes the last match in Example 10.
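The pattern and input in this sketch are assumptions based on the explanation that follows, and they reproduce the output below.

'absolutely lovely' -match 'ab.*?ly'
$Matches[0]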
True
absolutely
The engine only matches ‘absolutely’ this time. Initially, .*? only matches the first ‘s’ but takes
more characters until the next token l matches the first ‘l’. This matches, but the following ‘u’
doesn’t. Therefore, the engine backtracks in the pattern, with .*? taking more characters until
the next token l matches the second ‘l’. The final token y then matches the last ‘y’ and processing
stops with a successful match.
The Regex Deep Dive chapter discusses backtracking and branching in more detail.
Quantifiers Reference
You can view a complete reference for quantifiers at Microsoft Docs⁷.
The following example uses a literal here-string. Use these to create multiline strings in
PowerShell. Expandable here-strings @" and "@ also exist.
⁷https://learn.microsoft.com/en-us/dotnet/standard/base-types/quantifiers-in-regular-expressions
False
True
Example 12 matches line endings for both Windows and Unix-like systems. This is useful when
working in PowerShell 6.0 and later, which is cross-platform. The first part of the pattern, \r?,
matches an optional single carriage return (0x0D). The second part, \n, matches a line feed
(0x0A).
You can escape character classes too. Escaping the backslash \ causes the engine to interpret it
literally. The sequence \\n matches a backslash followed by an ‘n’, not a line feed.
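A brief sketch of these escapes, with assumed inputs:

"Line one`r`nLine two" -match '\r?\n'   # True: matches the CRLF line ending
'C:\notes\new.txt'     -match '\\n'     # True: a literal backslash followed by 'n'
'C:\notes\new.txt'     -match '\n'      # False: the string contains no line feed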
Another kind of zero-width assertion is the lookaround. This feature matches one or more regex
tokens in a subexpression while still being zero-width. The Regex Deep Dive chapter discusses
subexpressions further.
Anchors Reference
You can view a complete reference for regex anchors at Microsoft Docs⁹
11.8 Captures
The final topic that this chapter covers is the capturing subexpression, or capturing group. These enable you to extract substrings from the input string, and they use brackets (parentheses) ( and ). You can wrap any part of a regex pattern in them to make it a capturing subexpression. The engine assigns each group a number starting from 1 and, if it makes a match, returns the captured substring into the programming environment. The next example extracts the original word from an infixed one.
The terms capturing group and capturing subexpression are often used interchangeably.
Technically, a subexpression is any part of a regex pattern delimited by grouping
constructs. Capturing subexpressions create capturing groups, which capture matches
made by the subexpressions within. To avoid confusion, this chapter uses the term group
for both the construct and the resulting group.
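The listing behind the output below isn't reproduced; this sketch is assembled from the bullet-point breakdown that follows.

'fan-flaming-tastic' -match '\b(\w+)-\w{6,}-(\w+)\b'
$Matches[0]
$Matches[1] + $Matches[2]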
True
fan-flaming-tastic
fantastic
This pattern may seem a bit complicated but, when broken down, it’s straightforward.
• The word boundary \b anchors at each end force the engine to match no more or less than
the infixed word.
• The first (\w+) matches and captures the part of the word before the first hyphen.
• The -\w{6,}- matches an infixation with at least six letters. This eliminates most other
multi-word phrases like ‘up-to-date’, but not phrases with longer middle words, like ‘well-
thought-out’.
⁹https://learn.microsoft.com/en-us/dotnet/standard/base-types/anchors-in-regular-expressions
• The last (\w+) matches and captures the part of the word after the last hyphen.
Try the examples in this chapter in PowerShell and observe how changing the input or
pattern changes the output.
Experimentation is a great way to familiarize yourself with regexes.
The $Matches automatic variable stores these numbered captures as entries in a hashtable. Recall
that the zeroth entry $Matches[0] is the capture of the entire pattern. It’s also possible to group
regex tokens without capturing them. This is a non-capturing group and takes the form (?:...)
where ... is any regex subexpression.
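A small sketch of a non-capturing group, using an assumed input; it also checks for success before reading $Matches, as the note below advises.

if ('user-42' -match '(?:user|admin)-(\d+)') {
    $Matches[1]   # 42: the (?:...) group isn't captured, so (\d+) is group 1
}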
The $Matches automatic variable isn’t nullified before each regex operation. This means
that if a match fails, $Matches will still contain the result of the last successful match.¹⁰
You should therefore check for match success before reading this variable.
¹⁰Microsoft. (2021, Oct. 27). About Automatic Variables (Microsoft.PowerShell.Core) - Matches. Microsoft Docs. [Online]. Avail-
able: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_automatic_variables#matches. [Ac-
cessed: Nov. 15, 2021].
Match 0:
Group 0 (Name = 0): <-- Entire match 'abcdef'
Capture 0 (pos 0) = abcdef <-- $0 or $& in replacement pattern
Group 1 (Name = first): <-- From subexpression (?<first>\w)
Capture 0 (pos 0) = a
Capture 1 (pos 2) = c
Capture 2 (pos 4) = e <-- $1 or ${first} in replacement pattern
Group 2 (Name = second): <-- From subexpression (?<second>\w)
Capture 0 (pos 1) = b
Capture 1 (pos 3) = d
Capture 2 (pos 5) = f <-- $2 or ${second} in replacement pattern
Match 1:
Group 0 (Name = 0): <-- Entire match 'hijklm'
Capture 0 (pos 8) = hijklm <-- $0 or $& in replacement pattern
Group 1 (Name = first): <-- (From subexpression (?<first>\w)
Capture 0 (pos 8) = h
Capture 1 (pos 10) = j
Capture 2 (pos 12) = l <-- $1 or ${first} in replacement pattern
Group 2 (Name = second): <-- From subexpression (?<second>\w)
Capture 0 (pos 9) = i
Capture 1 (pos 11) = k
Capture 2 (pos 13) = m <-- $2 or ${second} in replacement pattern
¹¹https://learn.microsoft.com/en-us/dotnet/standard/base-types/grouping-constructs-in-regular-expressions
12. Accessing Regexes
The Regex 101 chapter introduced the -match operator. There are, however, lots of ways to use
regexes in PowerShell. This chapter introduces you to them.
yellow
green
In Example 2, only colors that have two repeated letters match, so the output array contains only
‘yellow’ and ‘green’.
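Example 2 isn't reproduced above; a sketch with an assumed list of colors yields the same output.

$Colors = 'red', 'yellow', 'green', 'blue'

# Against an array, -match returns the elements that match the pattern
$Colors -match '(.)\1'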
¹Microsoft. (2017, Jan. 16). System.Management.Automation - tokenizer.cs. L657-L692. PowerShell/PowerShell on GitHub. [Online].
Available: https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/engine/parser/tokenizer.cs. [Ac-
cessed: Jul. 10, 2021].
²Microsoft. (2016, Mar. 31). System.Management.Automation - InternalCommands.cs. L1699. PowerShell/PowerShell on GitHub.
[Online]. Available: https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/engine/Internal-
Commands.cs. [Accessed: Jul. 04, 2021].
The -replace operator also accepts a script block instead of a replacement pattern. This provides
for more advanced string manipulation.
Example 4: Replacing infixed capital letters with -replace and a script block
1 $MyString = 'This sentEnce has infixed cApital leTTErs.'
2
3 # Any matches are converted to lowercase
4 $Evaluator = {
5 Write-Host "Run on '$_'"
6 ([string]$_.Value).ToLower()
7 }
8
9 $MyString -creplace '(?!\A)\b[a-z]*[A-Z][A-Za-z]*\b', $Evaluator
Run on 'sentEnce'
Run on 'cApital'
Run on 'leTTErs'
This sentence has infixed capital letters.
Note that the engine populates the $_ automatic variable with the [Match] object for the whole
match, regardless of any capturing groups. You can learn more about [Match] objects in the
Using the .NET Methods section, later in this chapter.
The subexpression (?!\A) is a negative lookahead, a kind of lookaround. You can find out more
about lookarounds in the Regex Deep Dive chapter.
today
today
today
yesterday
Red
Green
Blue
To split using a regex pattern for the delimiter, place the operator after the string and follow it
with the pattern. The engine removes all parts of the string that match the delimiter pattern and
returns the intervening chunks as an array of strings. Use quantifiers if the delimiter may appear
more than once, as the zero-width gaps between them result in empty strings in the output array.
³https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_comparison_operators
Red
Green
Blue
Cyan
Magenta
Yellow
Another interesting feature of using regexes with -split is the ability to capture all or part of
the delimiter, and include it in the substring output. Use capturing groups as you would in any
other regex pattern. The engine returns any captures as substrings, and there is no limit to the
number of captures. -split supports numbered and named captures, too.
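A sketch of capturing the delimiter with -split; the input string is an assumption that reproduces the Example 8 output below.

'Red,Green:Blue.Cyan' -split '([.,])|\p{P}'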
Red
,
Green
Blue
.
Cyan
Example 8 uses alternation to capture the delimiter only if it’s a period ‘.’ or comma ‘,’. The
character class \p{P} matches all punctuation characters and is a Unicode category. You can
find out more about Unicode categories in the Regex Deep Dive chapter.
The example also reveals a little about how the regex engine processes matches. The \p{P}
category also includes periods and commas, but the first alternative [.,] matches first. The
engine, therefore, doesn’t need to try these delimiters against the second alternative and, since
the first alternative includes a capture, the engine returns them. With this in mind, it’s now clear
why the engine doesn’t return the colon ‘:’ between ‘Green’ and ‘Blue’. It doesn’t make a match,
so must try the second alternative. This does match but doesn’t include a capture.
You can also limit the number of substrings returned by the -split operator. To achieve this,
follow the delimiter pattern with a comma and an integer. This isn’t an absolute limit on the
substrings returned, however. If passing an array of strings, the limit applies to each string
individually. If capturing all or part of the delimiter, these captures don’t count towards the
limit.
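A sketch of the max-substrings parameter, with and without a captured delimiter; the input string is an assumption that reproduces the output below.

'Red,Green.Blue/Cyan:Magenta' -split '\W', 3
'Red,Green.Blue/Cyan:Magenta' -split '([.,])', 3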
Red
Green
Blue/Cyan:Magenta
Red
,
Green
.
Blue/Cyan:Magenta
Red
Green
Blue
Cyan
Magenta
Example 9 demonstrates how passing an array of strings or capturing the delimiter affects the
substring limit. The max-substrings parameter only guarantees the number of substrings per
string that aren’t delimiters.
A new feature, available in PowerShell 7 and later, means -split accepts a negative integer for
the max-substrings parameter. Negative values invert the substring limit, and the engine applies
it in a last-to-first order.
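A sketch of a negative max-substrings value, assuming a simple comma-separated input:

# The limit is applied from the end of the string, in last-to-first order
'Red,Green,Blue,Cyan' -split ',', -3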
Red,Green
Blue
Cyan
Besides supporting inline regex options (?msnix), the -split operator is special because it also accepts regex options as a parameter. You must pass these flags by their names, as a comma-separated list within a string.
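A sketch of passing regex options to -split by name; the input and delimiter are assumptions that reproduce the output below, and 0 for max-substrings returns all substrings.

'Red,Green,Blue' -split '(,)', 0, 'ExplicitCapture, IgnoreCase'
'Red,Green,Blue' -split '(,)', 0, 'IgnoreCase'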
Red
Green
Blue
Red
,
Green
,
Blue
In Example 11, the engine ignores the unnamed capturing group because the 'ExplicitCapture' regex option causes only named groups to capture their contents. The first -split command, therefore, returns no commas between substrings. An important point to note is that -split is case-insensitive, so 'IgnoreCase' makes no difference. If using the -csplit operator, however, 'IgnoreCase', or indeed the inline option (?i), would change the output. To view the options by their names, use the [Enum]::GetNames() method:
[Enum]::GetNames([System.Management.Automation.SplitOptions])
The -split operator supports plain text matching, too, using the ‘SimpleMatch’ parameter.
This goes in the same place as the regex options would. The only other option available with
‘SimpleMatch’ mode is ‘IgnoreCase’, which enables case-insensitive matching with -csplit,
and is inconsequential with -split.
Red
Green
Blue/Cyan
Red
Green
Blue
Cyan
Split Reference
You can view a complete reference for the -split operator at Microsoft Docs⁴.
The Select-String cmdlet performs a similar role to pattern-matching tools, such as grep, found in other environments.⁵ ⁶ In its default mode, the cmdlet accepts regex patterns and returns [MatchInfo]
objects. Since it accepts pipeline input, you can use Select-String to filter large data, such as
logs. It also accepts one or more file paths as input, searching each file individually. In this mode,
the cmdlet prepends the name of the file where it found each match.
Example 13: Displaying only error and warning lines in a log file
1 $SampleLog = 'Sample.log'
2
3 @'
4 [2020-07-16T19:35:46] [DEBUG] Connections waiting on [1508]:443
5 [2020-07-16T19:42:24] [ERROR] Disk space critical on /dev/sdb2
6 [2020-07-16T20:20:52] [WARN ] Service [2412] stopped <1min post-run
7 [2020-07-16T20:21:09] [INFO ] Service [2896] stopped
8 [2020-07-16T20:25:26] [DEBUG] Service [2896] ready
9 [2021-03-22T21:20:06] [ERROR] Service [2952] stopped unexpectedly
10 [2021-03-22T21:20:23] [INFO ] Closed [1663]:443
11 '@ | Out-File $SampleLog
12
13 Get-Content $SampleLog | Select-String '\[(?:ERROR|WARN )\]'
Example 13 Output
Several features that change the matching mode for Select-String exist. Use the -NotMatch
parameter to invert pattern matches, as with the -notmatch operator. Use the -CaseSensitive
parameter to enable case-sensitive matching. It’s also possible to use any of the inline options
discussed in the Regex Deep Dive chapter.
When working with files via the -Path or -LiteralPath parameters, use the -List parameter
to show only the first match in each file. This is an efficient way to retrieve a list of files matching
your pattern at least once.⁷ By default, Select-String only matches the first occurrence on each
line. You can change this behavior with the -AllMatches parameter, and this works both with
files and pipeline input.
The -Raw parameter used in the following example causes plain string output of the matches,
instead of [MatchInfo] objects. To disable emphasis on the matches, but keep rich [MatchInfo]
output, use the -NoEmphasis parameter.
Example 14: Extracting errors from log files with a specific range of dates
1 $FilteredLog = 'SampleLog-Errors-2021-Q1.log'
2 $Pattern = '^\[2021-0[1-3].+? \[ERROR\]'
3 Select-String $Pattern -Path $SampleLog -Raw | Set-Content $FilteredLog
4
5 Get-Content $FilteredLog
Another useful feature is the -Context parameter. This displays the lines immediately before
and after the matching line.
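The Example 15 listing isn't reproduced above; this is a sketch of the kind of command it likely runs against the sample log from Example 13.

Select-String -Path $SampleLog -Pattern 'stopped unexpectedly' -Context 1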
Example 15 Output
In Example 15, the -Context parameter shows the log entries immediately before and after the
matched line.
As with many cmdlets that accept file input in PowerShell, Select-String also has -Include,
-Exclude, and -Encoding parameters. These let you filter the files searched and declare the
encoding used to read them. Select-String supports plain text matching, too, using the -
SimpleMatch parameter.
The Select-String cmdlet returns one [MatchInfo] object for each line in the input where it
made a match. The properties of this object contain lots of information. The Line property is a
plain string version of the matching line, while LineNumber, Path, and Context tell you about
the location of the matches. The Context property contains a [MatchInfoContext] structure
with information about the adjacent lines, but only if you used the -Context parameter.
Finally, the Matches property contains an array of all the underlying regex [Match] objects that
the [MatchInfo] used for the line. You can learn more about [Match] objects in the second half
of this chapter, Using the .NET Methods.
Select-String Reference
You can view a complete reference for Select-String at Microsoft Docs⁸.
Like with regex operators, there are case-sensitive and inverse parameters, too. These are -
IMatch, -CMatch, -NotMatch, -INotMatch, and -CNotMatch.
Where-Object Reference
You can view a complete reference for Where-Object at Microsoft Docs⁹.
Like with the operators and Select-String, the matches are case-insensitive by default. To use
case-sensitive matching, pass the -CaseSensitive switch along with -Regex.
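A minimal sketch of the switch statement with -Regex, using an assumed input string:

switch -Regex ('Disk space critical on /dev/sdb2') {
    '^mem'  { 'Memory issue' }
    'disk'  { 'Disk issue' }   # matches: -Regex is case-insensitive by default
}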
Switch Reference
You can view a complete reference for the switch statement at Microsoft Docs¹⁰.
⁹https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/where-object
¹⁰https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_switch
You can specify regex options and a custom error message, too.
1 [ValidatePattern(
2 '^\w+\b\s*\b\w+$',
3 Options = [System.Text.RegularExpressions.RegexOptions]::IgnoreCase -bor
4 [System.Text.RegularExpressions.RegexOptions]::Multiline,
5 ErrorMessage = 'The text "{0}" failed to match pattern "{1}"'
6 )]
You can learn more about Regex Options later in the chapter.
ValidatePattern Reference
You can view a complete reference for the ValidatePattern() attribute at Microsoft
Docs¹¹.
...
...
These assertions work with the -Not parameter, too. Should -Not -Match and Should -Not -MatchExactly produce inverse test results.
A few important points to consider when using the .NET methods are:
• Each method has several overloads. This chapter doesn’t cover them all, but links to the
comprehensive Microsoft documentation are available at the end of the section.
¹⁴https://pester.dev/docs/assertions/assertions
• None of the methods accept string arrays, so you must handle them programmatically.
• All the methods that this section describes accept [RegexOptions]¹⁵ bitwise flags and, for
recent .NET distributions,¹⁶ a match time-out as a [TimeSpan]¹⁷. The Microsoft documen-
tation describes which overloads support these.
• The methods are case-sensitive by default, unlike the native PowerShell operators.
To use case-insensitive matching, you must use the (?i) inline option or
[RegexOptions]::IgnoreCase.
12.2.1 Constructors
You can initialize regex class instances with New-Object or the ::new() constructor.
When creating regex class instances, you must pass a string pattern. You can optionally pass
[RegexOptions] flags and a time-out.
Example 22: Initializing regex class instances with options and time-out
1 # Options equivalent to (?mi)
2 # Note the bitwise or operator used to combine options
3 $MyRegexOpts =
4 [System.Text.RegularExpressions.RegexOptions]::Multiline -bor
5 [System.Text.RegularExpressions.RegexOptions]::IgnoreCase
6
7 # 500 ms match time-out
8 $MyRegexTimeout = [timespan]::FromMilliseconds(500)
9
10 $MyRegex = [regex]::new(
11 '\[(?:error|warn *)\]', $MyRegexOpts, $MyRegexTimeout
12 )
13
14 $MyRegex
12.2.2 IsMatch()
This method, similar to -match, returns a boolean value that indicates whether the pattern
matched the input. Unlike -match, however, it doesn’t capture any of the input if successful.
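A sketch of IsMatch() as both a static and an instance call; the input strings are assumptions, and the instance call reuses $MyRegex from Example 22.

# Static call; the inline (?i) option makes it case-insensitive
[regex]::IsMatch('It rained on Friday.', '(?i)friday')

# Instance call; $MyRegex already has IgnoreCase set
$MyRegex.IsMatch('[2021-03-22T21:20:06] [ERROR] Service [2952] stopped unexpectedly')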
¹⁵https://learn.microsoft.com/en-us/dotnet/api/system.text.regularexpressions.regexoptions
¹⁶All .NET Core versions, all .NET versions (5.0+), and .NET Framework versions 4.5+.
¹⁷https://learn.microsoft.com/en-us/dotnet/api/system.timespan
True
True
12.2.3 Match()
Unlike IsMatch(), this method collects match data if successful. This completes the missing
functionality found in the -match operator and provides a more comprehensive data structure.
Match() searches for the first pattern match in the text and returns a [Match]¹⁸ object. To
determine if the match was successful, use the Success boolean property.
A Group instance populates itself with, and is functionally equivalent to, the last
Capture of that group in the string: Group.Captures[Group.Captures.Count - 1].
This becomes relevant when there are many capturing groups in your pattern.
Perhaps the most important property provided by the [Match] object is Groups. This is a
collection of all groups matched by the regex.
¹⁸https://learn.microsoft.com/en-us/dotnet/api/system.text.regularexpressions.match
True
Example 25: Using the NextMatch() method to match all repeated letters
1 $MyMatch = [regex]::Match('aabccdde', '(?i)(\w)\1')
2
3 while ($MyMatch.Success) {
4 'Found repeat: {0}' -f $MyMatch.Value
5 $MyMatch = $MyMatch.NextMatch()
6 }
Found repeat: aa
Found repeat: cc
Found repeat: dd
12.2.5 Matches()
You can also look for all matches at once using the Matches() method and iterate over the
resulting [MatchCollection]¹⁹. Each item in the collection is a [Match] object.
¹⁹https://learn.microsoft.com/en-us/dotnet/api/system.text.regularexpressions.matchcollection
Example 26: Using the Matches() method to match all repeated letters
1 $MyMatches = [regex]::Matches('aabccdde', '(?i)(\w)\1')
2
3 foreach ($match in $MyMatches) {
4 'Found repeat: {0}' -f $match.Value
5 }
Found repeat: aa
Found repeat: cc
Found repeat: dd
Matches() uses lazy evaluation to populate the [MatchCollection] by default. This avoids
expensive operations for complex patterns or many matches. However, if you attempt to
access properties of the collection, such as Count, the engine populates all possible matches
immediately.²⁰ You should therefore aim to use iterative statements such as foreach to process
the result from a Matches() call.
12.2.6 Replace()
The Replace() method is similar in behavior to the -replace operator. It accepts several extra
parameters, however, and these make it powerful for text processing.
Example 27: Using the Replace() method to replace lowercase ‘m’ words.
1 $MyString = 'May: The sunshine is mellow.'
2
3 [regex]::Replace($MyString, '\bm(\w+)\b', 'y$1')
Once again observe that the .NET methods are case-sensitive by default—the word ‘May’ isn’t
replaced, unlike with -replace in Example 3.
One advantage with Replace() is the ability to limit the number of replacements, and set a start
point for them. You can achieve this with two more integer parameters, count and startAt, and
they only work when Replace() is an instance method.
²⁰Microsoft. (2021, Apr. 06). MatchCollection Class (System.Text.RegularExpressions). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/dotnet/api/system.text.regularexpressions.matchcollection#remarks. [Accessed: Nov. 15, 2021].
Example 28: Using Replace() with a max count and offset to change a log format
1 $MyString = '[21:20:06] [ERROR] Service [2952] stopped unexpectedly'
2
3 $MyRegex = [regex]::new('\[(\w+)\]')
4
5 # Max 1 replacement, starting at position 1 (2nd character)
6 # Skips first square brackets as this begins at position 0
7 $MyRegex.Replace($MyString, '$1:', 1, 1)
These parameters are also available, to varying degrees, in other .NET regex methods.
Much like you can pass a script block to the -replace operator, the Replace() method accepts a delegate method to evaluate matches. In C#, this comes as a MatchEvaluator class instance, but you can cast a PowerShell script block to this type, and can therefore treat it similarly to -replace. The major difference is the lack of the automatic variable $_ for the match, so you must define your script block with a parameter to receive the match. This example uses the same pattern as Example 4 to remove infixed capital letters from a limited number of words.
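A sketch of this technique; the variable names are assumptions, but the call reproduces the output below.

$MyString = 'This sentEnce has infixed cApital leTTErs.'

# The script block receives the [Match] object through a parameter
$Evaluator = [System.Text.RegularExpressions.MatchEvaluator]{
    param($Match)
    Write-Host "Run on '$($Match.Value)'"
    $Match.Value.ToLower()
}

$MyRegex = [regex]::new('(?!\A)\b[a-z]*[A-Z][A-Za-z]*\b')

# Maximum of two replacements, starting at position zero
$MyRegex.Replace($MyString, $Evaluator, 2, 0)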
Run on 'sentEnce'
Run on 'cApital'
This sentence has infixed capital leTTErs.
Replace() ignores the final word with infixed capitals (‘leTTErs’) as the call specifies a
maximum of two replacements from position zero.
12.2.7 Split()
You can use the Split() method similarly to the -split operator, too. The advantage of the
.NET method, however, is access to the same count and startAt parameters as Match(). This
lets you decide the maximum number of splits and where in the input string to start the search.
Unlike with -split, you can’t pass a script block evaluator to Split().
The Unescape() method isn’t as useful for dynamic regex patterns. However, it’s a useful tool
for restoring escaped patterns and converting regex Unicode sequences into printable text.
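A sketch of Unescape() that reproduces the output below:

[regex]::Unescape('\[a-b]')         # restores the escaped class to [a-b]
[regex]::Unescape('Hello\nworld')   # converts \n to a real line feed
[regex]::Unescape('\u00A9')         # converts the Unicode sequence to ©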
[a-b]
Hello
world
©
0
Octets
0
1
Octet 1 = 198
Octet 2 = 51
Octet 3 = 100
Octet 4 = 193
In Example 33, the pattern uses the group name ‘Octets’ twice. Many regex implementations
disallow this, but .NET and PowerShell permit it. The advantage here is that you can append
captures from another group to those of an existing one. The example shows this, permitting a
single iterative statement to retrieve all four decimal octets of the IPv4 address. Reusing a group
name removes access to the original group, therefore backreferences to and replacements with
the original definition aren’t possible.
Example 34: Retrieving group names and indices from the other
1 $MyRegex.GroupNumberFromName('Octets')
2
3 $MyRegex.GroupNameFromNumber(1)
Octets
Group names are strings and group indices are 32-bit integers.
[Enum]::GetNames([System.Text.RegularExpressions.RegexOptions])
Each heading below includes the enumeration for the regex option in parentheses.
False
True
False
True
You can use the string name of the RegexOptions enumeration for simpler, cleaner code.
None:
Name Offset Value
---- ------ -----
0 0 C:\Program Files\PowerShell\7\Modules
1 0 C:
2 2 \Program Files
2 16 \PowerShell
2 27 \7
2 29 \Modules
dir 3 Program Files
dir 17 PowerShell
dir 28 7
dir 30 Modules
ExplicitCapture:
Name Offset Value
---- ------ -----
0 0 C:\Program Files\PowerShell\7\Modules
dir 3 Program Files
dir 17 PowerShell
dir 28 7
dir 30 Modules
The Compiled option can improve performance by compiling the sequence into Common Intermediate Language (CIL) bytecode.²³ The runtime can then directly execute the instructions.²⁴
The caveat to this approach is that compilation is expensive. Compiled regexes improve runtime
performance at the cost of initialization. If you intend to use a regex lots of times, you may find
performance improvements with the Compiled option.
It’s also possible to use the Compiled option with static regex methods. The regex engine caches
the patterns you use in static method calls. This means that the Compiled option can offer a
performance improvement for static method calls, too. This option causes the engine to cache
the compiled CIL bytecode instead of the interpreted opcodes.
You can get or set the static cache size for the engine with [regex]::CacheSize.
False
True
Not passing any value to the regex options parameter is equivalent to passing
None. This parameter is also case-insensitive when passing a string instead of
[System.Text.RegularExpressions.RegexOptions].
²³Common Intermediate Language (CIL) was originally called Microsoft Intermediate Language (MSIL) before the Common Language
Infrastructure (CLI) was standardized.
²⁴Microsoft. (2021, Sep. 15). Regular expression options - Compiled regular expressions. Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/dotnet/standard/base-types/regular-expression-options#compiled-regular-expressions. [Accessed: Jan.
10, 2022].
True
True
There are some limitations to this mode. You can’t put white space between characters that make
up language elements, including:
• Quantifiers {min,max}
• Group initializers (?<name>, (?:, (?<!, (?# etc.
• Unicode classes \p{name}
• Backslash escapes \...
Example 40: Matching from the end of the string with RightToLeft
1 $Text = 'a1b1c1a2b2c2'
2 $TwoChars = '.{2}'
3 $Greedy = '(?<L>.+)(?<R>.+)'
4
5 Write-Host (
6 'First pair, no options: ' +
7 [regex]::Match($Text, $TwoChars, 'None').Value
8 )
9
10 Write-Host (
11 'First pair, right-to-left: ' +
12 [regex]::Match($Text, $TwoChars, 'RightToLeft').Value
13 )
14
15 $GreedyLtr = [regex]::Match($Text, $Greedy, 'None')
16 Write-Host (
17 'Greedy sharing, no options:' + [Environment]::NewLine +
18 ' $1 = ' + $GreedyLtr.Groups[1].Value + [Environment]::NewLine +
19 ' L = ' + $GreedyLtr.Groups['L'].Value + [Environment]::NewLine +
20 ' $2 = ' + $GreedyLtr.Groups[2].Value + [Environment]::NewLine +
21 ' R = ' + $GreedyLtr.Groups['R'].Value
22 )
23
24 $GreedyRtl = [regex]::Match($Text, $Greedy, 'RightToLeft')
25 Write-Host (
26 'Greedy sharing, right-to-left:' + [Environment]::NewLine +
27 ' $1 = ' + $GreedyRtl.Groups[1].Value + [Environment]::NewLine +
28 ' L = ' + $GreedyRtl.Groups['L'].Value + [Environment]::NewLine +
29 ' $2 = ' + $GreedyRtl.Groups[2].Value + [Environment]::NewLine +
30 ' R = ' + $GreedyRtl.Groups['R'].Value
31 )
The engine processes both the tokens and the string in last-to-first order. Since both are reversed,
matches are still coherent. For instance, in Example 40, the RightToLeft match is ‘c2’ as found
in the string, not ‘2c’.
Because the engine is searching from the end of the string to the beginning, a greedy token
later in your pattern matches first. Therefore, two greedy tokens capable of matching the same
characters will share these differently in RightToLeft mode.
The index assignment for captures is still left-to-right, however. The left L group receives a
group number of 1, and the R group, 2, in both modes. Lookarounds don’t change their direction,
either. A lookahead will always match characters after it, and a lookbehind will always match
characters before it. The final important point is that the start index, available in some .NET
methods, is still an offset from the start of the string. The difference is that the engine searches
backwards from this index in RightToLeft mode, so it produces different behavior.
In contrast with many regex implementations, .NET regex supports Unicode matching in its default state. This isn't the case in ECMAScript mode, and several character classes differ in what they match:
• \w matches only [a-zA-Z_0-9]
• \d matches only [0-9]
• \s matches only [ \f\n\r\t\v]
When a single decimal digit follows a backslash, the engine always interprets it as a backreference. If a capture with that number doesn't exist, the engine throws an exception. In ECMAScript mode, a nonexistent capture results in the interpretation of a literal digit instead.
For more than one digit following a backslash, the engine interprets a decimal backreference. If a
capture with that index doesn’t exist, it assumes an octal character code up to \377, with trailing
digits interpreted literally. In ECMAScript mode, the engine attempts to find a backreference with
as many octal digits as possible and convert them to decimal. If this doesn’t exist, it assumes an
octal character code up to \377, with trailing digits interpreted literally.
In ECMAScript mode, the engine updates any captures that include backreferences to themselves
on each iteration. This enables a self-backreference to match part of a capture in the first iteration
of the capture.
The following example shows the difference in behavior between ECMAScript mode and the
canonical regex mode.
NORM, 1:
ECMA, 1: 1
NORM, 11:
ECMA, 11: 1
NORM, 111:
ECMA, 111: 111
Culture-sensitive case comparisons can be problematic. The invariant culture circumvents this, using a predetermined character set indifferent to the current culture.
IgnoreCase, Multiline
IgnoreCase, Multiline
Multiline
IgnoreCase, Multiline
²⁷https://learn.microsoft.com/en-us/dotnet/standard/base-types/regular-expression-options
13. Regex Deep Dive
After reading Accessing Regexes, you should have a general understanding of how to use regexes
in PowerShell. This chapter takes you deeper into the topic, with advanced syntax, replacement
patterns, and debugging.
When you see the Invalid pattern error, it means there’s something wrong with your regex.
Common causes of invalid patterns include:
• Not escaping metacharacters such as brackets ()[], anchors ^$, quantifiers ?*+, or the
backslash \.
• Not closing a group with ) or character class with ].
• Referencing a nonexistent capturing group name or index (a).+\2.
• Reversing a range reference in a character class [z-a].
• Using an escape where you don't need one, or an incomplete escape such as \x without its hexadecimal digits.
• Not escaping special PowerShell characters in expandable strings "\$(\d)".
However, not all mistakes cause invalid patterns. Many erroneous patterns are valid regexes but
don’t behave as intended.
False
True
12.99
In Example 2, an intended literal dollar sign $ isn’t escaped, and the engine interprets this as an
end of string anchor. This is still a valid pattern, but it’ll never succeed because the tokens after
the $ are unable to match the pattern. Therefore, it’s important to think from the perspective of
the regex engine to construct effective patterns.
Other than those previously mentioned, causes of valid but erroneous patterns include:
• Not escaping tokens with special meanings in regex, for example $3.50 (anchor)
• Not escaping special PowerShell characters in expandable strings, for example "\$3.50"
($3 PowerShell variable)
• Trying to escape special PowerShell characters in literal strings, for example '\`$(\d)'
(literal backtick in regex pattern)
• Inserting commas between ranges in a character class, for example [a-z,0-9]
• Not using the correct capitalization when case-sensitivity is on
• Unintentionally inserting white space into a pattern
• Not considering line breaks and their interactions with multiline mode
• Zero-length matches (zero-or-more quantifier within alternation)
• Confusing the numerical order of capturing groups
Watch a regex engine step through the pattern and string in Example 2 from the
Regex 101 chapter at the Regex 101 Website². This uses a different regex engine but
is sufficiently similar for this example. At each step, the debugger highlights the current
token in the regex pattern, along with the current match.
¹Microsoft. (2021, Sep. 15). Details of regular expression behavior. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/dotnet/standard/base-types/details-of-regular-expression-behavior. [Accessed: Jan. 10, 2022].
²https://regex101.com/debugger?flags=i&flavor=pcre2®ex=Monday%7CTuesday%7CWednesday%7CThursday%7CFriday%
7CSaturday%7CSunday&testString=It%20rained%20on%20Friday,%20but%20Monday%20will%20be%20clear.
The regex engine steps through $MyPattern one token at a time, attempting to match each to
the current target in the input text.
1. The engine finds no matches for any alternatives on the first character ‘I’, so it backtracks
to the beginning of the pattern and moves on.
2. When the engine reaches the second character ‘t’, it matches T in the Tuesday alternative,
as the -match operator is case-insensitive.
3. The next (third) character, a space (0x20), doesn’t match u in Tuesday, however, so the
engine continues to the other alternatives.
4. The same happens when the engine tries the Thursday alternative.
5. The engine finds no matches for any alternatives on the following characters (3 to 13) ‘
rained on ‘ so backtracks to the beginning of the pattern for each.
6. When the engine reaches the ‘F’ in ‘Friday’, it matches the F in the Friday alternative, and
so it tries the next element r against the next character ‘r’.
7. This matches too, and the engine continues matching elements to characters until Friday
in the pattern matches ‘Friday’ in $MyString.
8. The -match operator stops at the first match, so the engine stops and returns $true.
Backtracking occurs in this scenario because the pattern contains an alternation construct (which
is a fancy way to say “either/or patterns”). These decisions are the basis of backtracking in NFA
engines. Unlike a DFA engine, which tracks all matches as it moves along the text, an NFA engine
processes each token sequentially and remembers these decision points (backtracking positions).
If the pattern can’t match at a later point, the engine will backtrack to this point and try the next
decision.³
This effectively creates a new branch of possibilities, and the engine follows this branch until
either it makes a match or reaches the end of the pattern. The engine only gives up when it has
exhausted these avenues by trying each decision in turn. This way, it tries every permutation at
least once. You may think that this has the potential to take a lot of time. You’d be right.
'87db9d39-ddc8-413c-84ac-0be925a8230a'
True
'87db9d39-ddc8-413c-84ac-0be925a8230a '
MethodInvocationException: Exception calling "IsMatch" with "4" argument(s):
"The RegEx engine has timed out while trying to match a pattern to an input
string. This can occur for many reasons, including very large inputs or
excessive backtracking caused by nested quantifiers, back-references and
other factors."
The RegEx engine has timed out while trying to match a pattern to an input string.
When the input string is a valid GUID, the engine finds a complete match in less than a
millisecond. However, an errant space changes things, and the 5-second time-out has to save
the day.
The issue here is that the match fails. When this happens, the nested one-or-more + quantifiers
cause catastrophic backtracking. Consider what happens when the engine has matched all but
the space. In the examples below, the caret ^ symbol represents the current match target of the
engine, and parentheses () represent match components of the outer group with its + quantifier.
'(87db9d39-)(ddc8-)(413c-)(84ac-)(0be925a8230a) '
1 2 3 4 5 ^
The next token in the pattern is the end-of-string \Z anchor. Instead, the next character of the
input string is a space. The match has failed, so the engine backtracks. The inner + quantifier,
which was greedy and initially matched the entire hexadecimal block, gives up one character
from its fifth match component (the last ‘a’).
'(87db9d39-)(ddc8-)(413c-)(84ac-)(0be925a8230)a '
1 2 3 4 5 ^
However, the entire group can match one or more hexadecimal digits, so it does.
'(87db9d39-)(ddc8-)(413c-)(84ac-)(0be925a8230)(a) '
1 2 3 4 5 6 ^
Can you see where this is going? The engine is now effectively back where it was before, but with
another permutation of match components. The fifth match component of the inner + quantifier
gives up another character. These two unmatched characters now match as a repeat of the outer
+ quantifier.
'(87db9d39-)(ddc8-)(413c-)(84ac-)(0be925a823)(0a) '
1 2 3 4 5 6 ^
Yet again, the match fails at the space character. This time, however, the last match component
can also give up part of its match. The outer + quantifier can then match the final two characters
in a separate component. Therefore, two new permutations have emerged.
'(87db9d39-)(ddc8-)(413c-)(84ac-)(0be925a823)(0)(a) '
1 2 3 4 5 6 7 ^
When the fifth match component gives up another character, there are now four new permuta-
tions.
'(87db9d39-)(ddc8-)(413c-)(84ac-)(0be925a82)(30a) '
1 2 3 4 5 6 ^
'(87db9d39-)(ddc8-)(413c-)(84ac-)(0be925a82)(30)(a) '
1 2 3 4 5 6 7 ^
'(87db9d39-)(ddc8-)(413c-)(84ac-)(0be925a82)(3)(0a) '
1 2 3 4 5 6 7 ^
'(87db9d39-)(ddc8-)(413c-)(84ac-)(0be925a82)(3)(0)(a) '
1 2 3 4 5 6 7 8 ^
When the fifth match component gives up another character ('2'), it creates another eight permutations. Each character given up doubles the number of new permutations for match components, in 2ⁿ fashion. Apply this logic to the whole input string, and it's easy to see why the engine ran out of time.
Watch a regex engine step through this pattern without the space⁴ and with the space⁵
at the Regex 101 website.
Catastrophic backtracking is often the result of nested quantifiers. By thinking about what
happens when a match fails, you can spot these scenarios and correct for them. So, what’s the
fix in this case?
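Judging by the debugger link in the footnote, the fix wraps the repeated block in an atomic group (?>...), which forbids giving back characters once the group has matched. A sketch, reusing the 5-second time-out:

$MyPattern = '(?>[0-9a-f]+-?)+\Z'
$MyTimeout = [timespan]::FromSeconds(5)

[regex]::IsMatch('87db9d39-ddc8-413c-84ac-0be925a8230a', $MyPattern, 'IgnoreCase', $MyTimeout)
[regex]::IsMatch('87db9d39-ddc8-413c-84ac-0be925a8230a ', $MyPattern, 'IgnoreCase', $MyTimeout)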
⁴https://regex101.com/debugger?flags=i&flavor=pcre2®ex=(%5B0-9a-f%5D%2B-%3F)%2B%5CZ&testString=87db9d39-ddc8-413c-
84ac-0be925a8230a
⁵https://regex101.com/debugger?flags=i&flavor=pcre2®ex=(%5B0-9a-f%5D%2B-%3F)%2B%5CZ&testString=87db9d39-ddc8-413c-
84ac-0be925a8230a%20
'87db9d39-ddc8-413c-84ac-0be925a8230a'
True
'87db9d39-ddc8-413c-84ac-0be925a8230a '
False
This time, when the engine can’t match the space in the second string, none of the match
components of the outer + quantifier give up characters. This only leaves stepping through the
input string and attempting to start the match at later positions. The engine soon runs out of
permutations to try, and processing stops.
Watch a regex engine step through this pattern with the trailing space at Regex 101⁶.
13.2.1 No Subroutines
A subroutine in regex is the ability to reuse a subexpression at a different point in the input string.
This differs from backreferencing, which only reuses the capture from a capturing subexpression.
⁶https://regex101.com/debugger?flags=i&flavor=pcre2®ex=(%3F%3E%5B0-9a-f%5D%2B-%3F)%2B%5CZ&testString=87db9d39-
ddc8-413c-84ac-0be925a8230a%20
Subroutines can significantly shorten patterns by reusing repetitive elements. .NET regex doesn’t
have this capability, as is clear with the repetition in Example 33 from the Accessing Regexes
chapter.
13.2.2 No Recursion
Recursive matching, available in some regex implementations, makes it possible to “step into” a
new recursion level. Here, the engine reapplies either the entire pattern or a captured group at the
current point in the input string. If this level reaches the recursive token again, the engine steps
into a deeper recursion level. This functionality adds a lot of possibilities for regex matching,
at the cost of simplicity and efficiency. .NET regex doesn’t support recursion but offers an
alternative solution with balancing groups. You can learn more about balancing groups in the
Advanced Subexpressions and Backreferences section of this chapter.
– Break long sequences of tokens into individual lines with the same indentation
Example 5 shows the pattern from Example 33 of the Accessing Regexes chapter,
rewritten in extended form. You could use this string as a regex pattern, so long as
you pass the extended/ignore pattern white space flag, either inline as (?x) or with
[RegexOptions]::IgnorePatternWhitespace. Example 39 from Accessing Regexes used
this feature.
Once you’ve deconstructed your pattern, you can interpret it. As in Example 5, it’s helpful to
annotate each token or group of tokens. This gives you a set of human-readable instructions to
step through. Think about how the engine is going to interpret each token from start to finish,
and how different inputs could change this.
The engine tries alternatives in the order they appear in a pattern. The third alternative in the
“Octets” group, which matches an optional 1/0 and two digits from zero through nine, can match
the first two digits of blocks starting with 2xx, with the [0-9]{1,2} token. If this alternative
appeared first, it could match 2xx numbers incompletely, leaving the last digit unmatched. For
the first three blocks, a mandatory period character must follow. This would cause the match to
fail and the engine would backtrack, trying the other alternatives and leading to the expected
behavior. For the fourth block, however, the match of only two characters could stand, but only
if no anchors or lookaheads were checking for further digits. In Example 5, the end of string
anchor $ prevents this unexpected behavior. If the pattern needed to extract IP addresses from
the middle of a string, however, a different approach is necessary.
When using alternatives, consider how you order them. You can also use lookarounds or anchors
to ensure an alternative is specific to the target match. Adding a negative lookahead for an extra
digit (?![0-9]) or a word boundary \b prevents the last block matching when extracting an IP
address from a sentence.
Notice that with no anchor, the last octet of the final IP address matches as only '25'. Both a word boundary and a negative lookahead prevent this.
The engine supports character escapes in several numeric systems. Matching Unicode characters directly is also possible with the \uHHHH escape,
where HHHH is the hexadecimal UTF-16 code point (big-endian). For example, \u00A9 matches
the copyright © symbol (U+00A9) and \u2021 matches the double dagger ‡ symbol (U+2021).
You can match any character encoded by UTF-16 in the Basic Multilingual Plane (BMP, U+0000–
U+FFFF).
You can also match Unicode blocks and categories with the \p{...} escape. To match a block,
use \p{Is...} where … is a block name. For example, \p{IsLatin-1Supplement} matches all
characters in the Latin-1 Supplement block (U+0080–U+00FF), including the copyright symbol.
To match a category, use \p{X} or \p{Xy} where X and Xy are category and subcategory
shorthands. For example, \p{Po} matches characters from the Other Punctuation subcategory,
including the double dagger symbol. By extension, \p{P}, which matches characters from the
Punctuation category, also matches the double dagger.
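A sketch of these escapes against the two symbols discussed, reproducing the output below:

'©' -match '\u00A9'                    # direct UTF-16 escape
'©' -match '\p{IsLatin-1Supplement}'   # Unicode block
'‡' -match '\u2021'
'‡' -match '\p{Po}'                    # Other Punctuation subcategory
'‡' -match '\p{P}'                     # Punctuation category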
True
True
True
True
True
As with the word \w, decimal \d, and white space \s class shorthands, the Unicode class
shorthand has an inverse \P{...}, which matches all but that category or block.
13.4.1.1 UTF-16
One important consideration when you are working with Unicode characters is that PowerShell
and .NET operate with UTF-16 encoding. This applies to the regex engine and matching, too.
The engine stores characters beyond the Basic Multilingual Plane (BMP, U+0000–U+FFFF) as
UTF-16 surrogate pairs. These are special Unicode code points enabling two 16-bit code points
to represent one beyond U+FFFF. Any Unicode code point that isn’t a surrogate is known as a
scalar. Each surrogate pair consists of:
• A high surrogate code unit in the range U+D800–U+DBFF
• A low surrogate code unit in the range U+DC00–U+DFFF
To determine the code point from a surrogate pair, the encoding system gets the high index from the high surrogate character by subtracting 0xD800 and multiplying by 1024 (0x400), then adding 0x10000. This represents the code point range U+10000–U+10FC00 in steps of 1024. The encoding system then gets the low index from the low surrogate by subtracting 0xDC00. This represents the additional range 0x0–0x3FF (0–1023). By adding these two values together, any code point in the range U+10000–U+10FFFF is possible.
When attempting to make a match, the regex engine will not examine the Unicode code point,
instead it will examine the two UTF-16 surrogates. This means you can’t use regex patterns for
surrogate pairs in the way you would for BMP characters.
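A short sketch illustrating the point; the character chosen is an assumption.

$Emoji = [char]::ConvertFromUtf32(0x1F600)   # a character beyond the BMP (U+1F600)

$Emoji.Length                        # 2: the string holds a surrogate pair
[regex]::Matches($Emoji, '.').Count  # 2: the engine sees two UTF-16 code units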
abcdefghijklmnop
abcdefghijk
abcefghijk
The range operator, .., only works with characters beginning in PowerShell 6.0. In prior
versions, the range operator only works with integers.
• Positive subtractions -[...] remove characters or ranges from the initial class.
• Inverse subtractions -[^...] remove all characters or ranges from the initial class except
those in the subtraction class.
• You can subtract within a subtraction (nested subtractions work).
• You can’t add characters that aren’t in the initial class by including them in the subtraction
(double-negatives don’t work).
• The engine ignores characters or ranges not in the initial class (no errors for out-of-range
subtractions).
⁷https://learn.microsoft.com/en-us/dotnet/standard/base-types/character-classes-in-regular-expressions#SupportedNamedBlocks
⁸https://learn.microsoft.com/en-us/dotnet/standard/base-types/character-classes-in-regular-expressions#
SupportedUnicodeGeneralCategories
⁹https://learn.microsoft.com/en-us/dotnet/standard/base-types/character-encoding-introduction
abcdefg
jk
acdhijk
There are two ways to apply these options to your patterns. The first applies them to the rest of
the pattern after an option modifier. Turn an option on with a letter from Table 1, and off with
a hyphen (minus sign) before that letter. The general form for this is (?msnix-msnix).
¹⁰https://learn.microsoft.com/en-us/dotnet/standard/base-types/character-classes-in-regular-expressions#character-class-
subtraction-base_group---excluded_group
en-US
en-us
en-GB
In Example 10, the regex option flags turn on IgnoreCase, as it would be with the -match
operator. However, the -i modifier overrides this and disables case insensitivity within the
pattern. The m modifier turns on multiline mode, allowing the pattern to match a culture code
from each line, not just the one immediately following the beginning of the string.
The modifier takes effect at its current position in the pattern, so the modifier doesn’t alter the
behavior of any tokens that come before it.
0 abc
1 a
2 c
In Example 11, the engine doesn’t capture the second group but captures the first and third.
Explicit capture is only on between (?n) and (?-n), meaning the engine can capture the other
unnamed groups.
0 abc
1 b
In Example 12, ExplicitCapture is present, but the engine captures the second unnamed group.
This is because the group is inside a subexpression with ExplicitCapture turned off.
Name Value
---- -----
2 f
1 e
0 abcdef
Remember that with -match, you only get the first match, and you only have access to the last
capture for each group. In Example 13, the captures for ab and cd aren’t accessible. Neither is
the second match for hijklmn, nor its captures.
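A minimal sketch of the difference, using an assumed pattern rather than the one from Example 13:

'abcdef' -match '(..)+'
$Matches[1]                              # ef - only the last capture of group 1

$Match = [regex]::Match('abcdef', '(..)+')
$Match.Groups[1].Captures.Value          # ab, cd, ef - every capture is available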
When the IgnorePatternWhitespace flag is present, you can use end of line comments. The
engine ignores all text after a hash symbol # until the end of the line.
¹¹https://learn.microsoft.com/en-us/dotnet/standard/base-types/regular-expression-options#specifying-the-options
Note that in the pattern, the until month group is the third capturing group from left-
to-right, but is actually $2 in the replacement pattern. This is because the regex engine
only assigns numeric indexes to named captures after all unnamed ones.
In Example 16, the $$ token inserts a literal dollar sign. The protected numeric capture reference
${NNN} prevents the engine from interpreting any following digits as part of the group index.
For instance, the replacement pattern $10 represents the tenth capture, but ${1}0 represents the
first capture, followed by a literal zero.
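A quick sketch of these replacement tokens (the input strings here are assumptions, not the book’s Example 16 input):

'Noice' -replace '(N)', '$1$1'        # NNoice - $1 inserts the capture
'Noice' -replace '(N)', '${1}0'       # N0oice - group 1, then a literal zero
'100%'  -replace '(\d+)%', '$$${1}'   # $100   - $$ inserts a literal dollar sign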
Noice ice!
NNNNoice ice!!!!
NNNNoice ice!!!!
You must escape the backtick in expandable strings, and the single quotation mark in literal
strings. You can achieve both escapes by doubling the relevant character.
"Smith, John"
"Doe, Jane Luisa"
"Bloggs, Joe D."
"Smith-Bloggs, Joe"
In Example 19, note how the last name is capture 2, and the first name is capture 3. This is for the
same reason as in Example 16. The .NET regex engine indexes named captures after all unnamed
ones. The first name group is therefore the last capture, and accessed with ${First}, $3 or $+.
Given the input string ‘15%’, \1 would match ‘15’ and \2 would match ‘%’. For named backreferences, use
\k<name> or \k'name', where name is the group name from (?<name>...).
The engine gives all captures a numeric index regardless of whether the group is named, and
you can access these with \NNN or \k<...>. When accessing a nonexistent backreference using a
numeric index, the engine will treat it as an octal character code instead. The Accessing Regexes
section discusses backreference interpretation under the ECMAScript Mode heading.
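A minimal sketch of numeric and named backreferences in a pattern:

'abcabc' -match '^(abc)\1$'             # True - \1 must repeat what group 1 captured
'abcxyz' -match '^(abc)\1$'             # False
'"quoted"' -match '^(?<q>")\w+\k<q>$'   # True - \k<q> repeats the opening quotation mark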
True
False
True
True
True
True
True
True
If you use a numeric name (?<NNN>) for your capturing group, it overrides the internally
assigned index. Doing this means the capture names will no longer reflect the order of capturing
groups in the pattern. You probably shouldn’t do this.
False
True
Capturing group names are always case-sensitive. This applies whether the IgnoreCase flag is
present (such as with -match) or not.
The function of the negative lookahead here is to exclude all lines beginning with zero or more
space characters followed by a hash symbol #. All other lines match the .+ token, as do any
following newlines. The -replace operator replaces these matches, leaving only the comment
lines and adjacent newlines.
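A small sketch of that technique, using assumed input text rather than the book’s example:

$Script = @'
# first comment
Get-Process
  # indented comment
Get-Service
'@

$Script -replace '(?m)^(?!\s*#).+'
# Only the comment lines remain; the removed lines leave blank lines behind.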
You can also use anchors in lookarounds, as they’re already zero-width. You can negate a start-
of-line anchor (?!^) in multiline mode, for example.
¹²https://learn.microsoft.com/en-us/dotnet/standard/base-types/backreference-constructs-in-regular-expressions
Match "cat"
article = The
noun = cat
Match "mat"
article = the
noun = mat
Match "floor"
article = the
noun = floor
Don’t get spooked! This pattern is a lot simpler than it looks. Breaking it down:
• ‘a hat’ doesn’t match because the ‘t’ in ‘hat’ is the end of the line. The negative lookahead
(?!$) matches this and causes failure.
• ‘The knitting’ doesn’t match because ‘knitting’ ends with ‘ing’. The negative lookbehind
(?<!ing) matches this and causes failure.
Notice that the overall match values (‘cat’, ‘mat’, ‘floor’) don’t include the articles. This is because
the tokens that match the article are inside a lookbehind, which is zero-width. However, the
capturing group ‘noun’ could still capture these as the engine processed the lookbehind. The
ability to capture text without consuming it, coupled with lookaround size variability, provides
a lot of room for creative regexes.
Lookarounds Reference
You can view a complete reference for lookarounds on the Grouping Constructs¹³ page
at Microsoft Docs.
17 }
18 Write-Host (
19 ' Raw attributes: [' + $Match.Groups['attribs'].Value + ']'
20 )
21 }
22
23 Write-Host 'With Id:'
24 & $ProccessMatches -Match $MyRegex.Match($WithId)
25
26 Write-Host 'Without Id:'
27 & $ProccessMatches -Match $MyRegex.Match($WithoutId)
With Id:
Attributes:
id = some-id
class = styleA, styleB
Raw attributes: []
Without Id:
Attributes:
Raw attributes: [ class="styleA styleB"]
Try deconstructing the pattern from Example 24 yourself. See if you can interpret what
it’s doing before reading the explanation ahead.
The pattern in Example 24 captures text from HTML tags differently, depending on whether an
id attribute is present. If so, the pattern extracts each attribute’s name and value. Otherwise, it
collects the attribute span in one lump.
Deconstructing the pattern reveals the distinct if, then, and else parts of the conditional.
(?mix)
^<div # Literal "<div" after start of line
(?( # Conditional statement (if)
[^>]+ # One or more characters that aren't ">"
id=" # Literal 'id="'
[^"]+ # One or more characters that aren't '"'
" # Literal '"'
) # Subexpression if condition matches (then)
(?: # Noncapturing group
\s* # Zero or more space characters
(?<name> # Capturing group "name"
[^"]+ # One or more characters that aren't '"'
) # Match group once
=" # Literal '="'
(?<value> # Capturing group "value"
[^"]+ # One or more characters that aren't '"'
) # Match group once
" # Literal '"'
)* # Match group zero or more times
\s* # Zero or more space characters
| # Subexpression if condition fails (else)
(?<attribs> # Capturing group "attribs"
[^>]+ # One or more characters that aren't ">"
) # Match group once
) # End of conditional construct
>$ # Literal ">" before end of line
The engine treats the if part of the conditional as zero-width. The yes (then) part
therefore needs to be capable of matching the same text, just as with a positive lookahead
(?=...).
As mentioned at the beginning of this section, you can also use the same conditional construct
to evaluate the success of an earlier named or unnamed capturing group. The standard syntax
for capture-based conditional matching is (?(NNN)yes|no) where NNN is a group index, or
(?(name)yes|no) where name is a group name.
True
False
False
True
The pattern in Example 25 matches a GUID (UUID) with or without curly braces {}. If it finds
an opening brace, a closing one must be present or the match fails. The example also shows that
the no (else) group is optional. You can omit the pipe | and only include a yes (then) group. It’s
possible to leave the yes group blank, too. This is true for both expression and capture-based
conditional constructs.
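A minimal sketch of the capture-based form, in the same spirit as Example 25 (the example’s exact pattern may differ):

# If group 1 (an opening brace) matched, a closing brace must be present.
$Guid = '^(\{)?[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}(?(1)\})$'

'{3f2504e0-4f89-11d3-9a0c-0305e82c3301}' -match $Guid   # True
'3f2504e0-4f89-11d3-9a0c-0305e82c3301'   -match $Guid   # True
'{3f2504e0-4f89-11d3-9a0c-0305e82c3301'  -match $Guid   # False - unbalanced brace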
Conditionals Reference
You can view a complete reference for conditionals on the Alternation Constructs¹⁴ page
at Microsoft Docs.
¹⁴https://learn.microsoft.com/en-us/dotnet/standard/base-types/alternation-constructs-in-regular-expressions#conditional-matching-with-an-expression
Group 0:
Capture: "(enclosed in parentheses)"
Group Open:
Group Close:
Capture: "enclosed in parentheses"
Notice that the opening parenthesis, ‘(‘, is absent for the Open group capture. The engine pushed
this capture to the stack when the Open group matched, but popped it when the Close-Open
balancing group matched. You can see that the engine captured the text between the Open capture
and Close-Open construct, and placed this in a capture of the Close group. Notice that the
closing parenthesis, ‘)’, is absent from any captures, too. The engine doesn’t capture the contents
of a balancing group construct, but does consume them.
You can omit the closing group name (?<-Opening>...) to prevent the capture of the inter-
mediate text. You can also use an empty balancing group construct (?<Closing-Opening>) to
guarantee the removal of the latest Opening capture.
The balancing group Quotes captures nested quoted phrases from the balanced string. The empty
StartQuote group is the indicator; the pattern uses a conditional (?(StartQuote)(?!)) to
assert this. The (?!) part is simply an empty negative lookahead, which always fails, and this
acts as a breakpoint if StartQuote still has captures. Therefore, when an uneven number of
quotes breaks the balance, StartQuote has leftover captures and the conditional causes match
failure.
To understand what’s happening in Example 27, the next example displays what each group
matches.
Group "0":
Capture 0 (pos 0) = "Hello?", I queried. The stranger replied, "Why is
"Hello" a question?".
Group "QuoteSpans":
Capture 0 (pos 0) = "Hello?", I queried. The stranger replied,
Capture 1 (pos 43) = "Why is "Hello" a question?".
Group "StartQuotePlusContents":
Capture 0 (pos 0) = "Hello?
Capture 1 (pos 43) = "Why is
Capture 2 (pos 51) = "Hello
Group "StartQuote":
Group "EndQuoteAndPostfix":
Capture 0 (pos 7) = ", I queried. The stranger replied,
Capture 1 (pos 57) = " a question?
Capture 2 (pos 70) = ".
Group "Quotes":
Capture 0 (pos 1) = Hello?
Capture 1 (pos 52) = Hello
Capture 2 (pos 44) = Why is "Hello" a question?
This means that the pattern has to enter and exit several deeper levels recursively, before exiting
the current level. In the list, this occurs when “Open” or “Close” happen consecutively (3 → 4
and 5 → 6). The one-or-more + quantifier on the two quote groups (StartQuotePlusContents
and EndQuoteAndPostfix) supports this, as many opening captures can occur successively
before the balancing groups. The balancing groups can pop these captures to close those quotes
successively, too.
The pattern also has to handle entering and exiting many instances of the same quote level.
In the list, this occurs when “Open” follows a “Close” of the same level (2 → 3). The zero-or-
more * quantifier on the QuoteSpans group supports this, as many capture push-pop cycles can
occur for the inner groups. There are two instances of the first quotation level, which is why the
QuoteSpans group has two captures. Each quote span contains quotation marks, quote contents,
and the text that follows until the next quote.
The StartQuotePlusContents group reveals the text that StartQuote captures before Quotes-
StartQuote removes it. Captures 0 and 1 are the level 1 quotes, while Capture 2 is the level 2
quote, both with starting quotation marks. The EndQuoteAndPostfix group shows what the
Quotes-StartQuote group would have captured if it wasn’t a balancing group, along with the
text following the quotes.
The pattern in Example 27 behaves exactly the same as Example 28, but uses noncapturing groups
in place of StartQuotePlusContents, EndQuoteAndPostfix, and QuoteSpans.
¹⁵https://learn.microsoft.com/en-us/dotnet/standard/base-types/grouping-constructs-in-regular-expressions#balancing-group-definitions
14. Regex Best Practices
That’s it! You’ve now covered all the major topics on regexes in PowerShell and .NET. This
chapter rounds off the regex part of the book with some important concepts that’ll aid you in
crafting effective regexes.
The [regex] constructor and the static [regex] methods support a matchTimeout parameter, which
aborts the match process and throws an exception if the operation takes longer than the time-out
period. This becomes important in production environments, where unconstrained input coupled
with vulnerable patterns could lead to a regex denial of service (ReDoS).
All .NET and .NET Framework versions since 4.5 support regex time-outs.
You can see a regex time-out in action in Example 3 from the Regex Deep Dive chapter. Here,
it prevents catastrophic backtracking from continuing unchecked. The native PowerShell regex
operators don’t support a match time-out, so use this feature of the .NET methods in scenarios
with unsupervised commands or in production environments.
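For example, a minimal sketch of a time-out protecting against a vulnerable pattern (the pattern and input here are illustrative):

$Timeout = [timespan]::FromSeconds(2)
$Evil    = ('a' * 40) + '!'

try {
    [regex]::IsMatch($Evil, '^(a+)+$', 'None', $Timeout)
}
catch [System.Text.RegularExpressions.RegexMatchTimeoutException] {
    Write-Host 'Match aborted: the pattern exceeded the two second time-out.'
}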
Both patterns in the example capture one sentence in each match and each word of the sentence
as extra captures of that match. The second pattern generates excessive captures, however. It
captures both the individual words and those words followed by any spaces. This means that
each match contains two copies of almost identical text.
It’s therefore important to use noncapturing (?:...) groups wherever you don’t need to extract
text, such as in grouping constructs used purely for repetition. You can also use the ExplicitCapture
regex option or the inline option (?n) to disable capturing for all unnamed groups.
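A small sketch of the difference (simpler than the sentence-matching example discussed above):

$Text = 'one two three.'

[regex]::Match($Text, '(?:(\w+)\s*)+\.').Groups[1].Captures.Value
# one, two, three - only the words are captured

[regex]::Match($Text, '((\w+)\s*)+\.').Groups.Count
# 3 - the extra capturing group duplicates the words (plus trailing spaces)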
Of course, if your pattern is going to change, you might need to create a new object in each
method or function call. In these circumstances, apply the same rule and minimize the number
of instances of the same pattern.
• Should you process the text line-by-line, or as a whole? Either can be more or less efficient
than the other in different contexts. Looping statements with a pattern for single lines are
often simpler but don’t work for target matches that span many lines.
• Could you achieve the same result with several simpler patterns and PowerShell statements?
Simpler patterns are much easier to debug. Complex patterns can be more efficient than
switching between programmatic and regex logic several times.
• Should there be some preprocessing? Filtering and formatting the input text (with more
regexes or otherwise) before passing it on to the extraction pattern can simplify the problem.
This is especially important with unconstrained input.
• What’s faster to create? What’s faster to run? Some solutions might be quick to put together,
but at the cost of performance. Consider how efficient your pattern-matching needs to be,
and balance it against how much time you have to deliver a solution.
Before you craft a regex pattern to solve a problem, prepare a strategy. This could involve testing
pattern fragments with various inputs, or looking at the consistency of sample data. It may
involve evaluating the performance of one approach versus another, such as line-by-line and
whole-text matching. An idea for the solution will often emerge during this planning step, and
it’s simply a matter of putting all the parts together and refining them.
This section considers the pattern from Example 1 and discusses potential refinements. The
requirement is that the pattern captures individual sentences and isolates each word within a
sentence, without accompanying punctuation. After overcoming the catastrophic backtracking
with an atomic group, the pattern looks like this:
(?m)\b(?>([\w"'\(\)/-]+)[;,]?\s*)+[.?!]+
It’s now better behaved, but there’s still needless computation when at least one sentence
doesn’t end with a valid terminator (.?!). This is because the engine backtracks and tries to find a
match starting at each subsequent word boundary. For the string ‘This has no end’, it would
try the following.
In each instance, the match fails because of the lack of sentence termination, but the engine keeps
trying shorter matches until it runs out of whole words. This isn’t catastrophic backtracking, but
affects the pattern’s efficiency.
There are a variety of approaches that could improve the efficiency of this pattern. You can
remove the need for a word boundary altogether by using a more advanced beginning-of-
sentence assertion. The positive lookbehind (?<=^\s*|[.?!]\s+) asserts that before the current
position, there must be either:
• The start of a line, followed by zero or more space characters.
• A sentence terminator ([.?!]), followed by one or more space characters.
This change improves performance by excluding some match candidates before the engine
processes the rest of the pattern, effectively providing a short-circuit. Since .NET supports
variable-length lookbehinds, you can even match a variable number of spaces before the sentence.
Integrating this into the pattern gives:
(?m)(?<=^\s*|[.?!]\s+)(?>([\w"'\(\)/-]+)[;,]?\s*)+[.?!]+
The pattern still can’t match sentences that contain unexpected characters. Using a negative
custom character class solves this problem. The class [^.?!;,\s] matches anything except
spaces (\s), sentence terminators (.?!), and sentence dividers (;,). Assuming this is the desired
behavior, the pattern now looks like this:
(?m)(?<=^\s*|[.?!]\s+)(?>([^.?!;,\s]+)[;,]?\s*)+[.?!]+
Since the pattern is now quite long, the following snippets use the IgnorePatternWhitespace
option, with patterns spread over several lines.
(?mx)
(?<=^\s*|[.?!]\s+) # 1. Beginning-of-sentence assertion
(?> # Word matching group, no backtracking
[\('"/:.]* # 2. Any characters before a word
([^.?!;,\s\(\)"'/]+) # 3. Word characters (capture)
[;,\)'"/]* # 3. Any characters after a word
\s* # 4. Zero or more spaces
)+ # Match one or more words
[.?!]+ # 5. One or more sentence terminators
The pattern now handles characters that appear before words (part 2), and after words (part 4).
Part 3 in the pattern is the capture, which now excludes these extra characters. This is necessary
to prevent greedy matching from consuming characters that part 4 would capture, or part 2 from
giving up characters to part 3 in any backtracking scenarios. Another benefit is that the pattern
now matches decimal points within numbers (such as 6.47 and .39).
Is the pattern fit for purpose now? The only way to discover this is to run tests.
Immediately, two further problems are apparent: The pattern interprets the apostrophe ' in
“customer’s” as a boundary between two words. It also interprets the period . in ‘Jane E. Doe’ as
a sentence terminator.
Tackling the apostrophe issue first, one solution is to remove the exclusion from the main word
group and use a negative lookbehind to ensure an apostrophe doesn’t appear at the end of a word.
While it would be possible to use conditional logic and balancing groups to differentiate single
quotation marks from apostrophes, it’s important to weigh pattern complexity and efficiency
against covering every edge case. The pattern now looks like this:
(?mx)
(?<=^\s*|[.?!]\s+) # 1. Beginning-of-sentence assertion
(?> # Word matching group, no backtracking
[\('"/:.]* # 2. Any characters before a word
([^.?!;,\s\(\)"/]+) # 3. Word characters (capture)
(?<!') # 6. Must not end with apostrophe
[;,\)'"/]* # 3. Any characters after a word
\s* # 4. Zero or more spaces
)+ # Match one or more words
[.?!]+ # 5. One or more sentence terminators
Part 6 in the snippet is the new negative lookbehind, and part 3 no longer excludes apostrophes.
The second issue is more complex than it may seem. Accounting for a period as part of name
initials is relatively easy. You could add an earlier alternative inside the word group [A-Z]\.|...
to cover those scenarios.
(?mx)
(?<=^\s*|[.?!]\s+) # 1. Beginning-of-sentence assertion
(?> # Word matching group, no backtracking
[\('"/:.]* # 2. Any characters before a word
([A-Z]\.|[^.?!;,\s\(\)"/]+) # 3. Word characters (capture)
(?<!') # 6. Must not end with apostrophe
[;,\)'"/]* # 3. Any characters after a word
\s* # 4. Zero or more spaces
)+ # Match one or more words
[.?!]+ # 5. One or more sentence terminators
Part 3 now accounts for single-letter name initials. However, this introduces a new edge case. If
a sentence ends with a single uppercase letter and a period, the pattern will continue to match
the next sentence.
Consider the sentences “Smith takes Vitamin D. Jones doesn’t take anything.” How would you
tell the difference between two sentences and one sentence containing the name ‘Vitamin D. Jones’? To a
human, it’s obvious that Vitamin is unlikely to be a name, but without a contextual dictionary
of names and nouns, or the ability to parse sentence structure, it isn’t possible for a computer to
differentiate between them.
This is an edge case, but names with initials are more common than sentences that end with single
uppercase letters. Therefore, the modification is justifiable in this instance. These decisions and
considerations appear often when developing regexes. Deciding what to account for and what’s
beyond the scope of a reasonable solution is an important aspect of regex iterative development.
There are several further considerations that could drive pattern evolution. Should the pattern:
• Handle clause separators, such as em dashes (—), as word separators? The existing pattern
treats these as one hyphenated word if there aren’t spaces around the dash.
• Exclude more opening/closing punctuation around words? The existing pattern includes
punctuation, such as curly braces {}, with word captures. Unicode categories such as
opening punctuation \p{Ps} and closing punctuation \p{Pe} exist for this.
• Treat numbers with decimal points as single words? The existing pattern treats them as
two.
• Handle abbreviations such as ‘Mrs.’ and ‘Prof.’? The existing pattern breaks a sentence here.
• Account for a colon followed by a newline? The existing pattern treats the text following
as a continuing sentence.
• Handle punctuation from other languages with Unicode categories? The existing pattern
aims to match English sentences only.
These are just a handful of modifications that could be necessary depending on the scenario.
The refinements made in this section aren’t the only possible ones either. The examples have
demonstrated a single development pathway. How you tackle a pattern-matching problem will
be unique to each scenario.
Example 4: A pattern that meets the requirements but doesn’t cover all edge cases
1 $Sentences = @'
2 This sentence contains "quotes", (brackets), and the number 42.01.
3 Mr. E. Smith's [first] name is John—E is for example.
4 PowerShell is cross-platform.
5 This sentence isn't valid
6 '@
7
8 $MyPattern = '(?m)(?<=^\s*|[.?!]\s+)(?>[\(''"/:.]*([A-Z]\.|' +
9 '[^.?!;,\s\(\)"/]+)(?<!'')[;,\)''"/]*\s*)+[.?!]+'
10
11 [regex]::Matches($Sentences, $MyPattern).ForEach{
12 Write-Host ('Sentence: "{0}"' -f $_.Value)
13 Write-Host (
14 ' Words: {0}{1}' -f ($_.Groups[1].Captures.Value -join '/'),
15 [Environment]::NewLine
16 )
17 }
Sentence: "This sentence contains "quotes", (brackets), and the number 42.01."
Words: This/sentence/contains/quotes/brackets/and/the/number/42/01
Sentence: "Mr."
Words: Mr
The take-away is that it’s important to test your patterns in a variety of contexts. You may
find that one development path isn’t working, and you may have to return to the analysis and
planning stage. That’s OK—humans can backtrack too, just like regex engines!
It’s the second case that often causes headaches. As well as testing valid and invalid inputs to
your pattern-matching solution, test for near-matches. There are two kinds of near-matches that
cause unexpected and unwanted behavior:
• Input that should match your pattern, but doesn’t. This is a false negative.
• Input that shouldn’t match your pattern, but does. This is a false positive.
Consider the pattern for matching IP addresses in Example 33 from Accessing Regexes. If the
zero from [01]? was missing, most valid IP addresses would still match. Likewise, if no anchors
or lookarounds were present, most invalid IPs would still not match. However, false positives
and negatives arise without them.
Valid 2 is a false negative because the pattern wasn’t equipped to handle the zero in ‘051’. Invalid
2 is a false positive because the pattern can still match without consuming the final ‘6’. Anchors
such as ^ and $ overcome the false positive, while using [01]? instead of 1? overcomes the false
negative.
Example 5 demonstrates the importance of testing your regex patterns and pattern-matching
solutions. Throw as many inputs at them as you can. Ask yourself, “How can I break this?” Your
pattern should be able to handle any input you could reasonably expect.
Where possible, aim to access regex result objects within a single thread.
If you must share singular result objects across threads, use the static Synchronized() method
to retrieve a thread-safe instance of the object. This method causes the engine to compute each
capture of every group, resulting in an immutable object that you can use between threads.³
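A minimal sketch of sharing a single Match across runspaces, assuming PowerShell 7’s ForEach-Object -Parallel:

$Match     = [regex]::Match('abc123', '(?<Digits>\d+)')
$SafeMatch = [System.Text.RegularExpressions.Match]::Synchronized($Match)

1..3 | ForEach-Object -Parallel {
    # Each runspace reads from the same, fully computed Match object.
    ($using:SafeMatch).Groups['Digits'].Value
}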
²https://learn.microsoft.com/en-us/dotnet/standard/base-types/best-practices
³Microsoft. (2020, Jul. 08). System.Text.RegularExpressions - Group.cs. L42-L57. dotnet/runtime on GitHub. [Online]. Available: https://github.com/dotnet/runtime/blob/main/src/libraries/System.Text.RegularExpressions/src/System/Text/RegularExpressions/Group.cs. [Accessed: Jan. 30, 2022].
Note how the order of execution doesn’t match the order of the captures. ForEach-Object -
Parallel uses PowerShell runspaces in parallel and the order in which PowerShell processes
the input collection isn’t guaranteed. A thread-safe increment of the counter shows the order in
which processing starts for each capture. Because of the variability in web API response times,
the order in which processing completes varies from both the input and the starting order. The
order of the results differs from the order of the processed matches.
If you must enumerate result collections across threads, such as [MatchCollection], [Group-
Collection], and [CaptureCollection], use the SyncRoot property of the instance to
synchronize access by locking the object during access. Since PowerShell doesn’t have a lock
statement, use [System.Threading.Monitor] to ensure synchronized access to the collection.
Once again, the order in which processing takes place isn’t guaranteed.
[System.Threading.Monitor]::Enter(...) attempts to get an exclusive lock on an object. If
another thread has already locked the object, Enter() waits until it’s released. Exit() releases
an object locked with Enter().
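A sketch of that approach, again assuming ForEach-Object -Parallel (the input and pattern are illustrative):

$AllMatches = [regex]::Matches('a1 b2 c3', '[a-z]\d')

$AllMatches | ForEach-Object -Parallel {
    $Collection = $using:AllMatches
    [System.Threading.Monitor]::Enter($Collection.SyncRoot)
    try {
        # Only one runspace at a time enumerates the shared collection here.
        '{0} of {1}' -f $_.Value, $Collection.Count
    }
    finally {
        [System.Threading.Monitor]::Exit($Collection.SyncRoot)
    }
}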
First, take a look at the Modern IT Automation with PowerShell Extras⁵ repository on GitHub.
There are some more complex regex patterns there with real-world applications, complete with
breakdowns and explanations. You’ll find these in the Edition-01/Regex⁶ folder.
If you’d like a more in-depth journey into the world of regexes, look no further than Mastering
Regular Expressions⁷ by Jeffrey Friedl. While the latest edition of this book is from 2006, it’s a
truly comprehensive course. It also contains language-specific chapters, including for .NET. Since
.NET’s regex implementation hasn’t changed in recent years, the content still applies to current
versions of .NET and PowerShell at the time of writing.
On the other hand, if you’re looking for a more direct guide on creating regexes to solve specific
problems, Regular Expressions Cookbook⁸ by Jan Goyvaerts and Steven Levithan is a good
place to start. This book takes you through common pattern-matching problems and introduces
solutions in the context of popular programming environments. .NET is amongst these, so the
presented solutions need no modifications to work in PowerShell.
Another great resource with plug-and-play solutions for .NET is Regular Expression Pocket
Reference⁹ by Tony Stubblebine. This handbook contains both at-a-glance syntax reference and
regex solutions.
For a more academic take on regexes and regular expression theory, try Rex¹⁰. Rex is a tool
written by Microsoft Research’s RiSE Group¹¹ that efficiently generates matching inputs for one
or more .NET regex patterns using symbolic finite automata (SFA). It has resulted in several
publications¹², but is also useful for evaluating your own patterns. It provides valuable insight
into how the engine is interpreting your pattern. The original online version is no longer available,
but you can still download the original Rex binary¹³ from the Microsoft website.
For more recent developments, Rex is a part of the Automata .NET Library¹⁴ on GitHub, alongside
lots of other tools related to finite state automata (FSA) and transducers (FST). While there,
take a look at the Symbolic Regex Matcher¹⁵ library. This is an efficient alternative approach to
interpreting regular expressions using SFA.
Don’t forget to check out the further reading section below, which includes many of the resources
discussed in the regex part of this book, plus more.
– HTTP-only website
• Optimizing Regex Performance I—2010 blog entry from the Microsoft Base Class Library
team³⁸
• Optimizing Regex Performance II—2010 blog entry from the Microsoft Base Class Library
team³⁹
• Rex Project—RiSE Group Rex landing page⁴⁰
• Rex Introduction Video—Margus Veanes from RiSE explains SFA and Rex (archive)⁴¹
³⁸https://learn.microsoft.com/en-us/archive/blogs/bclteam/optimizing-regular-expression-performance-part-i-working-with-the-regex-class-and-regex-objects-ron-petrusha
³⁹https://learn.microsoft.com/en-us/archive/blogs/bclteam/optimizing-regular-expression-performance-part-ii-taking-charge-of-backtracking-ron-petrusha
⁴⁰https://www.microsoft.com/en-us/research/project/rex-regular-expression-exploration/
⁴¹https://web.archive.org/web/20210411024653/https://channel9.msdn.com/Blogs/Peli/Margus-Veanes-Rex-Symbolic-Regular-Expression-Exploration/
V PowerShell Security
“Sticking your head in the sand might make you feel safer, but it’s not going to protect you from
the coming storm.” — Barack Obama
PowerShell security has always been an afterthought in organizations, putting users and administrators
at risk. Because of PowerShell’s broad scope and capability, malware often uses it in its
payload or as a launchpad. PowerShell security rests on four pillars, each covering a different
aspect of the platform:
1. Script Development: Educate script authors on secure development practices.
2. Script Execution: Enforce script execution policies to reduce unauthorized script execution.
3. Console Execution: Implement policies on console execution to reduce one-liner attacks.
4. PowerShell Remoting: Implement policies on PowerShell remoting sessions to reduce lateral movement attacks.
All four pillars must be understood, implemented, and configured correctly to minimize the risk. The
topics in this section target these pillars with:
• Script Signing.
• Script Execution Policies.
• PowerShell Constrained Language Mode.
• PowerShell Just Enough Administration (JEA).
15. Script Signing
PowerShell allows you to protect your scripts from tampering by signing them with a digital
signature.¹ Signing a script with a digital signature requires that you have a code signing
certificate. This chapter discusses why you should sign your scripts and what options are
available to you. You’ll learn about working with digital signatures and code signing certificates,
as well as how to implement a script signing solution in your organization using a Public Key
Infrastructure (PKI).
This doesn’t necessarily mean the source is trustworthy or that the signed data is of high quality!
It gives you some confidence that the data hasn’t changed since it was signed, and that the creator
and signer are the same entity.
Almost all the code you run on a Windows machine is signed. Running signed code significantly
reduces the probability of executing malicious code. Most Windows software vendors sign their
compiled binaries before passing them on to you, the user. Microsoft signs all of the compiled binaries
which are a part of Windows. Doing this establishes trust that the underlying binaries have not
changed between publication and installation. The digital signature also allows administrators
to configure security software such that users can only run approved software releases.
Script signing allows you to add these protections to any script from any source. It also allows
you to have a higher degree of trust in scripts provided to you.
However, code signing doesn’t guarantee complete protection against malware. Here are two
examples:
• Microsoft accidentally signed a malicious rootkit driver (Netfilter) that passed through its driver signing process.²
• Attackers have stolen code signing certificates from Taiwanese technology companies and used them to sign the malware in their campaigns.³
You should include a code signing policy as part of your organization’s security strategy, but you
can’t rely on it alone.
¹Microsoft. (2022, Mar. 18), about Signing. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_signing. [Accessed: Sep. 15, 2022].
²https://www.gdatasoftware.com/blog/microsoft-signed-a-malicious-netfilter-rootkit
³https://www.welivesecurity.com/2018/07/09/certificates-stolen-taiwanese-tech-companies-plead-malware-campaign/
Digital signatures rely on a pair of related numbers, or keys, with two important properties:
1. You can encrypt data with one of them and then use the other to decrypt this data. This is
asymmetric or public-key cryptography.⁴
2. It isn’t possible to calculate the value of one from the other, even with access to encrypted
data.
Store one of these numbers in a safe place, like a password—this is your private key. The other—
distribute freely to those who must be able to decrypt data encrypted with the first one—this
is your public key. You can use a private key to encrypt data and your recipients can use the
corresponding public key to decrypt it. Usually, this key pair comes in the form of a digital
certificate, which allows you to add more information, such as who issued it, what it’s for, and
the intended user.
However, the signed data itself isn’t encrypted; any recipient can read it in the clear. What is
encrypted is the hash sum of that data. “What’s a hash sum?” you might ask.
A hash sum is a cryptographically calculated string, fixed in length and alphanumeric, which is
supposed to be unique for any unique piece of data. An important property of any hash sum is
that you can’t derive the original data from it: having only the hash sum, you can’t reconstruct
the data.
PowerShell uses the SHA-256 algorithm to calculate hash sums for script signing, but
there are other algorithms available⁵.⁶
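You can see the fixed-length, tamper-evident nature of a hash sum with Get-FileHash; changing a single character produces a completely different value (the file path here is just an example):

'Hello, world'  | Set-Content -Path "$env:TEMP\demo.txt"
(Get-FileHash -Path "$env:TEMP\demo.txt" -Algorithm SHA256).Hash

'Hello, world!' | Set-Content -Path "$env:TEMP\demo.txt"
(Get-FileHash -Path "$env:TEMP\demo.txt" -Algorithm SHA256).Hash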
When you send a digitally signed message to someone, you calculate a hash sum for the content,
encrypt the hash sum with your private key and send both the message and the encrypted hash
sum. The recipient calculates a hash sum for the message again, decrypts the received hash sum
using your public key, and compares them. If both hash sums are the same, it means the message
remained unchanged during transmission. It also means that it’s you who sent it, because no one
else has access to your private key (or at least no one should have access to it).
A digital certificate can be self-signed or signed by another certificate (usually the certificate’s
parent, in the case of a certificate authority hierarchy). In the case of a self-signed certificate, the
person (or machine) that created it certifies, by itself, that it made the certificate. In self-signed
certificates, the Issuer and Subject fields are the same.
When a certificate is signed by another entity, we say that the other entity issued the certificate.
In that case, you’ll see who issued the certificate in the Issuer field of the certificate.
Usually, certificates are issued by Certification Authorities, which are special services trusted by
people and organizations worldwide. We trust their policies and processes to issue certificates
only for legitimate purposes and only to subjects whose identity is established.
To trust a Certification Authority (CA) in Windows, you take its root certificate and place it into
a designated certificate store—Trusted Root Certification Authorities. A CA’s root certificate is
a certificate self-signed by that root CA. A CA uses this certificate to sign all other certificates
issued by it.
In the case of a single-tier CA in Windows, all certificates issued by the root CA will be valid.
However, a best-practice implementation requires that there be at least two tiers of CAs in a
Windows environment. In that case, the root CA will issue one certificate to a subordinate CA.
The subordinate CA will issue all other certificates.
Windows has many root certificates already installed in the Trusted Root Certification Authori-
ties store, out of the box. However, you’re free to remove CAs you don’t trust from that store or
add other CAs (or self-signed certificates) there.
Microsoft supports a list of all Certification Authorities⁸ which are trusted by Windows
by default.
As soon as the Windows OS trusts a CA’s root certificate, all other certificates issued by this and
subordinate CAs will be trusted automatically. However, there are conditions. Today’s date must
be within the certificate’s validity period (between “Valid from” and “Valid to” dates), and the
certificate itself must not be revoked.
A revoked certificate is a certificate whose serial number (a unique identifier) was added into a
Certificate Revocation List (CRL). A CRL is just a list of certificates’ serial numbers along with
the date/time when each was added to the list. This list is periodically published by a CA. The
CA adds a link to that list to every certificate it issues (CRL Distribution Point). When a client
checks a certificate’s validity, it uses the CRL Distribution Points attribute of the certificate
to download the CRL and then searches for the certificate’s serial number on that list. If the serial
number isn’t there, the certificate isn’t revoked and the check passes.
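You can run the same chain and revocation checks from PowerShell with the .NET X509Chain class; this sketch isn’t from the book and assumes $Cert holds a certificate object:

$Chain = [System.Security.Cryptography.X509Certificates.X509Chain]::new()
$Chain.ChainPolicy.RevocationMode = 'Online'   # download CRLs and check them

$Chain.Build($Cert)                            # True when the whole chain is valid
$Chain.ChainStatus.StatusInformation           # details when validation fails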
⁸https://learn.microsoft.com/en-us/security/trusted-root/participants-list
Windows performs these checks every time it sees a certificate. That’s not all that’s checked for
signed code, however. Code signing certificates must also have a “Code Signing” purpose in their
Enhanced Key Usage attribute.
In a signed PowerShell script, you have your code first and after it, at the end of the file, a
signature block.
The signature block consists of:⁹
• A comment line reading # SIG # Begin signature block.
• The Base64-encoded signature itself, spread across a series of comment lines.
• A closing comment line reading # SIG # End signature block.
A script is considered to have been signed only when it has a full, unmodified signature
block. Therefore, if you either remove the signature block completely or just tamper with
it a bit, by removing or adding a symbol to any line, PowerShell will treat this file as an
ordinary unsigned script.
No code is allowed after the signature block. If you put anything but comments at the end of a
signed .ps1 file, you’ll get this error:
This is actually a security feature: it prevents you from unknowingly running potentially
malicious code. A malicious actor might insert a block that looks just like a signature, but isn’t
one. For example, they might remove the last letter “k” from the first line and it will look like
this # SIG # Begin signature bloc. As mentioned already, because the signature is now
malformed, PowerShell won’t treat it as one and will consider the file unsigned. After that
fake signature, the attacker may then insert more code and sign the resulting script with a real
signature block at the actual end of the file. Since signature blocks are really long and are normally
the last element of a file, a person reviewing the script might not notice that there is more code after
the fake signature.
Thankfully, PowerShell’s got your back here. It searches for comments which look similar to
the signature block start line and then, if it finds any code after them, raises that error. What’s
important is that the check for code after the signature block executes before the check that
decides if the script is signed or not.
You can see the actual implementation of it in the PowerShell code¹⁰. If it wasn’t for this check,
the first signature, which is fake, would be ignored. The result would be that all code in the file
would be executed as a signed script, including the malicious addition.
Run Set-AuthenticodeSignature to add a digital signature to your script, et voilà! It’s really that
easy. The hardest part is to make sure your clients trust the certificate which you used to sign the
scripts (this chapter covers this detail a little later).
15.3.1.1 Self-Signed
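The original listing for this step isn’t reproduced in this extract; a command along these lines (the subject name is taken from the output below) creates the certificate:

New-SelfSignedCertificate -Type CodeSigningCert `
    -Subject 'CN=MySelfCodeSigningCert' `
    -CertStoreLocation 'Cert:\LocalMachine\My'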
This will give you a new code signing certificate in the local computer Personal store. Note that
your Thumbprint (and other attributes of the certificate) will be different than the value shown
here.
PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint Subject
---------- -------
F45E5297DA01A97E527F5AF262F29B4A8CCF2083 CN=MySelfCodeSigningCert
You can specify another store with the -CertStoreLocation parameter. For example,
you can use “Cert:\CurrentUser\My” for your user’s Personal store.
If you look at the certificate closely, you’ll see that its EnhancedKeyUsageList property contains
Code Signing as its value:
Example 4: The EnhancedKeyUsageList property shows that this is a code signing certificate
1 Get-Item -Path Cert:\LocalMachine\My\F45E5297DA01A97E527F5AF262F29B4A8CCF2083 |
2 Select-Object -Property 'EnhancedKeyUsageList'
EnhancedKeyUsageList
--------------------
{Code Signing (1.3.6.1.5.5.7.3.3)}
To check if you can use the certificate for code signing, use Get-ChildItem with the -
CodeSigningCert parameter:
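The command itself isn’t shown in this extract; it’s presumably along these lines:

Get-ChildItem -Path 'Cert:\LocalMachine\My' -CodeSigningCert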
PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
Thumbprint Subject
---------- -------
F45E5297DA01A97E527F5AF262F29B4A8CCF2083 CN=MySelfCodeSigningCert
Self-signed certificates are fine for testing, but have a major downside: no one trusts the
certificates. You must install the certificates on every computer where they might be needed.
Doing so is time-consuming and difficult to manage. The preferred method to issue certificates
in an organization is to use a Public Key Infrastructure (PKI). A Public Key Infrastructure is a
hierarchy of Certification Authorities, where you usually have one top-level CA (root CA) and
several subordinates.
By trusting the root CA, you effectively trust every certificate issued by it. You’ll deploy the
certificate of that root CA to your machines (and the certificates of any intermediate CAs) and
the computers in the organization will trust your code signing certificates (and other certificates
issued by these CAs) automatically.
The Use Your Own PKI section below talks about how you can set up a PKI.
An alternative to hosting a PKI by yourself is to hire somebody to do that for you. You have two
options here:
1. Order code signing certificates one by one from a commercial certificate provider, like
Digicert, GlobalSign, Sectigo, etc. This is a relatively cheap, quick, and easy option, but
you’ll pay for every certificate you issue—this might not be the most cost-effective choice
if you plan to issue a lot of certificates.
2. Sign up for a managed PKI service. This type of service is a full-scale PKI, where you have
all the flexibility, but don’t have to worry about the management of this infrastructure.
Many security companies offer PKI-as-a-Service: SecureW2, HydrantID, Entrust, just to
name a few.
The PowerShell Certificate provider¹² has slightly different names for these stores:
GUI Name                                  Certificate Provider Name
Personal                                  My
Trusted Publishers                        TrustedPublisher
Trusted Root Certification Authorities    Root
For the sake of demonstration, the following examples use a self-signed certificate, which means
both $RootCert and $SigningCert are the same. In a real environment, you’ll most likely have
a certificate from a proper chain with a separate root certificate. Ensure that you put the correct
objects in $RootCert and $SigningCert variables.
¹²https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.security/about/about_certificate_provider
First, you need to save your code signing certificate in a variable for easy access:
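The original command isn’t shown in this extract; presumably something like this (the store path is an assumption):

$Cert = Get-ChildItem -Path 'Cert:\LocalMachine\My' -CodeSigningCert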
The above (and the examples below) assumes that there is only one code signing
certificate in the Personal store. If there are more, then the $Cert object needs to be
indexed in the examples below.
Since it’s a self-signed certificate, you must install it as a root certificate too:
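The original listing isn’t shown here either; a sketch of what it likely does is to open the machine’s Trusted Root store with the .NET X509Store class and add the certificate directly (run elevated):

$RootStore = [System.Security.Cryptography.X509Certificates.X509Store]::new('Root', 'LocalMachine')
$RootStore.Open('ReadWrite')
$RootStore.Add($Cert)
$RootStore.Close()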
The above example doesn’t use Import-Certificate because that cmdlet requires
a certificate to be in a file. Using the .NET class instead allows skipping the step of
exporting the certificate to a file on disk.
Example 8: Signing a script with a code signing certificate from the user’s personal store
1 $ScriptPath = Join-Path -Path $env:Temp -ChildPath 'test1.ps1'
2 Set-Content -Path $ScriptPath -Value 'Write-Host "Signed Script"'
3 $Cert = Get-ChildItem -Path 'Cert:\CurrentUser\My' -CodeSigningCert
4 Set-AuthenticodeSignature -Certificate $Cert -FilePath $ScriptPath
Reminder: The above example assumes that only one code signing certificate is in the
personal store.
• http://timestamp.digicert.com
• http://timestamp.sectigo.com
• http://timestamp.verisign.com/scripts/timstamp.dll
• https://www.freetsa.org/
• SignServer¹⁵
• uts-server¹⁶
Just choose one of the time stamp servers and use it in the -TimestampServer parameter:
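For instance, a sketch using one of the servers listed above (the certificate and script variables are assumed to come from the earlier examples):

Set-AuthenticodeSignature -Certificate $Cert -FilePath $ScriptPath `
    -TimestampServer 'http://timestamp.digicert.com'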
15.3.5.1 Functions
In PowerShell, you can’t sign a function itself, because PowerShell only supports signing files.
What you can do is to have one function per .ps1 file and therefore, when you sign that file, you
effectively sign the function.
The content of your .ps1 files will look like this:
1 function Do-Stuff {
2 'stuff done'
3 }
4 # SIG # Begin signature block
5 # ...
6 # SIG # End signature block
When dot-sourcing a file to import the function from it into your current session, you’re still
executing that script. When you execute a script, the system checks its signature and, if the
signature check doesn’t pass, the function won’t import.
15.3.5.2 Modules
Signing a module is very easy: just sign every .ps1, .psm1, .psd1, and .ps1xml file using the Set-
AuthenticodeSignature cmdlet.
Now, why sign all these files, when you could just sign the main .psm1? It’s because you sign
individual files to prevent tampering.
Suppose your module consists of several .ps1 files plus a .psd1, and a .psm1, which loads all the
.ps1 files. In that case, a malicious actor could change those .ps1 files and the module would still
load, so you need to sign everything.
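A minimal sketch of signing every signable file in a module folder, assuming $Cert holds your code signing certificate and the module path is just an example:

$ModulePath = 'C:\Modules\MyModule'
Get-ChildItem -Path $ModulePath -Recurse -Include '*.ps1', '*.psm1', '*.psd1', '*.ps1xml' |
    ForEach-Object { Set-AuthenticodeSignature -Certificate $Cert -FilePath $_.FullName }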
¹⁵https://www.signserver.org/
¹⁶https://github.com/kakwa/uts-server
There are several tools you can use to check whether a file is signed and whether its signature is valid:
1. PowerShell (Get-AuthenticodeSignature)¹⁷
2. Sysinternals sigcheck¹⁸
3. signtool.exe¹⁹
4. And of course, you can open the file properties in Explorer and look at the Digital Signatures
tab.
15.4.1 Get-AuthenticodeSignature
This is what a proper output from Get-AuthenticodeSignature looks like for a trusted
certificate:
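The command that produces it isn’t shown in this extract; presumably something like:

Get-AuthenticodeSignature -FilePath C:\test.ps1 | Format-List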
SignerCertificate : [Subject]
CN=MySelfCodeSigningCert
[Issuer]
CN=MySelfCodeSigningCert
[Serial Number]
1B599557458628A54D3D9EA33D15C160
[Not Before]
13/06/2021 23:38:18
[Not After]
13/06/2022 23:58:18
[Thumbprint]
F45E5297DA01A97E527F5AF262F29B4A8CCF2083
TimeStamperCertificate :
Status : Valid
StatusMessage : Signature verified.
Path : C:\test.ps1
SignatureType : Authenticode
IsOSBinary : False
And this is what you get when the root certificate of the certificate chain is missing from the
Trusted Root Certification Authorities store:
¹⁷https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.security/get-authenticodesignature?view=powershell-7.1
¹⁸https://learn.microsoft.com/en-us/sysinternals/downloads/sigcheck
¹⁹https://learn.microsoft.com/en-us/dotnet/framework/tools/signtool-exe
SignerCertificate : [Subject]
CN=MySelfCodeSigningCert
[Issuer]
CN=MySelfCodeSigningCert
[Serial Number]
1B599557458628A54D3D9EA33D15C160
[Not Before]
13/06/2021 23:38:18
[Not After]
13/06/2022 23:58:18
[Thumbprint]
F45E5297DA01A97E527F5AF262F29B4A8CCF2083
TimeStamperCertificate :
Status : UnknownError
StatusMessage : A certificate chain processed, but terminated in
a root certificate which is not trusted by the trust provider.
Path : C:\test.ps1
SignatureType : Authenticode
IsOSBinary : False
15.4.2 Sigcheck
This is what a proper output from sigcheck looks like for a trusted certificate:
c:\test.ps1:
Verified: Signed
Signing date: 00:03 14/06/2021
Publisher: MySelfCodeSigningCert
Company: n/a
Description: n/a
Product: n/a
Prod version: n/a
File version: n/a
MachineType: n/a
And this is what you get when the root certificate of the certificate chain is missing from the
Trusted Root Certification Authorities store:
c:\test.ps1:
Verified: A certificate chain processed, but terminated in a root
certificate which is not trusted by the trust provider.
File date: 00:03 14/06/2021
Publisher: MySelfCodeSigningCert
Company: n/a
Description: n/a
Product: n/a
Prod version: n/a
File version: n/a
MachineType: n/a
15.4.3 Signtool
This is what a proper output from signtool looks like for a trusted certificate:
File: C:\test.ps1
Index Algorithm Timestamp
========================================
0 sha1 None
Below is what you’ll see when the root certificate of the certificate chain is missing from the
Trusted Root Certification Authorities store:
File: C:\test.ps1
Index Algorithm Timestamp
========================================
SignTool Error: A certificate chain processed, but terminated in a root
certificate which is not trusted by the trust provider.
Number of errors: 1
And of course, if your system doesn’t have the proper root certificate installed, you’ll receive an
error about that too:
Example 14: Attempting to run a signed script where the root of the certificate chain is untrusted
1 C:\test.ps1
You might have different reasons to want your own Public Key Infrastructure. Your company may
also already have its own PKI, so ask the appropriate teams before building one. Here are some
examples of why you may want your own PKI:
• Compliance. Some regulations might require you to keep all sensitive or security information
on your premises.
• Security. You might not trust how a service provider would manage your PKI. In that case,
you must do it yourself.
• Availability. With your own servers, you can achieve any availability level, which isn’t
always an option with managed services.
• Flexibility. Only you define which certificates, how, and to whom you’ll issue them. SaaS
(Software as a Service) isn’t always that flexible.
• Cost. In general, doing things by yourself is cheaper. However, note that this is not a
universal rule. Sometimes the opposite is true, because supporting your own infrastructure
requires good engineers and hardware—these cost money.
• Public doesn’t mean it will be available to anybody—it just means that this system uses
public-key cryptography.
• Two-tiered means that the PKI hierarchy will have two tiers: a root certification authority
and an issuing certification authority—this is sufficient for most organizations.
You’ll need at least two servers running Windows Server 2019 or later, with no other software
installed on them. One must be a member of an AD DS (Active Directory Domain
Services) domain (the issuing CA); the other should be just a workgroup machine (the root CA).
You will also need a web server where you’ll put CA certificates and CRLs (this can even be
a non-Windows machine). This is often the issuing CA, because using the issuing CA as a web server
is simpler than using a separate PKI web server. Also, you must be an Enterprise
Administrator in this AD DS forest to install the proper Windows roles and services.
It often makes sense for the same group of people to manage both AD DS and ADCS
(Active Directory Certificate Services—the PKI) infrastructures: both of them are usually
company-wide and both issue cryptographic assertions about identities. Basically, you
want to protect your PKI servers as carefully as you protect domain controllers. All this
applies to AD FS (Active Directory Federation Services) as well.
15.5.1.2.1 Root CA
The first step in building your own PKI is to configure a root certification authority. For best
practice, this must be a VM that can normally stay offline (powered off), or a physical machine
that can be turned off.²⁰ Even though the VM or physical hardware is normally powered off,
this does NOT remove the need to back the instance (physical or virtual) up properly.
This VM or physical hardware must not be a member of the domain/forest.
²⁰TechNet wiki contributors. (2016, Aug. 12). Active Directory Certificate Services (AD CS) Public Key Infrastructure (PKI) Design Guide. Microsoft TechNet Wiki. [Online]. Available: https://social.technet.microsoft.com/wiki/contents/articles/7421.active-directory-certificate-services-ad-cs-public-key-infrastructure-pki-design-guide.aspx#Use_Offline_CAs. [Accessed: Sep. 16, 2022].
For the best protection of the root CA’s private key, the industry standard is to store the
private key on a smart card or a USB HSM (Hardware Security Module); but of course,
that also has its own downsides. If that smart card stops working, or if you lose the
PIN code for it, you’ll have to reinstall your PKI from scratch (not immediately, but
eventually, because you won’t be able to renew subordinate CAs’ certificates anymore
or issue new CRLs (Certificate Revocation Lists)). The benefit of having a private key
on a smart card is that it’s very difficult to extract the key from there. As a fallback,
you can store a backup of the root CA’s private key on another medium, with proper
security.
The first step in ADCS CA installation is to install the Windows feature containing all of the
required bits:
Example 15: Installing the Active Directory Certificate Services CA feature on the root CA
1 Install-WindowsFeature -Name ADCS-Cert-Authority -IncludeManagementTools
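The listing for configuring the CA isn’t included in this extract; based on the description that follows, it presumably resembles this Install-AdcsCertificationAuthority call (exact parameter values are assumptions):

Install-AdcsCertificationAuthority -CAType StandaloneRootCA `
    -CACommonName 'My Root CA' `
    -CryptoProviderName 'ECDSA_P521#Microsoft Software Key Storage Provider' `
    -KeyLength 521 `
    -HashAlgorithmName SHA512 `
    -ValidityPeriod Years -ValidityPeriodUnits 10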
This example specifies that the root CA’s name will be “My Root CA” (the -CACommonName
parameter), and the validity period of its certificate will be 10 years (-ValidityPeriod and -
ValidityPeriodUnits parameters). The certificate will use elliptic curves cryptography instead
of classic RSA and its key length will be 521 bits (-CryptoProviderName and -KeyLength
parameters). For the hash algorithm, the example uses SHA-2 with a length of 512 bits (-
HashAlgorithmName). Also, note that for a root CA, which is not to be a member of a
domain/forest, you must pass “StandaloneRootCA” to the -CAType parameter. Therefore, the
VM or the standalone server must not be a member of the domain/forest!
How were these cryptographic parameters chosen? The example uses the highest available for
the chosen Key Storage Provider²¹ because this is a root CA: it will rarely issue certificates, so
²¹https://learn.microsoft.com/en-us/windows/win32/seccertenroll/cng-key-storage-providers
there are no performance concerns, and the key must be as secure as possible. As for the hash
length, it’s usually recommended that the hash length be the same size as (or close to) your
elliptic curve. Note also that, while the example uses "Microsoft Software Key Storage Provider"
for demonstration purposes, this isn’t a best practice for production environments: if one is
available, you should really use a smart card or an HSM. Consult the documentation for your
key storage provider to choose the most secure set of parameters. Also note that most companies
do not have an HSM, so they use the available Microsoft Key Storage Providers.
There are two more important concepts when you deal with certificate authorities: CDP and
AIA.
CDP (CRL Distribution Point) is a location where the certificate authority will store a Certificate
Revocation List (CRL). Your clients periodically download this list and use it to check to see
whether a certificate presented to them is revoked. When a certificate is revoked, that means
that the certificate should no longer be trusted. The main part of the revocation process is to put
the serial number (a unique identifier) of a certificate into a CRL. If a certificate’s serial number
is in a CRL, it means the certificate is revoked.
To get a list of CDPs on a CA, use Get-CACrlDistributionPoint:
PublishToServer : True
PublishDeltaToServer : True
AddToCertificateCdp : False
AddToFreshestCrl : False
AddToCrlCdp : False
AddToCrlIdp : False
Uri : C:\Windows\system32\CertSrv\CertEnroll\
<CAName><CRLNameSuffix><DeltaCRLAllowed>.crl
PublishToServer : False
PublishDeltaToServer : False
AddToCertificateCdp : False
AddToFreshestCrl : False
AddToCrlCdp : True
AddToCrlIdp : False
Uri : ldap:///CN=<CATruncatedName><CRLNameSuffix>,
CN=<ServerShortName>,CN=CDP,CN=Public Key Services,
CN=Services,<ConfigurationContainer><CDPObjectClass>
PublishToServer : False
PublishDeltaToServer : False
AddToCertificateCdp : False
AddToFreshestCrl : False
AddToCrlCdp : False
AddToCrlIdp : False
Uri : http://<ServerDNSName>/CertEnroll/
<CAName><CRLNameSuffix><DeltaCRLAllowed>.crl
PublishToServer : False
PublishDeltaToServer : False
AddToCertificateCdp : True
AddToFreshestCrl : True
AddToCrlCdp : False
AddToCrlIdp : False
Uri : file://<ServerDNSName>/CertEnroll/
<CAName><CRLNameSuffix><DeltaCRLAllowed>.crl
If the root CA is not a member of the domain, then the LDAP CDP does nothing and can
be deleted. If the root CA is offline, then the HTTP CDP does nothing and can be deleted.
Therefore, in general, for an offline CA, the CDP should be copied to an intermediate or
terminal (issuing) CA, so that clients can access those CRLs.
Authority Information Access (AIA) is a place from which clients can download the certificate
of a Certification Authority. When a client doesn’t have a certificate’s parent certificate, they’ll
use the AIA defined in a certificate to download the certificate’s parent.
This isn’t that important for a root CA: clients must have the root certificate installed locally
to trust the PKI that uses that root CA. But AIA is tremendously important for issuing and
intermediate CAs, because clients rarely have those certificates installed. This is especially
important for internal CAs. Intermediate and issuing CAs should have their certificates published
to endpoint devices via GPO or via an endpoint management system.
To get a list of AIA locations on a CA, use Get-CAAuthorityInformationAccess:
If the root CA is not a member of the domain, then the LDAP AIA does nothing and can
be deleted. If the root CA is offline, then the HTTP AIA does nothing and can be deleted.
Therefore, in general, for an offline CA, the AIA should be copied to an intermediate or
terminal (issuing) CA, so that clients can access the necessary files.
You can see that, by default, a Windows CA has a fair number of CDP and AIA locations
configured. Often, you may not need all of them. Usually, a single location for all CAs in an
organization is enough. This is dependent upon the access which specific endpoints may have to
various resources within the organization.
Both CDP and AIA support several protocols which can be used to access these files. For maxi-
mum interoperability, however, you should just use HTTP. This also decreases the administrative
load of updating multiple endpoints and monitoring them. However, in an Active Directory
environment, publishing the certificate via LDAP makes the certificate easily available to all AD
endpoints.
You might also note that the Uri properties of all these objects contain strings in angle brackets
<>. These are variables specific to the Windows CA service. You can read about their meaning
in the documentation²³, but in this section you won’t use them.
Your clients learn which URIs to use as CDP or AIA locations by looking into a certificate’s prop-
erties: as you can see in the output above, CDPs and AIAs have several AddToCertificate...
properties—these define which of them will be included in a certificate issued by this CA.
You don’t need any of the CDPs already defined on this Root CA: since this is an offline server,
all those paths will be unavailable, anyway. You’ll use a separate HTTP server for that.
This chapter doesn’t discuss the installation of a web server to store CRLs and CA
certificates—you must set it up by yourself.
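The removal command itself isn't shown here; a minimal sketch that clears every predefined CDP in one pipeline (mirroring the pattern Example 21 uses for AIA) would be:

# Remove all default CRL Distribution Points from the root CA (sketch)
Get-CACrlDistributionPoint | Remove-CACrlDistributionPoint -Force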
RestartCA
---------
True
True
True
True
This section assumes that your Windows VM or server is named ROOTCA01. This
section also assumes that you named your root CA “My Root CA.”
If you take a look in a folder where a Windows CA publishes its CRL and CRT files by default,
you’ll see that the file name for the certificate is “ROOTCA01_My Root CA.crt” and the CRL is
called “My Root CA.crl”:
²³https://learn.microsoft.com/en-us/powershell/module/adcsadministration/add-cacrldistributionpoint?view=windowsserver2019-ps
Example 20: Inspecting the default root certificate and CRL location
1 $Path = Join-Path -Path $env:SystemRoot -ChildPath 'system32\CertSrv\CertEnroll'
2 Get-ChildItem -Path $Path
Directory: C:\Windows\system32\CertSrv\CertEnroll
For AIA, remove every entry except for the one pointing to the local file system, because it’s
not possible to add it back using the standard cmdlet Add-CAAuthorityInformationAccess.
Therefore, the name of the certificate file will remain “ROOTCA01_My Root CA.crt.”
Example 21: Removing all AIA locations from the root CA except the one for the local file system
1 Get-CAAuthorityInformationAccess |
2 Where-Object -FilterScript {$_.Uri -notlike ('{0}*' -f $env:SystemRoot)} |
3 Remove-CAAuthorityInformationAccess -Force
RestartCA
---------
True
True
True
Next, you need to create new AIA and CDP locations. You can define several CDP locations, of
different types. For CDP, it will be an HTTP URI and a local file system location. For AIA, you
already have a file system location, so only an HTTP URI is left. You must have at least one local
file system location, so you can grab the files from there and distribute to other locations.
To add a CDP location, use Add-CACrlDistributionPoint. Note that not all of the cmdlet
parameters are compatible with each other and there are no parameter sets defined, because
the compatibility of the parameters depends on the type of URI you use. If you try to use an
incompatible combination, you’ll get an error.
There are three switches compatible with HTTP URLs: -AddToCertificateCdp, -AddToCrlIdp,
and -AddToFreshestCrl.
-AddToFreshestCrl specifies Delta CRL location. You won’t use Delta CRLs in this chapter.
Usually, you need Delta CRLs only when you revoke certificates often and your Base CRL
becomes too large for clients to download in a reasonable time. This is an ultra-rare case for
most folks.
-AddToCrlIdp adds the Issuing Distribution Point (IDP) extension (OID 2.5.29.28), which is used
for partitioned CRLs. Partitioned CRLs aren’t supported by Windows PKI clients, therefore you
won’t use them in this chapter either.
However, the last one, -AddToCertificateCdp, is crucial: it specifies that all certificates issued
by this CA will have a pointer to this CRL. Without it, your clients won’t know where to look
for a CRL and you effectively won’t be able to revoke your certificates. All certificates you issue
must have at least one CDP URI defined. That CDP URI must be available to all the clients.
Assume you have a web server available via the domain name “pki.example.com” and that you
have created two folders in the root web directory: one for CDP files and one for AIA files. Let’s
define this web server as a CRL distribution point. Use the -AddToCertificateCdp parameter
to add this location into all certificates issued by the root CA—the clients will use this URL to
check if a certificate is revoked.
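The exact command isn't reproduced here; a sketch, assuming a "CDP" folder on that web server and the space-free file name used throughout this section (both are illustrative):

# Add an HTTP CDP that will be embedded in issued certificates (sketch)
$Params = @{
    Uri                 = 'http://pki.example.com/CDP/MyRootCA.crl'
    AddToCertificateCdp = $true
    Force               = $true
}
Add-CACrlDistributionPoint @Params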
RestartCA
---------
True
The second CDP location is the local file system. You need this because you must be able to
retrieve a CRL from the CA in order to put it on the HTTP server. Publication to the local file
system generates a CRL file, ready for copying to any destination.
As you might notice, for simplification, these examples use filenames without spaces in both
commands. This isn’t a requirement—you can use different filenames, just don’t forget to rename
the file when you copy it to another location.
Example 24: Adding the file system CDP for the root CA
1 $Params = @{
2 Path = $env:SystemRoot
3 ChildPath = 'system32\CertSrv\CertEnroll\MyRootCA.crl'
4 }
5 $LocalCRLPath = Join-Path @Params
6 Add-CACrlDistributionPoint -Uri $LocalCRLPath -PublishToServer -Force
RestartCA
---------
True
Next, add an Authority Information Access (AIA) extension. This will insert a location of the root
certificate file into all certificates issued by your Root CA, thanks to the -AddToCertificateAia
parameter. Again, the filename is without spaces but, as you might remember, you didn’t touch
the existing AIA definition pointing to the local filesystem. You’ll need to rename the file
manually when copying it over to the web server, or just use the filename with spaces in the
-AddToCertificateAia parameter—the choice is yours.
Example 25: Adding a custom AIA location for the root CA certificate
1 $Params = @{
2 Uri = 'http://pki.example.com/AIA/MyRootCA.crt'
3 AddToCertificateAia = $true
4 Force = $true
5 }
6 Add-CAAuthorityInformationAccess @Params
RestartCA
---------
True
Now, take a look at the current CRL you have already issued at this CA. The most convenient
and easy way to do that in PowerShell is to install PSPKI²⁴, a module by Vadims Podāns. Since
this is an offline machine, download the module manually from the PowerShell Gallery²⁵ and
install it on the computer, then import it: Import-Module -Name 'PSPKI'.
To get information about a CRL use the Get-CertificateRevocationList cmdlet:
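The call itself isn't shown here; a sketch of one way to invoke it, assuming the cmdlet accepts the path of the CRL file in the CertEnroll folder (verify the parameter name against the PSPKI documentation):

# Inspect the root CA's current CRL file (sketch; -Path is assumed)
$CrlPath = Join-Path -Path $env:SystemRoot -ChildPath 'system32\CertSrv\CertEnroll\My Root CA.crl'
Get-CertificateRevocationList -Path $CrlPath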
Version : 2
Type : BaseCrl
IssuerName :
System.Security.Cryptography.X509Certificates.X500DistinguishedName
Issuer : CN=My Root CA
ThisUpdate : 7/11/2021 11:01:52 PM
NextUpdate : 7/19/2021 11:21:52 AM
SignatureAlgorithm : sha512ECDSA (1.2.840.10045.4.3.4)
CRLNumber : 1
Extensions : {Authority Key Identifier (2.5.29.35),
CA Version (1.3.6.1.4.1.311.21.1),
CRL Number (2.5.29.20),
Next CRL Publish (1.3.6.1.4.1.311.21.4)...}
RevokedCertificates : {}
RawData : {48, 130, 2, 18...}
Handle :
System.Security.Cryptography.X509Certificates.SafeCRLHandleContext
Thumbprint :
075BD031E4331D21D5C4B6E841B36D2565DA035CF08785C4135789023D9E1D5D
²⁴https://www.pkisolutions.com/tools/pspki/
²⁵https://www.powershellgallery.com/packages/PSPKI/
You can see that the NextUpdate property in this CRL is set to about 7 days ahead, so you
must update the CRL by that date and make it available to your clients. Then repeat in a week.
And again and again. Given that this node requires manual intervention to bring it online and
issue commands on it, this process seems terribly annoying and unproductive (because you most
probably won’t issue and revoke certificates on the root CA that often).
A solution is to increase the CRL refresh interval to a more reasonable six months:
Example 27: Adjusting the CRL update period from days to months
1 certutil -setreg CA\CRLPeriod Months
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\
My Root CA\CRLPeriod:
Old Value:
CRLPeriod REG_SZ = Weeks
New Value:
CRLPeriod REG_SZ = Months
CertUtil: -setreg command completed successfully.
The CertSvc service may need to be restarted for changes to take effect.
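The number of units is changed with a second certutil call (Example 28 in the chapter's numbering); presumably it is the matching -setreg command:

certutil -setreg CA\CRLPeriodUnits 6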
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\
My Root CA\CRLPeriodUnits:
Old Value:
CRLPeriodUnits REG_DWORD = 1
New Value:
CRLPeriodUnits REG_DWORD = 6
CertUtil: -setreg command completed successfully.
The CertSvc service may need to be restarted for changes to take effect.
Best practices say that the root CA is offline, except when it must be online. The most
common reason is to renew the certificate for a subordinate CA (or a chaining CA).
Practically, the root CA must have OS updates on some regular schedule (not necessarily
monthly, but regularly). Also the CRL for the root CA must be updated and copied to
CDPs on a regular basis (dependent on the CRL update period as discussed in Example
27 and Example 28). If you do not assign a NIC to the root CA, these procedures can be
quite difficult. These choices are dependent on your company’s overall security posture.
As another practical matter, if there is no NIC on the root CA, installation of PowerShell
modules can be challenging. Without a local NIC, modules must be downloaded
remotely (including dependencies), copied to the target, and then installed. This is in
comparison to a simple Install-Module cmdlet execution.
By default, this CA will issue certificates with a validity period of one year maximum. Usually,
you’d prefer something longer for CA certificates—let’s increase it up to five years. For this, use
the ValidityPeriodUnits property:
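The command isn't reproduced here; it is presumably a certutil registry change analogous to the CRL period adjustment above:

certutil -setreg CA\ValidityPeriodUnits 5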
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\
My Root CA\ValidityPeriodUnits:
Old Value:
ValidityPeriodUnits REG_DWORD = 1
New Value:
ValidityPeriodUnits REG_DWORD = 5
CertUtil: -setreg command completed successfully.
The CertSvc service may need to be restarted for changes to take effect.
By default, the validity period is measured in years. You can check if this is true:
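The check is presumably the matching certutil -getreg call:

certutil -getreg CA\ValidityPeriod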
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\
My Root CA\ValidityPeriod:
If in your installation it returns something other than Years, use the following command
to correct the period:
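That command isn't reproduced here; presumably it is the matching -setreg call:

certutil -setreg CA\ValidityPeriod Years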
To apply all these changes, you must restart the CA service. No need to restart the whole server,
just the service is enough (but if you prefer, you can reboot the machine, of course):
Example 31: Restarting the certificate service after making changes to the root CA
1 Restart-Service -Name CertSvc
Now, you can issue an updated CRL on this CA (it will be empty and that’s OK):
Example 32: Inspecting the root CA’s default certificate and CRL location before issuing a new CRL
1 $Pth = Join-Path -Path $env:SystemRoot -ChildPath 'system32\CertSrv\CertEnroll'
2 Get-ChildItem -Path $Pth
Directory: C:\Windows\system32\CertSrv\CertEnroll
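The publication command itself isn't shown at this point; based on Example 58 later in the chapter, it is presumably the classic certutil call that generates and publishes a fresh CRL:

certutil -CRL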
Note that the new CRL file has the name we specified in the CDP configuration earlier:
Example 34: Inspecting the new CRL, which has a new file name defined in the CA configuration
1 $Pth = Join-Path -Path $env:SystemRoot -ChildPath 'system32\CertSrv\CertEnroll'
2 Get-ChildItem -Path $Pth
Directory: C:\Windows\system32\CertSrv\CertEnroll
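The CRL inspection that follows presumably repeats the earlier Get-CertificateRevocationList call against the renamed file; a sketch, with the same caveat about the assumed -Path parameter:

$CrlPath = Join-Path -Path $env:SystemRoot -ChildPath 'system32\CertSrv\CertEnroll\MyRootCA.crl'
Get-CertificateRevocationList -Path $CrlPath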
Version : 2
Type : BaseCrl
IssuerName :
System.Security.Cryptography.X509Certificates.X500DistinguishedName
Issuer : CN=My Root CA
ThisUpdate : 7/11/2021 11:12:45 PM
NextUpdate : 1/12/2022 10:32:45 AM
SignatureAlgorithm : sha512ECDSA (1.2.840.10045.4.3.4)
CRLNumber : 2
Extensions : {Authority Key Identifier (2.5.29.35),
CA Version (1.3.6.1.4.1.311.21.1),
CRL Number (2.5.29.20),
Next CRL Publish (1.3.6.1.4.1.311.21.4)}
RevokedCertificates : {}
RawData : {48, 130, 1, 67...}
Handle :
System.Security.Cryptography.X509Certificates.SafeCRLHandleContext
Thumbprint :
A6E99F8127704FAFB5D21EBCD54B01C19A106EA8D54935C73C05702CEDF9C07D
Great, the next update date is now six months in the future!
15.5.1.2.2 Issuing CA
The Issuing CA is a server where you issue certificates to endpoints (computers and users),
including code signing certificates. Go to the computer that will be an issuing CA in your
infrastructure (this chapter assumes it has Windows Server installed on it already and is a
member of your AD domain).
You may have several issuing and root CAs in your environment, including several
issuing CAs under a single root CA. If you have a large environment, you may have
a third tier of CAs, with the intermediate (middle) tier known as “policy CAs.” Similar
to delegation of OUs in large environments, policy CAs allow you to delegate CA control
to administrative tiers.
First, we need to deliver the root CA’s certificate to the issuing CA (copy it manually or via RDP
copy-and-paste).
The location of the root CA’s certificate was discussed in Example 20.
This example places the file in the root of the system volume (the C: drive), but you can change it
to meet your needs. Import this certificate into the local “Trusted Root Certification Authorities”
computer store:
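The import command isn't reproduced here; a minimal sketch, assuming the certificate file was copied to the root of the C: drive and kept the file name shown in Example 20:

# Trust the root CA on the issuing CA (sketch)
$Params = @{
    FilePath          = 'C:\ROOTCA01_My Root CA.crt'
    CertStoreLocation = 'Cert:\LocalMachine\Root'
}
Import-Certificate @Params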
PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\Root
Thumbprint Subject
---------- -------
EABE909367E3D4F80B399AB8CC2677A5C9686C1F CN=My Root CA
Example 37: Installing the Active Directory Certificate Services CA feature on the issuing CA
1 Install-WindowsFeature -Name ADCS-Cert-Authority -IncludeManagementTools
Then install the CA itself. In this case, the CA’s name is “My Issuing CA,” the length of the
elliptic curve and hash function are both 256 bits, and the CA type is of course “EnterpriseSub-
ordinateCA.”
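A sketch of that installation, assuming the parameter values just described (the book's exact command may differ slightly):

# Configure an enterprise subordinate (issuing) CA (sketch)
$Params = @{
    CAType             = 'EnterpriseSubordinateCA'
    CACommonName       = 'My Issuing CA'
    CryptoProviderName = 'ECDSA_P256#Microsoft Software Key Storage Provider'
    KeyLength          = 256
    HashAlgorithmName  = 'SHA256'
    Force              = $true
}
Install-AdcsCertificationAuthority @Params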
ErrorId ErrorString
------- -----------
398 The Active Directory Certificate Services installation is incomplete. To
complete the installation, use the request file "C:\CA02.ad.example.net_
My Issuing CA.req" to obtain a certificate ...
The installation is incomplete, because we need to obtain the certificate of this CA from our
root CA, and, since the root CA is offline, we must do this manually. That message tells us that
at “C:\CA02.ad.example.net_My Issuing CA.req” a generated certificate request is waiting. Let’s
look at it with the help of Get-CertificateRequest from the PSPKI module:
Example 39: Inspecting the certificate request file for signing the issuing CA’s certificate
1 Get-CertificateRequest -Path 'C:\CA02.ad.example.net_My Issuing CA.req'
RequestType : PKCS10
SubjectDn :
System.Security.Cryptography.X509Certificates.X500DistinguishedName
ExternalData :
Version : 1
SubjectName :
System.Security.Cryptography.X509Certificates.X500DistinguishedName
Subject : CN=My Issuing CA, DC=ad, DC=example, DC=net
PublicKey : System.Security.Cryptography.X509Certificates.PublicKey
Extensions : {CA Version (1.3.6.1.4.1.311.21.1),
Subject Key Identifier (2.5.29.14),
Certificate Template Name (1.3.6.1.4.1.311.20.2),
Key Usage (2.5.29.15)...}
Attributes : {0}
SignatureAlgorithm : sha256ECDSA (1.2.840.10045.4.3.2)
SignatureIsValid : True
RawData : {48, 130, 1, 242...}
Using this method you can validate whether all request parameters are correct. To proceed, copy
the request file to the root CA computer. The following commands are to be executed on the root
CA, not on the issuing CA.
To send the request to the certification authority, use Submit-CertificateRequest. Note that
we save the CA object in a variable—that’s because we’ll need it later.
Example 40: Submitting the issuing CA’s certificate request on the root CA
1 $CA = Connect-CertificationAuthority
2 $Params = @{
3 Path = $env:SystemDrive
4 ChildPath = 'CA02.ad.example.net_My Issuing CA.req'
5 }
6 $Path = Join-Path @Params
7 Submit-CertificateRequest -Path $Path -CertificationAuthority $CA
CertificationAuthority : PKI.CertificateServices.CertificateAuthority
RequestID : 2
Status : UnderSubmission
Certificate :
ErrorInformation : Taken Under Submission
If you check on the request, you’ll see that the certificate will be issued using the “SubCA”
template—this is a special template, which Windows PKI uses for subordinate (child) CA
certificates.
Example 41: Checking the status of the submitted request on the root CA
1 Get-PendingRequest -CertificationAuthority $CA
RequestID : 2
Request.RequesterName : ROOTCA01\Administrator
Request.SubmittedWhen : 8/29/2021 9:20:59 PM
Request.CommonName : My Issuing CA
CertificateTemplate : SubCA
CertificateTemplateOid : SubCA
RowId : 2
ConfigString : ROOTCA01\My Root CA
Table : Request
Properties : {[RequestID, 2],
[Request.RequesterName, ROOTCA01\Administrator],
[Request.SubmittedWhen, 8/29/2021 9:20:59 PM],
[Request.CommonName, My Issuing CA]...}
Example 42: Approving the issuing CA’s certificate request on the root CA
1 $Request = Get-PendingRequest -CertificationAuthority $CA
2 Approve-CertificateRequest -Request $Request
HResult StatusMessage
------- -------------
0 The certificate '2' was issued.
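The step that captures the issued request into the $IssuedRequest variable used by Example 44 isn't shown here; presumably it relies on PSPKI's Get-IssuedRequest, along these lines:

# Retrieve the freshly issued request from the root CA database (sketch)
$IssuedRequest = Get-IssuedRequest -CertificationAuthority $CA
$IssuedRequest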
RequestID : 2
Request.RequesterName : ROOTCA01\Administrator
CommonName : My Issuing CA
NotBefore : 8/29/2021 9:11:49 PM
NotAfter : 8/29/2026 9:21:49 PM
SerialNumber : 6b00000002a4208d0a6e2af611000000000002
CertificateTemplate : SubCA
CertificateTemplateOid : SubCA
RowId : 2
ConfigString : ROOTCA01\My Root CA
Table : Request
Properties : {[RequestID, 2],
[Request.RequesterName, ROOTCA01\Administrator],
[CommonName, My Issuing CA],
[NotBefore, 8/29/2021 9:11:49 PM]...}
The last step on the root CA is to export the certificate for the approved request into a file. Note
the “RequestID_2.cer” file in the listing below: copy this file to the issuing CA. Note also that the
example below writes to the root of the system drive: this will not be allowed for
non-administrative users.
Example 44: Exporting the issued certificate so that it can be copied to the issuing CA
1 $Item = Get-Item -Path $Env:SystemDrive
2 Receive-Certificate -RequestRow $IssuedRequest -Path $Item | Format-List
Thumbprint: E1BF218D5C4B58B16BABC003558222975B717E3A
Subject: CN=My Issuing CA, DC=ad, DC=example, DC=net
Example 45: Inspecting the system drive (C:) to show the newly issued certificate file
1 Get-ChildItem -Path $Env:SystemDrive
Directory: C:\
We’ve finished with the root CA; execute the subsequent commands on the issuing CA. Copy
the new certificate (RequestID_2.cer in this example) to the issuing CA computer. Using RDP
copy-and-paste is often the simplest way to accomplish this. If that isn’t available to you, a file
share is often the next best option. In the worst case, a mounted drive that you transfer between
VMs always works, albeit with the most administrative overhead.
The last step in starting up a subordinate CA is to install the signed certificate to it. Unfortunately,
there is no native PowerShell cmdlet for this. Even PSPKI doesn’t have a function for that yet²⁶.
But the “legacy” command-line tools are here to help:
Example 46: Importing the certificate, signed by the root CA, on the issuing CA
1 certutil -installcert "$Env:SystemDrive\RequestID_2.cer"
If certutil reports a revocation check error at this point, it means that the issuing CA
machine can’t reach the CDP that you defined in the root CA configuration step. The
potential reasons are countless: maybe it’s a firewall issue, or you didn’t add the DNS
record, or perhaps you didn’t configure that web server at all. To fix this, make the CDP
available to the issuing CA and all your clients now; never ignore certificate revocation
problems!
Now you can start the “CertSvc” service and it will work.
²⁶https://github.com/PKISolutions/PSPKI/issues/151
However, you aren’t finished. Recall how we configured AIA and CDP locations for the root
CA? A subordinate CA is no different, because its default configuration is the same. Once again,
you need to review the default CDP and AIA locations, remove the ones you won’t use, and add
the ones you will, as shown below.
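The CDP listing command (Example 48 in the chapter's numbering) isn't reproduced here; it presumably mirrors Example 49:

Get-CACrlDistributionPoint | Format-List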
PublishToServer : True
PublishDeltaToServer : True
AddToCertificateCdp : False
AddToFreshestCrl : False
AddToCrlCdp : False
AddToCrlIdp : False
Uri : C:\Windows\system32\CertSrv\CertEnroll\
<CAName><CRLNameSuffix><DeltaCRLAllowed>.crl
PublishToServer : True
PublishDeltaToServer : True
AddToCertificateCdp : True
AddToFreshestCrl : True
AddToCrlCdp : True
AddToCrlIdp : False
Uri : ldap:///CN=<CATruncatedName><CRLNameSuffix>,
CN=<ServerShortName>,CN=CDP,CN=Public Key Services,
CN=Services,<ConfigurationContainer><CDPObjectClass>
PublishToServer : False
PublishDeltaToServer : False
AddToCertificateCdp : False
AddToFreshestCrl : False
AddToCrlCdp : False
AddToCrlIdp : False
Uri : http://<ServerDNSName>/CertEnroll/
<CAName><CRLNameSuffix><DeltaCRLAllowed>.crl
PublishToServer : False
PublishDeltaToServer : False
AddToCertificateCdp : False
AddToFreshestCrl : False
AddToCrlCdp : False
AddToCrlIdp : False
Uri : file://<ServerDNSName>/CertEnroll/
<CAName><CRLNameSuffix><DeltaCRLAllowed>.crl
Example 49: Displaying the default AIA locations for the issuing CA
1 Get-CAAuthorityInformationAccess | Format-List
AddToCertificateAia : False
AddToCertificateOcsp: False
Uri : C:\Windows\system32\CertSrv\CertEnroll\
<ServerDNSName>_<CAName><CertificateName>.crt
AddToCertificateAia : True
AddToCertificateOcsp: False
Uri : ldap:///CN=<CATruncatedName>,CN=AIA,
CN=Public Key Services,CN=Services,
<ConfigurationContainer><CAObjectClass>
AddToCertificateAia : False
AddToCertificateOcsp: False
Uri : http://<ServerDNSName>/CertEnroll/
<ServerDNSName>_<CAName><CertificateName>.crt
AddToCertificateAia : False
AddToCertificateOcsp: False
Uri : file://<ServerDNSName>/CertEnroll/
<ServerDNSName>_<CAName><CertificateName>.crt
Look at the folder content where the files are put by default:
Example 50: Inspecting the default certificate and CRL location on the issuing CA
1 Get-ChildItem -Path 'C:\Windows\System32\CertSrv\CertEnroll'
Directory: C:\Windows\System32\CertSrv\CertEnroll
Remove unneeded locations and add the ones which you’ll use, using the same principles as with
the root CA (in this case, meaning to keep only file and HTTP entries):
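The removal command (Example 51) isn't shown here; presumably it clears every predefined CDP in one pipeline, just as on the root CA:

Get-CACrlDistributionPoint | Remove-CACrlDistributionPoint -Force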
RestartCA
---------
True
True
True
True
By executing Example 51, you remove all of the CDPs (CRL Distribution Points) shown by
Example 48. This may, or may not, be appropriate for your production environment. Example
48 shows CDPs hosted: on the local file system, on a remote file system, on a web server, and in
Active Directory. In Example 53 below, the web server CDP is added back (with a generic URL).
In Example 54 below, the local file system CDP is added back. For your environment, you may
want to retain or add values back for some of the other options (especially Active Directory).
Example 52: Removing all AIA locations from the issuing CA except the one for the local file system
1 Get-CAAuthorityInformationAccess |
2 Where-Object -FilterScript {$_.Uri -notlike ('{0}*' -f $env:SystemRoot)} |
3 Remove-CAAuthorityInformationAccess -Force
RestartCA
---------
True
True
True
By executing Example 52, you remove all of the AIAs (Authority Information Access locations)
shown by Example 49, except for the AIA hosted on the local file system. This may, or may not,
be appropriate for your production environment. Example 49 shows AIAs hosted: on the local
file system, on a remote file system, on a web server, and in Active Directory. In Example 55
below, the web server AIA is added back (with a generic URL). For your environment, you may
want to retain or add values back for some of the other options (especially Active Directory).
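The command that adds the web server CDP back (Example 53) isn't included here; a sketch, assuming the same "pki.example.com" host and a space-free file name (both illustrative):

$Params = @{
    Uri                 = 'http://pki.example.com/CDP/MyIssuingCA.crl'
    AddToCertificateCdp = $true
    Force               = $true
}
Add-CACrlDistributionPoint @Params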
RestartCA
---------
True
Example 54: Adding back the file system CDP for the issuing CA
1 $Params = @{
2 Path = $env:SystemRoot
3 ChildPath = 'system32\CertSrv\CertEnroll\MyIssuingCA.crl'
4 }
5 $Path = Join-Path @Params
6 Add-CACrlDistributionPoint -Uri $Path -PublishToServer -Force
RestartCA
---------
True
Example 55: Adding the custom AIA location for the issuing CA
1 $Params = @{
2 Uri = 'http://pki.example.com/AIA/MyIssuingCA.crt'
3 AddToCertificateAia = $true
4 Force = $true
5 }
6 Add-CAAuthorityInformationAccess @Params
RestartCA
---------
True
As you might remember, on the root CA we changed the CRL validity period, because that’s
an offline server and having a CRL valid only for a week is really inconvenient. For online
certification authorities, like “My Issuing CA,” this isn’t a problem, because you can easily
automate CRL release cycles with online servers. Therefore, you won’t change this setting here.
Let’s check that it’s set to one week:
Example 56: Confirming that the issuing CA’s CRL update period is in weeks
1 certutil -getreg CA\CRLPeriod
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\
My Issuing CA\CRLPeriod:
Example 57: Confirming that the issuing CA’s CRL update interval is 1 week
1 certutil -getreg CA\CRLPeriodUnits
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\
My Issuing CA\CRLPeriodUnits:
CRLPeriodUnits REG_DWORD = 1
CertUtil: -getreg command completed successfully.
All seems good! Now you can restart the CA service and reissue the CRL to the newly defined
location:
Example 58: Restarting the certificate service on the issuing CA after making changes
1 Restart-Service -Name CertSvc
2 certutil -CRL
Example 59: Inspecting the default certificate and CRL location on the issuing CA after the changes have been
applied
1 Get-ChildItem -Path 'C:\Windows\System32\CertSrv\CertEnroll'
Directory: C:\Windows\System32\CertSrv\CertEnroll
Grab this new “MyIssuingCA.crl” file and the CRT file and place them onto the web server for
your clients to download when they check for revocation.
We have not covered creating a web-based AIA or CDP in this chapter or configuring
those as defaults in the Certificate Authority. These are often hosted on an issuing CA
using IIS, but in larger environments may be hosted on a separate web server. When
hosted on a single issuing CA, the local file location can be used as the source of an
IIS virtual directory (this is C:\Windows\System32\CertSrv\CertEnroll by default).
But, as you see in Example 53 and Example 55, creating web-based AIAs and CDPs is
something which is usually done. If you host a web-based AIA or CDP on a separate
web server (or even if you have multiple issuing CAs), then you will need to script and
schedule the copying of CRLs and CA certificates to the target location. This is also true
for the root certificate and CRL.
You might think that’s it; you’re ready to issue certificates. Not quite, unfortunately. In Windows
PKI, there’s a concept called certificate templates. A certificate template is a set of properties
which define how a certificate can be issued, for which purposes, who can issue it, to whom, and
where that certificate can be stored.
For standalone certificate authorities, templates are stored in the registry. However, we are pri-
marily concerned with enterprise certificate authorities in this chapter. For enterprise certificate
authorities, certificate templates are AD DS objects which are located under the Configuration
Partition.
You can get a list of them with the Get-CertificateTemplate command (part of the PSPKI
module):
You should never use the built-in certificate templates and instead create your own. There are
several reasons:
1. The built-in templates don’t have all features available on the modern OSes. See that
SupportedCA property? While not going deep into the details, the more recent the OS,
the more features are available in the template.
2. It’s good to always have a default configuration set, so you can roll back to it and try
again if you mess up the settings of your certificate template.
So treat these just as you do default group policies: use them for reference. Don’t modify them;
instead, if you need to make changes to the templates, then make copies and work with those
template copies.
Usually, you create a copy of a template via the GUI, but this book isn’t about GUIs. ;)
Unfortunately, Microsoft doesn’t provide a built-in way to duplicate certificate templates in
PowerShell. An external module ADCSTemplate²⁷ exists, but using it is tricky. To create a new
Certificate Template, this module needs a description of that template in JSON form. Usually,
you get this JSON using the Export-ADCSTemplate cmdlet, but for this you need a template
with all the required properties to be in your AD DS already. To spare you from dealing with the
GUI, this chapter includes a JSON object for a code signing template:
22 "msPKI-Template-Schema-Version": 4,
23 "pKICriticalExtensions": ["2.5.29.15"],
24 "pKIDefaultKeySpec": 2,
25 "pKIExpirationPeriod": [0, 64, 57, 135, 46, 225, 254, 255],
26 "pKIExtendedKeyUsage": ["1.3.6.1.5.5.7.3.3"],
27 "pKIKeyUsage": [128, 0],
28 "pKIMaxIssuingDepth": 0,
29 "pKIOverlapPeriod": [0, 128, 166, 10, 255, 222, 255, 255]
30 }
31 '@
Note that the JSON has extra line breaks due to book formatting limitations. To correct this
limitation, run the following command:
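The command isn't included in this section; one possible approach, assuming the goal is simply to strip the line breaks that the page width introduced into the here-string (JSON ignores the removed whitespace), is:

# Rejoin wrapped lines in the JSON here-string (sketch)
$JSONRaw = $JSONRaw -replace '\r?\n', ''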
To create the JSON output, the example duplicates the built-in Code Signing template, upgrades
its SupportedCA property to Windows Server 2019, and declares that this template is for issuing
ECDSA certificates. Feel free to tweak it, according to your needs!
To import this template in your environment, run the following:
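The import command isn't reproduced here; a sketch, assuming the ADCSTemplate module's New-ADCSTemplate function with its -DisplayName and -JSON parameters (check the module's documentation for the exact signature):

New-ADCSTemplate -DisplayName 'My Code Signing' -JSON $JSONRaw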
Example 63: Manually adding the missing property to the imported AD template using the same JSON string
1 $JSON = ConvertFrom-Json -InputObject $JSONRaw
2 $DN = (Get-ADCSTemplate -DisplayName 'My Code Signing').DistinguishedName
3 Set-ADObject -Identity $DN -Add @{
4 'msPKI-RA-Application-Policies' = $JSON.'msPKI-RA-Application-Policies'
5 }
The Set-ADObject cmdlet is part of the RSAT-ADDS Windows Feature and is not
installed by default on Certificate Authority servers. It also requires elevated permissions
(write-all-properties on the particular policy involved, typically assigned only to Domain
Admins and the object’s Creator/Owner). To install the feature on the CA server, run
Install-WindowsFeature -Name RSAT-ADDS -IncludeAllSubFeature from an elevated
PowerShell session. Alternatively, after retrieving the distinguished
name on the CA server, you can run the cmdlet in an elevated PowerShell session on a
domain controller (where RSAT-ADDS is already installed by default).
The last step for the template is to publish the template on your issuing CA, therefore allowing
it to issue certificates using this template:
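The publishing step isn't shown here; a sketch using PSPKI's Add-CATemplate, selecting the CA by the display name that also appears in the output below:

Get-CertificationAuthority |
    Where-Object -FilterScript {$_.DisplayName -eq 'My Issuing CA'} |
    Add-CATemplate -Name 'MyCodeSigning'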
DisplayName Templates
----------- ---------
My Issuing CA {MyCodeSigning}
Before you use a certificate from your PKI, you need to make sure that the certificate of its
root certification authority is distributed among all potential users of the certificate in your
organization and all consumers of your signed scripts. You have several options here:
You can always import a certificate on any machine by manually executing commands. You
would certainly need this for non-domain clients, for example.
A preferred way to deploy root CA certificates to domain-joined machines is to use the built-in
functionality of Active Directory Domain Services: certificate containers. Those are containers
located under:
There are several of them, but we’re interested in the one storing root CA certificates:
CN=Certification Authorities.
The beauty of certificate containers is that by using them, you’re making certificates accessible
to ALL your domain clients at once. Our trusty PSPKI module has several functions to help with
certificate containers. Here’s how to use them:
To learn more about certificate containers, please see this very well-written article²⁸ by
Vadims Podāns.
15.5.1.2.7 DSC
If you haven’t worked with Desired State Configuration before, we have just the chapter
for you! Please refer to Infrastructure as Code and come back to this once you’re done.
1. Place the certificate file on an SMB share available to all computers to which you want to
deploy that certificate.
2. Configure a CertificateImport resource, as described in the example below.
3. Deploy the configuration to your nodes!
²⁸https://www.pkisolutions.com/understanding-active-directory-certificate-services-containers-in-active-directory/
²⁹https://github.com/dsccommunity/CertificateDsc/
³⁰https://dsccommunity.org/
Example 67: A DSC resource for importing a root CA certificate from an SMB share
1 Configuration DeployMyRootCertificate
2 {
3 Import-DscResource -ModuleName CertificateDsc
4
5 Node localhost
6 {
7 CertificateImport MyRootCertificate
8 {
9 Thumbprint = '0000000000000000000000000000000000000000'
10 Location = 'LocalMachine'
11 Store = 'Root'
12 Path = '\\ad.example.net\NETLOGON\ROOTCA01_My Root CA.crt'
13 }
14 }
15 }
1. The Thumbprint parameter of the configuration isn’t used when importing a new
certificate—you need it only when you want to remove a certificate from the machine.
However, the parameter is marked as mandatory, which means you must fill it with a
string that looks like a certificate thumbprint (there’s a check for that).
2. The Path parameter in this example points to a standard NETLOGON share. This is just
an example and might not be the best method for your infrastructure. You might use DSC
to manage non-domain machines which don’t have access to the NETLOGON folder. You
should also never distribute large files this way!
15.5.1.2.8 Intune
For workstations connected to Intune, you can use trusted certificate profiles³¹ to import root
certificates.
As a result of all this private PKI adventure, you can now issue yourself a new certificate:
Example 68: Requesting a code signing certificate and storing it in the current user’s personal store
1 $Params = @{
2 Template = 'MyCodeSigning'
3 CertStoreLocation = 'Cert:\CurrentUser\My'
4 }
5 Get-Certificate @Params
³¹https://learn.microsoft.com/en-us/mem/intune/protect/certificates-trusted-root
Status Certificate
------ -----------
Issued [Subject]...
Hmm… Not much info in the output, so let’s take a closer look at it:
Example 69: Retrieving a list of certificates in the current user’s personal store
1 $Cert = Get-ChildItem -Path 'Cert:\CurrentUser\My'
2 $Cert | Format-List
Thumbprint: BC85FF727E7DA52FC34AD92B75B7FE031EF908DA
Subject : CN=Administrator, CN=Users, DC=ad, DC=example, DC=net
PSPath : Microsoft.PowerShell.Security\Certificate::CurrentUs
er\My\BC85FF727E7DA52FC34AD92B75B7FE031EF908DA
PSParentPath : Microsoft.PowerShell.Security\Certificate::CurrentUs
er\My
PSChildName : BC85FF727E7DA52FC34AD92B75B7FE031EF908DA
PSDrive : Cert
PSProvider : Microsoft.PowerShell.Security\Certificate
PSIsContainer : False
EnhancedKeyUsageList : {Code Signing (1.3.6.1.5.5.7.3.3)}
DnsNameList : {Administrator}
SendAsTrustedIssuer : False
EnrollmentPolicyEndPoint : Microsoft.CertificateServices.Commands.EnrollmentEnd
PointProperty
EnrollmentServerEndPoint : Microsoft.CertificateServices.Commands.EnrollmentEnd
PointProperty
PolicyId : {360BD38F-E9BE-4724-86F3-65CD14FF86C9}
Archived : False
Extensions : {System.Security.Cryptography.Oid...}
FriendlyName :
IssuerName : System.Security.Cryptography.X509Certificates.X500Di
stinguishedName
NotAfter : 9/27/2022 9:35:21 PM
NotBefore : 9/27/2021 9:35:21 PM
HasPrivateKey : True
PrivateKey :
PublicKey : System.Security.Cryptography.X509Certificates.Public
Key
RawData : {48, 130, 3, 106...}
SerialNumber : 3A000000052570DD73698499F7000000000005
SubjectName : System.Security.Cryptography.X509Certificates.X500Di
stinguishedName
SignatureAlgorithm : System.Security.Cryptography.Oid
Thumbprint : BC85FF727E7DA52FC34AD92B75B7FE031EF908DA
Version : 3
Handle : 2611199710288
Issuer : CN=My Issuing CA, DC=ad, DC=example, DC=net
Subject : CN=Administrator, CN=Users, DC=ad, DC=example, DC=net
Yep—that seems about right! Now you can use it to sign code as described in the Signing Process
paragraph above.
15.5.2.1 GPO
Unfortunately, there are no AD DS containers to hold Code Signing certificates. One of the
possible alternatives for this task is Group Policies. In GPOs, certificates are stored as registry
keys. And that’s how they get onto target computers too—as registry keys. Therefore, in order
to import a certificate into a group policy object, import it as a registry key, with the help of
Set-GPRegistryValue. The registry keys that store certificate objects are binary, which means:
A nice touch is that the structure of these binary keys isn’t documented.
Before you proceed any further, understand that this is an unofficial way to import
certificates into a GPO—it might stop working at any moment. Unfortunately, in 2021
there is still no official command-line method to do this—Microsoft wants you to import
certificates into group policies through the MMC console³². If you’re fine with that, by
all means, go ahead and use the GUI instead of this unsupported command-line method.
Note that various third parties do have software that provides this support. This includes
SDM Software³³ and their Group Policy Automation Engine. Also please note that
pushing Certificates via Group Policy using Microsoft’s Group Policy Management
Console is not difficult and should be considered if you only have a few certificates
that need to be distributed.
Working with undocumented binary structures is always a challenge, but observe that a
certificate imported manually into the local machine store and the same certificate imported
through Group Policies yield identical registry values. That means that we can just get the value
from the local registry and set that value into a group policy object!
In the code below, $GPOName is the name of a Group Policy Object you want to use to
deploy the certificate and this object is already linked to a desired OU.
First, you need to get the thumbprint of your certificate for later use:
³²https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/deployment/distribute-certificates-to-client-computers-by-
using-group-policy
³³https://www.sdmsoftware.com
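The lookup isn't reproduced here; a minimal sketch, assuming a single code signing certificate in the user's store (the variable names match those used in the examples that follow):

# Grab the code signing certificate and its thumbprint (sketch)
$Certificate = Get-ChildItem -Path 'Cert:\CurrentUser\My' -CodeSigningCert
$Thumbprint = $Certificate.Thumbprint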
Note that the above only works properly if you have a single code signing certificate
stored for your user. Otherwise, you may need to view the Subject, NotBefore, and
NotAfter fields to identify the proper certificate.
Next, temporarily export the certificate into a file (only the public part) and import it back, but
into the local machine store this time. You must do this because personal user certificates reside
on the file system rather than in the registry. You can’t access them through HKEY_CURRENT_USER
or any other registry hive.
Example 72: Exporting the code signing certificate and reimporting it into the machine store
1 $ExportPath = Join-Path -Path $env:TEMP -ChildPath ([guid]::NewGuid().Guid)
2 Export-Certificate -Cert $Certificate -FilePath $ExportPath
3 $Params = @{
4 FilePath = $ExportPath
5 CertStoreLocation = 'Cert:\LocalMachine\Root\'
6 }
7 Import-Certificate @Params
8 Remove-Item -Path $ExportPath
Importing certificates into the machine store requires local administrator permission on
a computer.
The example doesn’t use the X509Store .NET class here to copy certificates be-
tween the stores because it encodes certificate objects differently than the GUI. It
still works, but just to be safe, you’ll want to mimic the GUI as closely as possible.
Import-Certificate helps with that, producing results identical to the GUI.
You can now get the certificate object from the local registry and add it into the GPO registry:
Example 73: Extracting the registry data for the certificate and creating a GPO registry value from it
1 $JPParams = @{
2 Path = 'HKLM:\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates'
3 ChildPath = $Thumbprint
4 }
5 $Path = Join-Path @JPParams
6 $CertBlob = Get-ItemPropertyValue -Path $Path -Name 'Blob'
7 $GPOCertContainerPath = 'HKLM\SOFTWARE\Policies\Microsoft\SystemCertificates'
8 $Key = "$GPOCertContainerPath\TrustedPublisher\Certificates\$Thumbprint"
9 $SGParams = @{
10 Name = $GPOName
11 Key = $Key
12 ValueName = 'Blob'
13 Value = $CertBlob
14 Type = 'Binary'
15 }
16 Set-GPRegistryValue @SGParams
Example 74: Removing the certificate that was temporarily added to the machine store
1 Remove-Item -Path "Cert:\LocalMachine\Root\$Thumbprint"
The certificate has been added to the GPO and will be delivered to your computers soon.
The CertificateImport resource from the CertificateDsc module allows you to import certificates
not only into the Root store but into virtually any of them!³⁴ Here’s an example of a configuration
for code-signing certificates:
Example 75: A DSC resource for importing a code signing certificate from an SMB share
1 Configuration DeployMySigningCertificate
2 {
3 Import-DscResource -ModuleName CertificateDsc
4
5 Node localhost
6 {
7 CertificateImport MySigningCertificate
8 {
9 Thumbprint = '0000000000000000000000000000000000000000'
10 Location = 'LocalMachine'
11 Store = 'TrustedPublisher'
12 Path = '\\ad.example.net\NETLOGON\Administrator.crt'
13 }
14 }
15 }
This is much easier than with Group Policies and has no issues with support!
³⁴DSC Community. (2021, Feb. 26). Welcome to the CertificateDsc wiki. dsccommunity/CertificateDsc on GitHub. [Online]. Available:
https://github.com/dsccommunity/CertificateDsc/wiki. [Accessed: Sep. 15, 2022].
Intune doesn’t have native functionality to deploy certificates to the Trusted Publishers store.
However, the Intune Support team described how you can utilize custom configuration profiles³⁵
for this.
15.6 Summary
In this chapter, you have learned why you need to sign your code, how to do it, and how to
implement your own Public Key Infrastructure to sign an unlimited number of scripts for free.
In the next chapter, Script Execution Policies, you’ll see how you can use this knowledge to
protect your infrastructure even further, by utilizing PowerShell Execution Policies.
³⁵https://techcommunity.microsoft.com/t5/intune-customer-success/adding-a-certificate-to-trusted-publishers-using-intune/ba-
p/1974488
16. Script Execution Policies
Script execution policies dictate the conditions under which PowerShell runs scripts and loads
configuration files and PowerShell modules. This chapter discusses in-depth how script execution
policies can be used and implemented. It will also give some use cases for each policy. PowerShell
defines the following seven execution policies:
• AllSigned
• RemoteSigned
• Restricted
• Unrestricted
• Bypass
• Default
• Undefined
16.1.1 AllSigned
The AllSigned execution policy allows scripts to run, but requires all scripts and configuration
files to be signed by a trusted publisher, even scripts that you wrote yourself.
Signing a script involves using cryptographic signatures to provide information about the
integrity and authenticity of the file. The signature is checked at the time the script is run to
ensure that the contents of the script haven’t changed since signing and that the certificate used
to sign the script originates from a trusted source.
The intricacies of script signing are covered in the previous chapter, Script Signing.
The AllSigned policy also causes the shell to prompt you before running scripts from publishers
that you haven’t yet classified as trusted or untrusted. To designate a publisher as trusted, the
certificate that was used to sign the script must be in the Trusted Publishers certificate store, and
its root certificate must be in the Trusted Root Certification Authorities store. Documentation
on how to import a certificate is available on Microsoft Docs².
¹Microsoft. (2022, Apr. 20). System.Management.Automation - SecuritySupport.cs. L32-66. PowerShell/PowerShell on GitHub.
[Online]. Available: https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/security/Security-
Support.cs. [Accessed: Aug. 03, 2022].
²https://learn.microsoft.com/en-us/windows-hardware/drivers/install/trusted-publishers-certificate-store
It’s important to note that this policy isn’t a failsafe against malicious scripts, as the system still
permits a signed script to run, regardless of content. It’s not unheard of for disgruntled employees
to write innocuous-looking scripts designed to cause catastrophic damage to the environment.
These malicious scripts are called “logic bombs” and are a challenging security threat. Since
they rarely match any known malware signatures, they can avoid detection by antivirus and
antimalware software.
16.1.2 RemoteSigned
RemoteSigned is the default execution policy for Windows Server operating systems.³
It allows PowerShell scripts to run, but requires that PowerShell scripts and files downloaded
from the internet be signed by a trusted publisher. This includes scripts attached to emails or
downloaded from instant messaging apps.
RemoteSigned differs significantly from AllSigned in that it allows unsigned scripts created on
the local computer to run, while still requiring that scripts downloaded from the internet be
signed by a trusted publisher or explicitly unblocked. Windows blocks files when the OS detects
that they originate from the internet zone (zone 3), and PowerShell’s RemoteSigned policy
prevents blocked scripts and modules from running if they aren’t signed by a trusted publisher.
The behavior differs from AllSigned once an unsigned file is unblocked: a file can be unblocked
by anyone with write access to it, and it will then run without a signature.
Blocked files are files that originate from outside of the local intranet zone; files downloaded
from the internet are blocked by default. There are two ways to unblock a file:
The first is to use the Windows Graphical User Interface (GUI) by right-clicking the file and
selecting ‘Properties’. In the ‘General’ tab, there’s a checkbox labeled ‘Unblock’.
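The second, presumably, is the Unblock-File cmdlet; for example (the script path here is hypothetical):

Unblock-File -Path '.\Invoke-Deployment.ps1'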
³Microsoft. (2022, Mar. 18). About Execution Policies (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://
learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies. [Accessed: Aug. 03, 2022].
RemoteSigned is useful in cases where a user is an administrator or power user whose job
involves creating PowerShell scripts, as it allows running local scripts but still has some measures
in place to prevent running potentially harmful scripts downloaded from the internet.
16.1.3 Restricted
The Restricted execution policy is the default execution policy for Windows client
computers.⁴
⁴Microsoft. (2022, Mar. 18). About Execution Policies (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://
learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies. [Accessed: Aug. 03, 2022].
It permits running individual commands but doesn’t allow any scripts, configuration files,
module files, or PowerShell profiles to run, even when they’re signed. Restricted is, as its
name indicates, the most restrictive execution policy, and is most often used in high-security
environments, or on machines where PowerShell isn’t used for any processes.
16.1.4 Unrestricted
Unrestricted is the default policy for non-Windows systems, and can’t be changed.⁵
This policy allows unsigned scripts to run, but still warns the user before running scripts or
configuration files that don’t originate from the Local intranet zone.
Unrestricted execution policies should be avoided wherever possible. If a script file is known
to be safe or is part of a known process, confirm that the script isn’t being blocked, and that the
system it runs on has an execution policy of RemoteSigned or higher.
16.1.5 Bypass
Despite its name, the Bypass mode is even more permissive than the Unrestricted mode.
In Bypass mode, all scripts can run and there are no warnings nor prompts. This execution
policy isn’t one you are likely to encounter in most well-managed environments. It’s designed
for scenarios in which a PowerShell script is part of a larger application, or where PowerShell
itself is the foundation for an application that has its own security model and controls. Bypass
is the least restrictive execution policy.
16.1.6 Default
The Default execution policy is a bit different from the others. Default can be used as an input
for the Set-ExecutionPolicy cmdlet, but it’s not a valid output from Get-ExecutionPolicy.
The use case for Default lies in the Set-ExecutionPolicy cmdlet:
Example 2: Setting the execution policy to its default for that platform
Set-ExecutionPolicy -ExecutionPolicy Default
This command sets the execution policy based on the type of operating system you are working
with. For Windows client computers, the default policy is Restricted. For Windows servers,
the default policy is RemoteSigned.
16.1.7 Undefined
The Undefined policy means that there is no execution policy set within the current scope.
Without an execution policy set, the effective execution policy is the default for its respective
operating system, as outlined in the Default execution policy.
⁵Microsoft. (2022, Mar. 18). About Execution Policies (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available: https://
learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies. [Accessed: Aug. 03, 2022].
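The command referred to below isn't reproduced here; presumably it assigns the Undefined policy, along these lines:

Set-ExecutionPolicy -ExecutionPolicy Undefined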
Running this command causes any existing execution policy to be unassigned, unless it’s
determined by Group Policy. If the execution policy in all scopes is Undefined, the effective
execution policy is Restricted.⁶
Execution policies can be applied at five different scopes, listed here in order of precedence:
1. MachinePolicy
2. UserPolicy
3. Process
4. CurrentUser
5. LocalMachine
16.2.1.1 MachinePolicy
The MachinePolicy scope applies to all users on a machine. Users aren’t able to apply execution
policies to this scope; it can only be set using Group Policy. Setting execution policies using Group
Policy is covered in the AppLocker section of this chapter.
The machine policy for Windows PowerShell is stored in the registry at the following location:
⁶Microsoft. (2022, Mar. 08). Set-ExecutionPolicy (Microsoft.PowerShell.Security). Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/powershell/module/microsoft.powershell.security/set-executionpolicy. [Accessed: Aug. 03, 2022].
⁷Microsoft. (2022, Mar. 18). About Execution Policies (Microsoft.PowerShell.Core) - Execution policy scope. Microsoft
Docs. [Online]. Available: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_poli-
cies#execution-policy-scope. [Accessed: Aug. 03, 2022].
⁸Microsoft. (2022, Mar. 08). Set-ExecutionPolicy (Microsoft.PowerShell.Security) - Parameters. Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.security/set-executionpolicy#parameters. [Accessed: Aug. 03,
2022].
HKLM\Software\Policies\Microsoft\Windows\PowerShell
The machine policy for PowerShell 7.0 and later is in the following location:
HKLM\SOFTWARE\Policies\Microsoft\PowerShellCore
16.2.1.2 UserPolicy
UserPolicy applies to the current user of the computer. Like MachinePolicy, the execution
policy for this scope is defined by Group Policy.
The user policy for Windows PowerShell is stored in the registry at the following location:
HKCU\Software\Policies\Microsoft\Windows\PowerShell
The user policy for PowerShell 7.0 and later is in the following location:
HKCU\SOFTWARE\Policies\Microsoft\PowerShellCore
16.2.1.3 Process
Execution policies with the Process scope apply to the current PowerShell process. The policy
is stored in the following PowerShell session environmental variable until the session is closed,
at which point it’s deleted:
$ENV:PSExecutionPolicyPreference
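A short sketch of working with the Process scope (the policy value chosen here is illustrative):

Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process
$ENV:PSExecutionPolicyPreference   # returns Bypass for this session only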
16.2.1.4 CurrentUser
Policies with the CurrentUser scope apply to the current user of a machine, similar to
UserPolicy. The key difference is that permitted users can set the execution policy in the
CurrentUser scope using the Set-ExecutionPolicy cmdlet. When an execution policy has
this scope in Windows PowerShell, it’s stored in the registry in the HKEY_CURRENT_USER hive at
the following location:
HKCU\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell
In PowerShell 7.0 and later, it’s stored in the powershell.config.json configuration file in
the user’s Documents directory.⁹
⁹Microsoft. (2020, Nov. 27). System.Management.Automation - PSConfiguration.cs. L33-L66. PowerShell/PowerShell on GitHub.
[Online]. Available: https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/engine/PSConf-
iguration.cs. [Accessed: Aug. 03, 2022].
16.2.1.5 LocalMachine
An execution policy in the LocalMachine scope applies to all users of the machine, just like
MachinePolicy. As with the CurrentUser scope, however, users can change the execution policy of
this scope using Set-ExecutionPolicy, provided they have the permissions to do so. When an execution policy
has this scope in Windows PowerShell, it’s stored in the registry in the HKEY_LOCAL_MACHINE
hive at the following location:
HKLM\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell
In PowerShell 7.0 and later, it’s stored in the powershell.config.json configuration file in
the PowerShell installation directory.¹⁰
You can manage script execution and execution policies through several mechanisms:
• Set-ExecutionPolicy cmdlet¹²
• Group Policy¹³
• Registry¹⁴ ¹⁵
• AppLocker¹⁶
• Windows Defender Application Control¹⁷
¹⁰Microsoft. (2020, Nov. 27). System.Management.Automation - PSConfiguration.cs. L33-L66. PowerShell/PowerShell on GitHub.
[Online]. Available: https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/engine/PSConf-
iguration.cs. [Accessed: Aug. 03, 2022].
¹¹https://www.netspi.com/blog/technical/network-penetration-testing/15-ways-to-bypass-the-powershell-execution-policy/
¹²Microsoft. (2022, Mar. 08). Set-ExecutionPolicy (Microsoft.PowerShell.Security). Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/powershell/module/microsoft.powershell.security/set-executionpolicy. [Accessed: Aug. 03, 2022].
¹³Microsoft. (2022, Mar. 18). About Group Policy Settings (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_group_policy_settings. [Accessed: Aug. 03,
2022].
¹⁴Microsoft. (2022, Mar. 08). Set-ExecutionPolicy (Microsoft.PowerShell.Security). Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/powershell/module/microsoft.powershell.security/set-executionpolicy. [Accessed: Aug. 03, 2022].
¹⁵Microsoft. (2020, Nov. 27). System.Management.Automation - PSConfiguration.cs. L33-L66. PowerShell/PowerShell on GitHub.
[Online]. Available: https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/engine/PSConf-
iguration.cs. [Accessed: Aug. 03, 2022].
¹⁶Microsoft. (2017, Sep. 21). Script rules in AppLocker. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/win-
dows/security/threat-protection/windows-defender-application-control/applocker/script-rules-in-applocker. [Accessed: Oct. 12, 2021].
¹⁷Microsoft. (2022, Apr. 25). Understand Windows Defender Application Control (WDAC) policy rules and file rules. Mi-
crosoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-
control/select-types-of-rules-to-create. [Accessed: Aug. 03, 2022].
16.4.1 Set-ExecutionPolicy
To change the execution policy for a scope, use the Set-ExecutionPolicy cmdlet:
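For example, a sketch of allowing locally created scripts for the current user (the policy and scope values are illustrative):

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser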
This cmdlet only adjusts the execution policy in the process, current user, or local machine
scope. You must use Group Policy to set the MachinePolicy or UserPolicy. As explained in
the Scope Precedence section, execution policies defined by Group Policy Objects override the
configuration changes made using the Set-ExecutionPolicy cmdlet.
16.4.2 Group Policy
The following example shows the registry key for Windows PowerShell's execution
policy. For PowerShell 7.0 and later, read ahead.
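A sketch of inspecting that key (the value names shown assume the 'Turn on Script Execution' Group Policy setting has been applied; the values themselves are illustrative):

Get-ItemProperty -Path 'HKLM:\Software\Policies\Microsoft\Windows\PowerShell' |
    Select-Object EnableScripts, ExecutionPolicy

EnableScripts ExecutionPolicy
------------- ---------------
            1 RemoteSigned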
The registry value in the example correlates with the settings from the Windows PowerShell
Group Policy template:¹⁸
¹⁸Microsoft. (2022, Mar. 18). About Group Policy Settings (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_group_policy_settings. [Accessed: Aug. 03,
2022].
Computer Configuration\
Administrative Templates\
Windows Components\
Windows PowerShell
The same registry path and Group Policy configuration path can be used under HKEY_CURRENT_-
USER and User Configuration, respectively.
It’s also possible to manage the execution policy on machines using only registry changes,
however this isn’t advisable. Settings defined in Group Policy override registry keys if they
conflict. The registry keys themselves, on the other hand, can be changed by anyone with write
access to the registry. If the registry keys are changed, the changes persist until the next Group
Policy refresh. By default, Group Policy is refreshed on a computer every 90 minutes, with a
random offset of 30 minutes. This method can be useful if an administrator needs to make the
execution policy more lax temporarily, but it lacks utility outside of ad hoc cases.
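For instance, a sketch of temporarily relaxing the policy by writing the policy key directly (this assumes the key already exists from a previous policy application and that you have write access; the value is rewritten at the next Group Policy refresh):

Set-ItemProperty -Path 'HKLM:\Software\Policies\Microsoft\Windows\PowerShell' `
    -Name 'ExecutionPolicy' -Value 'Bypass'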
PowerShell 7.0 and later uses different Group Policy settings and registry keys.¹⁹ It comes with
additional Group Policy templates that you can install by running the following script from the
PowerShell installation directory:
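A sketch of that installation, assuming the InstallPSCorePolicyDefinitions.ps1 script bundled with current PowerShell 7 releases (check your own $PSHOME for the exact file name and parameters) and an elevated session:

Set-Location $PSHOME
Get-Help .\InstallPSCorePolicyDefinitions.ps1   # review parameters first
.\InstallPSCorePolicyDefinitions.ps1

Once the templates are installed, the PowerShell settings appear in the Group Policy editor under the following paths: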
Computer Configuration\
Administrative Templates\
PowerShell Core
User Configuration\
Administrative Templates\
PowerShell Core
These contain the same configuration options as with Windows PowerShell, and set values in
the following registry keys:
HKLM\SOFTWARE\Policies\Microsoft\PowerShellCore
HKCU\SOFTWARE\Policies\Microsoft\PowerShellCore
In the above locations, HKLM is the HKEY_LOCAL_MACHINE hive and HKCU is the HKEY_CURRENT_-
USER hive.
¹⁹Microsoft. (2022, Mar. 18). About Group Policy Settings (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_group_policy_settings. [Accessed: Aug. 03,
2022].
16.4.3 AppLocker
For a more robust defense against threat actors, you can use AppLocker to restrict a user’s ability
to execute scripts. Using AppLocker, you can prevent scripts from running based on a wide array
of criteria, including (but not limited to):²⁰
It’s a good idea to leave the default rules in place, as not all .ps1 files shipped with Windows
are signed. The AppLocker service, AppIDSvc, doesn’t run by default. You have to start it on
each computer you want to protect, and sometimes you need to sign out and back in before
AppLocker rules will apply to a session. You can use Group Policy Preferences or Group Policy
Policies inside of a GPO to set “Automatic” as the service start type.
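If you prefer to stage the service from PowerShell rather than Group Policy, a sketch run from an elevated session might look like the following. Note that on recent Windows builds AppIDSvc is a protected service, so changing its start type this way can fail and Group Policy remains the reliable route:

Set-Service -Name 'AppIDSvc' -StartupType Automatic   # may be blocked on newer builds
Start-Service -Name 'AppIDSvc'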
A basic use case for AppLocker and Execution policies can be illustrated using the following
example: you want to prevent all users from running scripts that aren’t signed by Microsoft.
Note: to specify a trusted or untrusted publisher, you must have a .ps1 file signed by the publisher
to use as a reference file.
To do this, you would create a GPO with the following settings:
• Computer Policies\Windows Settings\Security Settings\
Application Control Policies\AppLocker\Script Rules
This configuration would set the MachinePolicy on computers affected by the GPO to AllSigned,
which would prevent running any scripts unless they're signed by a trusted publisher.
You can read more about AppLocker in the Constrained Language Mode chapter.
WDAC policies apply to the computer as a whole, and unlike AppLocker, there’s no option
to apply a policy only to certain users. WDAC is also currently being actively developed by
Microsoft, while AppLocker only receives security fixes. Microsoft recommends using WDAC
policies rather than AppLocker if possible, but this decision depends highly on your environment.
It’s possible to deploy both, and use AppLocker as a complement to WDAC. For instance, using
WDAC for machine baselines while deploying AppLocker policies for more granular control on
a user level.
You can enable code integrity rule options using the Set-RuleOption cmdlet. For example,
to restrict both kernel-mode and user-mode binaries, you can use -Option 0, also known as
Enabled:UMCI.²²
To enable PowerShell script enforcement, use -Option 11 with the -Delete parameter, also
known as Disabled: Script Enforcement.
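A sketch of both operations against a hypothetical policy file (the path is illustrative):

# Option 0 = Enabled:UMCI (user-mode code integrity)
Set-RuleOption -FilePath 'C:\WDAC\MyPolicy.xml' -Option 0

# Deleting option 11 (Disabled:Script Enforcement) turns script enforcement on
Set-RuleOption -FilePath 'C:\WDAC\MyPolicy.xml' -Option 11 -Delete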
A full list of WDAC policy rule options is available on Microsoft Docs²³. You can read more about
WDAC in the Constrained Language Mode chapter.
³¹https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/applocker/
applocker-overview
³²https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/wdac-and-
applocker-overview
³³https://www.netspi.com/blog/technical/network-penetration-testing/15-ways-to-bypass-the-powershell-execution-policy/
17. Constrained Language Mode
Constrained Language Mode (CLM) is a PowerShell Language Mode used by the PowerShell
Remoting Session Configuration and the PowerShell Default (console) Runspace. Constrained
Mode provides an out-of-the-box solution to console-based security by restricting the scope of
non-approved scripts, reducing the risk of malicious code execution within a console session.
This chapter explores several topics:
Finally, the chapter describes how to implement CLM with AppLocker and Windows Defender
Application Control (WDAC).
17.1 In Depth
What is Constrained Language Mode?
First introduced in PowerShell 3.0, Constrained Language Mode was initially a mechanism for
Windows Defender Application Control (WDAC), formerly User-mode Code Integrity (UMCI),
to manage PowerShell’s default Runspace (console) on Windows 8.1 RT. Following Windows 8.1,
subsequent PowerShell versions included CLM as a means to provide console-based security.
WDAC and AppLocker sit at the top of the security model for the Windows shell within Microsoft's
Trusted Boot process,¹ and Windows uses them to secure that top-level shell. Once a system
enforces any WDAC or AppLocker script rules, the PowerShell console session (including PowerShell
Core) starts in Constrained Language Mode.
• FullLanguage: The default mode. This mode permits all language elements.
• ConstrainedLanguage: Allows all language elements but limits permitted types.
• RestrictedLanguage: Restricted use of language elements and limited variable usage.
• NoLanguage: No language elements are available.
FullLanguage
The LanguageMode property is settable only in the FullLanguage Language Mode. You can use
this to test scripts for potential issues that may arise. The example below demonstrates changing
the LanguageMode from FullLanguage to ConstrainedLanguage, and back again. Note that
the console throws an error when attempting to reset the property.
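A sketch of that round trip, run in a console session you can afford to discard:

$ExecutionContext.SessionState.LanguageMode
$ExecutionContext.SessionState.LanguageMode = 'ConstrainedLanguage'
$ExecutionContext.SessionState.LanguageMode
# Attempting to return to FullLanguage now throws, because the property
# can only be set while the session is still in FullLanguage:
$ExecutionContext.SessionState.LanguageMode = 'FullLanguage'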
FullLanguage
ConstrainedLanguage
You should only change the Language Mode through the session state for debugging or testing, since it isn't an enforcement mechanism.
Constrained Language Mode limits the use of types within the session. It permits the following
types:
Users can get or set allowed properties, invoke methods, and convert objects to the type.
Constrained Language Mode permits the following Component Object Model (COM) objects:
• Scripting.Dictionary
• Scripting.FileSystemObject
• VBScript.RegExp
As with types, users can get properties but can only set properties on core types.
It’s also important to remember that the class statement isn’t available in Constrained Language
Mode.
Native Win32 executables (such as ping.exe, net.exe, gpupdate.exe) will continue to function as
expected. However, they’re subject to AppLocker/WDAC Application Policies (if implemented).
Allowed types are accessible in a static class named CoreTypes in the System.Management.Automation engine source (TypeResolver.cs).² The class is marked internal,³ so it can't be referenced directly from a PowerShell session.
ConstrainedLanguage
17.1.2.2 Modules
When importing PowerShell scripts or modules, cmdlets will inherit the parent Language Mode.
For example:
²Microsoft. (2021, Jan. 08). System.Management.Automation - TypeResolver.cs. L706. [Online]. Available: https://github.com/
PowerShell/PowerShell/blob/master/src/System.Management.Automation/engine/parser/TypeResolver.cs. [Accessed: Oct. 12, 2021].
³Microsoft. (2015, Jul. 20). internal - C# Reference. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/dotnet/
csharp/language-reference/keywords/internal. [Accessed: Oct. 12, 2021].
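For illustration, a minimal sketch of a module whose function fails once it's imported and invoked in a constrained session; the module name, path, and body are hypothetical, chosen only to match the error output shown below:

# Test.psm1
function Get-RemoteFile {
    param($Uri, $Path)
    [System.Net.WebClient]::new().DownloadFile(
        $Uri, $Path)
}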
InvalidOperation: C:\Windows\System32\WindowsPowerShell\v1.0\Modules\
Test.psm1:4
Line |
4 | [System.Net.WebClient]::new().DownloadFile(
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Cannot create type. Only core types are supported in this language mode.
1 # Execution
2 Import-Module "SamplePowerShellModule.psd1"
3
4 Get-Something -Name 'Test'
Test Object
-----------
Test
Constrained Language Mode permits Windows PowerShell script workflows, but not XAML-based
workflows (invoked using Invoke-Expression -Language XAML). It permits nested workflows but not
calling other workflows. Because of these language limitations, implementing workflows under
Constrained Language Mode is considered bad practice.
17.1.2.4 Scripts
17.1.2.5 New-Object
Constrained Language Mode permits object instantiation but limits it to allowed types.
You can find Language Mode functionality embedded directly into the source code for New-
Object.⁵
Example 6: Source code for the New-Object cmdlet showing Language Mode recognition
1 //protected override void BeginProcessing()
2
3 if (Context.LanguageMode == PSLanguageMode.ConstrainedLanguage)
4 {
5 if (!CoreTypes.Contains(type))
6 {
7 ThrowTerminatingError(...
8 }
9 }
if (!isAllowed)
{
ThrowTerminatingError(...
return;
}
}
The example below demonstrates the attempted creation of a non-approved type, resulting in an
exception:
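A sketch of the failure (any non-core type produces the same class of error; output abbreviated):

New-Object -TypeName System.Net.WebClient

New-Object : Cannot create type. Only core types are supported
in this language mode.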
⁴Microsoft. (2017, Sep. 21). Script rules in AppLocker. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/win-
dows/security/threat-protection/windows-defender-application-control/applocker/script-rules-in-applocker. [Accessed: Oct. 12, 2021].
⁵Microsoft. (2021, Jan. 08). Microsoft.PowerShell.Commands.Utility - New-Object.cs. PowerShell/PowerShell on GitHub. [On-
line]. Available: https://github.com/PowerShell/PowerShell/blob/master/src/Microsoft.PowerShell.Commands.Utility/commands/utility/
New-Object.cs. [Accessed: Oct. 12, 2021].
The [Object]::New() method is also allowed for approved type names only.
17.1.2.6 Add-Type
Add-Type can load signed C# assemblies in Constrained Language Mode. It doesn’t permit
unsigned assemblies or Win32 APIs, however.
It’s important to remember that type limitations still exist, and CLM will prevent the loading of
signed DLLs if they define new types.
Constrained Language Mode permits type and string conversion only when the converted type is
an Allowed Type. In the example below, you can see type conversion from a string to DateTime
working in Constrained Language Mode:
True
False
False
True
In the following example, you can see an instance of a type conversion from string to fake,
resulting in an error.
String
• Malware can exist within the constraints of Constrained Language Mode by limiting its
usage to cmdlets. In this instance, it’s possible to use Invoke-WebRequest to download
malware over HTTP and use New-ScheduledTask and Register-ScheduledTask to
initialize the code. Therefore, it’s vital that you also deploy Application Control.
• Constrained Language Mode is only active in the PowerShell Process. Added PowerShell
Assemblies won’t load in Constrained Language Mode.
• You can hypothetically bypass Constrained Language Mode on Administrative accounts.
Restrict non-essential administrative access to only privileged accounts.
17.3.1 GetLockdownPolicy()
GetLockdownPolicy() makes up the primary segment of the code, testing the WLDP and
AppLocker policies. It tests them in the order described in the following subsections:
17.3.2 GetWldpPolicy()
This method calls and caches the result from the WldpGetLockdownPolicy() function (exported by
wldp.dll).¹¹ WDAC uses WLDP to define the configuration state.
17.3.3 GetAppLockerPolicy()
GetAppLockerPolicy() evaluates the currently implemented system and user AppLocker
Policies. It achieves this by:
1. Creating an AppLocker test file name (.psd1 & .psm1) inside the user’s %Temp% directory.
If the %Temp% directory isn’t accessible, it will try \AppData\LocalLow\Temp.
2. Returning an enforcement policy to the caller.
17.3.4 GetDebugLockdownPolicy()
GetDebugLockdownPolicy() is an override setting for debugging purposes.
It consists of:
• Loose file matching. The system trusts all scripts that reside inside of a directory named
‘System32’.
• The __PSLockdownPolicy environment variable. The value of this variable affects the
Language Mode.
This applies to all scripts that reside in any directory named ‘System32’. The Contains()
method allows for matching in any parent or subdirectory.
GetDebugLockdownPolicy(string path) uses this variable for debugging, and you shouldn't
use it in production. Setting __PSLockdownPolicy to 0 won't override existing enforced WDAC
or AppLocker policies.
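For completeness, a sketch of how the variable is typically set when experimenting with lockdown behavior in a lab. Machine scope requires administrative rights, a new PowerShell process is needed for the change to take effect, and the value 4 is the commonly documented 'enforce' setting:

[Environment]::SetEnvironmentVariable('__PSLockdownPolicy', '4', 'Machine')
# In a new session:
$ExecutionContext.SessionState.LanguageMode   # ConstrainedLanguage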
The policy settings are:¹² ¹³
¹¹Microsoft. (2018, May. 31). WldpGetLockdownPolicy function. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/windows/win32/devnotes/wlpdgetlockdownpolicy. [Accessed: Oct. 12, 2021].
¹²Microsoft. (2021, Jan. 10). System.Management.Automation - wldpNativeMethods.cs. L494-504. [Online]. Available: https://
github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/security/wldpNativeMethods.cs. [Accessed: Oct.
12, 2021].
¹³Microsoft. (2021, Jan. 10). System.Management.Automation - wldpNativeMethods.cs. L438-454. [Online]. Available: https://
github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/security/wldpNativeMethods.cs. [Accessed: Oct.
12, 2021].
17.4.1 Introduction
You can manage AppLocker with Group Policy at the following path:
Computer Configuration\
Windows Settings\
Security Settings\
Application Control Policies\
AppLocker
AppLocker provides the following rule collections:
1. Executable rules: These control which executables (*.exe, *.com) can and can’t run.¹⁴
¹⁴Microsoft. (2017, Sep. 21). Executable rules in AppLocker. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/windows/security/threat-protection/windows-defender-application-control/applocker/executable-rules-in-applocker. [Accessed: Oct.
12, 2021].
2. Windows Installer rules: These control permitted installation packages (*.msi, *.msp,
*.mst).¹⁵
3. Script rules: These define permitted script formats (*.ps1, *.bat, *.cmd, *.vbs,
*.js) and files. PowerShell Module files (*.psm1) and PowerShell Module Manifest files
(*.psd1) remain unaffected.¹⁶
4. DLL rules: These control locations from which modules and libraries (*.dll, *.ocx) can
run and by which users. These rules may affect system stability and performance.¹⁷
5. Packaged app rules: These control which Universal Windows Platform (UWP) apps and
installers can run.¹⁸
You must also configure and enforce the rules by enabling each definition type within the
AppLocker Properties. You can find the DLL rules under the Advanced tab. AppLocker uses an
allowlist approach, enforcing all definition types. AppLocker supports policy ‘Auditing’ to test
policy definitions before enforcement, reducing the impact on users.
If you are considering implementing AppLocker within your organization, please review
the AppLocker Design Guide.¹⁹
AppLocker rules have three primary conditions used for evaluation, listed here from most to least
preferred:
1. (Recommended) Publisher: Code-signed scripts are the recommended option, as this approach
offers the most flexibility and security.
2. File hash: File hashes prevent code modification but require a policy update whenever the
file changes.
3. (Not recommended) File path: Rules apply based on the file or directory path or name. This
approach doesn't ensure the integrity of the file, since there's no mechanism to verify the
file's identity.
After making any changes to AppLocker, you should refresh Group Policy by running
gpupdate /force.
¹⁵Microsoft. (2017, Sep. 21). Windows Installer rules in AppLocker. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/windows/security/threat-protection/windows-defender-application-control/applocker/windows-installer-rules-in-applocker.
[Accessed: Oct. 12, 2021].
¹⁶Microsoft. (2017, Sep. 21). Script rules in AppLocker. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/win-
dows/security/threat-protection/windows-defender-application-control/applocker/script-rules-in-applocker. [Accessed: Oct. 12, 2021].
¹⁷Microsoft. (2017, Sep. 21). DLL rules in AppLocker. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/win-
dows/security/threat-protection/windows-defender-application-control/applocker/dll-rules-in-applocker. [Accessed: Oct. 12, 2021].
¹⁸Microsoft. (2017, Oct. 10). Packaged apps and packaged app installer rules in AppLocker. Microsoft Docs. [Online]. Avail-
able: https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/applocker/packaged-
apps-and-packaged-app-installer-rules-in-applocker. [Accessed: Oct. 12, 2021].
¹⁹Microsoft. (2017, Sep. 21). AppLocker design guide. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/win-
dows/security/threat-protection/windows-defender-application-control/applocker/applocker-policies-design-guide. [Accessed: Oct. 12,
2021].
Before creating and configuring rules, you must enable the 'Application Identity' Windows
service (AppIDSvc) through Group Policy, under the following path:
Computer Configuration\
Windows Settings\
Security Settings\
System Services\
Application Identity
To enable the service with PowerShell, start a session as an Administrator and enter
Start-Service -Name 'AppIDSvc'.
AppLocker provides several tools to aid in usability. When enabling AppLocker for the first time,
allow it to create the default rules.
On the right-hand side of the window, you can see that AppLocker has created and enabled the
default script rules. The default rules are:
1. Allow All Users to run Scripts in the "Program Files" directory, defined by the
%PROGRAMFILES%\* PATH variable.
2. Allow All Users to run Scripts in the “Windows” directory, defined by the %WINDIR%\*
PATH variable.
You should remove these rules once you have your custom rules defined to prevent them
from becoming a “catch-all”.
An alternative is to generate the rules automatically. This process scans directories and
creates rules from the analyzed files within them. The wizard preferentially applies the rules in
this order: Publisher rules, File Hash rules, then File Path rules.
Creating custom rules is the preferred option, as it provides granular control over the target
environment.
1. Under Reference file, Select Browse… and locate the *.ps1 file.
2. You can assign different security levels, ranging from least restrictive (top) to most
restrictive (bottom). Each level includes the requirements from the previous one. These are:
• (Lowest) Publisher: Requires the issuer of the file’s certificate to adhere to the current
value.
• Product name: Requires the signed file’s product name to adhere to the current value.
• File name: Requires the signed file’s file description to adhere to the current value.
• (Highest) File version: Requires the signed file’s version to adhere to the current value,
and optionally higher or lower.
You can customize the fields further by selecting Use custom values.
Drag the slider to the desired level. It’s recommended to use the Product Name for an
internal self-signed certificate.
For PowerShell scripts, AppLocker populates only the Publisher field from the reference
file.
1. Select Browse Files… or Browse Folders… and select the file/directory you wish to add.
2. Select Next >.
3. Add any exceptions to the rule. Exceptions follow the AppLocker conditions (File, Publisher,
Hash).
4. Enter a Name: and (optionally) Description and Select Create.
1. Select Browse Files… or Browse Folders… and select the file/directory you wish to add. If you
select a directory, the wizard enumerates all *.ps1 files within it. Note: This isn't a
recursive search.
2. Enter a Name: and (optionally) Description and Select Create.
Before policy enforcement, you can test changes within your organization without affecting
end-users.²⁰ AppLocker raises events in the Event Log under Application and Services
Logs\Microsoft\Windows\AppLocker. For more information on Event IDs, please refer to
Using Event Viewer with AppLocker²¹.
To enable policy Auditing:
Under the hood, WDAC functions much like AppLocker, but its policy enforcement isn't managed
through the AppLocker Group Policy settings; instead, PowerShell queries its state through WLDP.
AppLocker is limited to on-premises, domain-joined endpoints managed with Group Policy, whereas
WDAC is deployable to both cloud-managed and on-premises endpoints. WDAC supports deployment
through the following channels:
• Mobile Device Management (MDM) Solutions (such as Microsoft Intune). This solution only
supports Windows 10 Devices.
• Microsoft Endpoint Configuration Manager (MECM)
• Scripting Solution
• Group Policy
Similar to AppLocker, it's essential to enable policy auditing before enforcement. Otherwise,
end users may not be able to use apps or programs.
The following section focuses on deploying WDAC using Microsoft Intune. For more deployment
methodologies, please see the WDAC Deployment Guide²².
These settings will change over time as Microsoft adds or changes features.
17.6.1 Prerequisites
8. Select Create
16. Intune offers additional attribute-based filtering, which you can apply. Add any additional
rules as required.
17. Select Next
18. Select Create
If the policy type is set to enforce, the onboarded machine will apply the WDAC policy, and
PowerShell will start in Constrained Language Mode.
³¹https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/applocker/script-
rules-in-applocker
³²https://devblogs.microsoft.com/powershell/windows-powershell-2-0-deprecation/
³³https://techcommunity.microsoft.com/t5/iis-support-blog/windows-10-device-guard-and-credential-guard-demystified/ba-
p/376419
³⁴https://learn.microsoft.com/en-us/windows/security/threat-protection/device-guard/introduction-to-device-guard-virtualization-
based-security-and-windows-defender-application-control
³⁵https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/applocker/
applocker-policies-deployment-guide
³⁶https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/applocker/
configure-an-applocker-policy-for-audit-only
³⁷https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/applocker/using-
event-viewer-with-applocker
³⁸https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/windows-defender-
application-control-deployment-guide
18. Just Enough Administration
18.1 Introduction
Just Enough Administration (JEA) is the last security pillar of the PowerShell security ecosystem.
JEA is a configuration framework that enables IT to deploy and use PowerShell remoting
in secure environments. JEA utilizes PowerShell session configuration to define PowerShell
remoting runspaces. This chapter will explore the use of JEA to secure a PowerShell remoting
endpoint.
18.1.1 Requirements
This chapter assumes that you have a rudimentary understanding of Desired State Configuration
(DSC). Please refer to the Infrastructure as Code chapter, which describes DSC.
Both Windows PowerShell and PowerShell (Core) running on Windows support JEA.
This chapter refers to Windows PowerShell versions through 5.1 as Windows PowerShell
and versions of the cross-platform edition (formerly PowerShell Core) beginning with 6.0 as
PowerShell (Core).
• Language Mode.
• Execution Policy.
• Preloaded or defined aliases, assemblies, functions, or modules.
• Cmdlet allowlisting.
• Cmdlet denylisting.
• Runtime accounts.
• Logging.
• User drives.
• Versioning.
• Granular control over parameters and values.
You can find the PowerShell session configuration cmdlets in the Microsoft.PowerShell.Core
module.³ They include:
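A quick way to enumerate them on your own system (output varies by PowerShell version):

Get-Command -Module Microsoft.PowerShell.Core `
    -Name '*PSSessionConfiguration*', 'Get-PSSessionCapability'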
This chapter explores creating, registering, updating, and auditing WSMan plugins.
³Microsoft. (2020, Nov. 17). Microsoft.PowerShell.Core. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/pow-
ershell/module/microsoft.powershell.core/. [Accessed: Oct. 14, 2021].
# UserName: Michael.Zanatta
# Password: 123Password
#
# Header
Authorization = Basic TWljaGFlbC5aYW5hdHRhOjEyM1Bhc3N3b3Jk
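The header value is simply the Base64 encoding of 'username:password'. A sketch of reproducing it, using the illustrative credentials from the example above:

$pair  = 'Michael.Zanatta:123Password'
$bytes = [System.Text.Encoding]::UTF8.GetBytes($pair)
[System.Convert]::ToBase64String($bytes)   # TWljaGFlbC5aYW5hdHRhOjEyM1Bhc3N3b3Jk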
During authentication, IWA primarily uses Kerberos, with NTLM serving as a fallback. The
authentication type is selected as follows:
– Kerberos: Used if the client is in a Windows domain and the target endpoint isn't:
* An IPv4 or IPv6 address (192.168.1.1 or [2001:0db8:0000:0000:0000:ff00:0042:8329]).
* A loopback address (127.0.0.1 or [::1]).
* Addressed as localhost.
– NTLM: Used within a Windows domain when the endpoint is an IP address or loopback
address, and as a fallback when Kerberos isn't available.⁶
1. Validating whether the Certificate Authority is trusted, based on the Certificate Chain
contained within the certificate.
⁶Microsoft. (2021, Aug. 07). Security Considerations for PowerShell Remoting using WinRM. Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/scripting/learn/remoting/winrmsecurity. [Accessed: Oct. 28, 2021].
⁷Microsoft. (2020, Aug. 20). Security Support Providers (SSPs). Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/windows/win32/rpc/security-support-providers-ssps-. [Accessed: Oct. 29, 2021].
⁸Microsoft. (2021, Aug. 01). Microsoft Negotiate. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/win-
dows/win32/secauthn/microsoft-negotiate. [Accessed: Oct. 29, 2021].
⁹Network Working Group. (2006, Jun.). RFC4559: SPNEGO-based Kerberos and NTLM HTTP Authentication in Microsoft Windows.
RFC Editor. [Online]. Available: https://www.rfc-editor.org/rfc/rfc4559. [Accessed: Oct. 27, 2021].
¹⁰Microsoft. (2021, Aug. 01). Credential Security Support Provider. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-
us/windows/win32/secauthn/credential-security-support-provider. [Accessed: Oct. 29, 2021].
¹¹Jordan Borean. (2018, Jan. 24). Demystifying WinRM. Blogging for Logging. [Online]. Available: https://www.bloggingforlogging
.com/2018/01/24/demystifying-winrm/. [Accessed: Oct. 28, 2021].
Once the PowerShell session is established, either PowerShell session-state encryption or
WinRM encryption is applied, depending on whether the WinRM protocol is HTTP or HTTPS.
If the protocol is HTTP, message-level encryption based on the authentication protocol is
applied. For HTTPS, TLS encrypts the HTTP transport. The following table describes the type of
encryption applied for each authentication type over WinRM HTTP:
¹²Microsoft. (2020, Aug. 20). What is mutual TLS (mTLS)?. CloudFlare. [Online]. Available: https://www.cloudflare.com/en-us/learn-
ing/access-management/what-is-mutual-tls/. [Accessed: Nov. 01, 2021].
Within the example, role capability files exist in the RoleCapabilities subdirectory of the
PowerShell module ModuleName.
¹³https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/new-psrolecapabilityfile
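A sketch of creating a PSRC file in that layout (the module name, path, and visible cmdlets are illustrative):

$path = 'C:\Program Files\WindowsPowerShell\Modules\ModuleName\RoleCapabilities'
New-Item -Path $path -ItemType Directory -Force | Out-Null
New-PSRoleCapabilityFile -Path "$path\ServiceMaintenance.psrc" `
    -VisibleCmdlets 'Get-Service', 'Restart-Service'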
• RoleCapabilities: These are the enumerated PSRC file names from preloaded PowerShell
modules.
• RoleCapabilityFiles: Any other PSRC files. PowerShell (Core) doesn’t require you to couple
PSRC files to a module, making it applicable to configuration outside of a module.
• Custom: You can define custom PSRC configurations associated with a group using
parameters instead of a PSRC file or loaded path. Using the cmdlet parameters reduces
management complexity at the cost of scalability. Values can be:¹⁴
1 [-ModulesToImport <Object[]>]
2 [-VisibleAliases <String[]>]
3 [-VisibleCmdlets <Object[]>]
4 [-VisibleFunctions <Object[]>]
5 [-VisibleExternalCommands <String[]>]
6 [-VisibleProviders <String[]>]
7 [-ScriptsToProcess <String[]>]
8 [-AliasDefinitions <IDictionary[]>]
9 [-FunctionDefinitions <IDictionary[]>]
10 [-VariableDefinitions <Object>]
11 [-EnvironmentVariables <IDictionary>]
12 [-TypesToProcess <String[]>]
13 [-FormatsToProcess <String[]>]
14 [-AssembliesToLoad <String[]>]
In the following example, the configuration references two PSRC files using the RoleCapabil-
ityFiles key:
¹⁴Microsoft. (2021, Sep. 27). New-PSSessionConfigurationFile (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Avail-
able: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/new-pssessionconfigurationfile. [Accessed: Oct. 14,
2021].
It’s important to remember that the JeaRoleCapabilities resource must execute before the
creation/registration of the session configuration. Otherwise, the DSC configuration will fail.
Please read the JEA Role Capabilities documentation¹⁶. This covers in-depth architec-
ture/design and implementation.
You can also use Find-RoleCapability to search registered repositories for PowerShell
role capabilities.
Please see Enable Azure Automation State Configuration¹⁷ for more details on DSC onboarding
for Azure Automation.
¹⁶https://learn.microsoft.com/en-us/powershell/scripting/learn/remoting/jea/role-capabilities
¹⁷https://learn.microsoft.com/en-us/azure/automation/automation-dsc-onboarding
Example 6: The DSC server configuration used for the examples in this chapter
1 # ServerConfigurationData.psd1
2 @{
3
4 AllNodes = @(
5 @{
6 NodeName = "DC01"
7 Role = "DomainController"
8 PSRemotingEnabled = $true
9 PSRemotingConfigurationType = @(
10 "HelpdeskResetPassword",
11 "DNSManagement"
12 )
13 },
14 @{
15 NodeName = "FS01"
16 Role = "FileServer"
17 PSRemotingEnabled = $false
18 },
19 @{
20 NodeName = "HRBW"
21 Role = "HybridRunbookWorker"
22 PSRemotingEnabled = $true
23 PSRemotingConfigurationType = "NoRestrictions"
24 }
25 )
26 }
1. Ansible/Chef/Desired State Configuration (DSC): This chapter uses DSC for examples.
2. Group Policy: You can use Group Policy to enable or disable PowerShell remoting.
3. SCCM/Intune: Another good option for deploying PowerShell remoting to machines, typically
incorporated into the machine deployment script.
4. Azure ARM/Terraform/CloudFormation/Puppet: Many tools allow you to deploy the PowerShell
remoting configuration using Infrastructure as Code (IaC).
You can use Enable-PSRemoting -Force to enable PowerShell remoting from the console.
However, these examples use DSC.
As an alternative, this chapter demonstrates the JeaDsc resource, which only requires
JeaSessionConfiguration to create and register PowerShell session configurations.
JeaDsc is a Microsoft DSC resource that allows you to deploy session configurations on multiple
machines. The following are prerequisites for JeaDsc.
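At a minimum, the module must be available on the authoring machine and on the target nodes. A sketch of pulling it from the PowerShell Gallery (gallery access is assumed):

Install-Module -Name JeaDsc -Scope AllUsers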
Once the JeaDsc configuration is ready, you can apply it to endpoints using the JeaRoleCapabil-
ities and JeaSessionConfiguration resources. For more information on JeaRoleCapabilities,
please review the Implementing Windows PowerShell Role Capabilities in the Console section.
The JeaSessionConfiguration resource is similar to New-PSSessionConfigurationFile.
However, some properties (for example, RoleDefinitions) require a [Hashtable] wrapped as
a [String].
That is:
RoleDefinitions = "@{RoleDefinitions}"
In the following scenario, management has decided that all helpdesk staff must be able to restart
services and stop processes. Below is an example of the JeaDsc PowerShell session configuration
for this:
¹⁸https://learn.microsoft.com/en-us/azure/automation/tutorial-configure-servers-desired-state
Example 8: DSC session configuration enabling two capabilities for helpdesk staff
1 Configuration JEAMaintenance
2 {
3 Import-DscResource -Module JeaDsc
4
5 # Apply the session configuration to only the machines
6 # that have PSRemoting Enabled
7 Node $AllNodes.Where{$_.PSRemotingEnabled}.NodeName {
8
9 # Define the first resource
10 JeaRoleCapabilities ServiceMaintenanceCapability {
11
12 Path = "C:\Program Files\WindowsPowerShell\Modules\" +
13 "Demo\RoleCapabilities\ServiceMaintenance.psrc"
14 VisibleCmdlets = "Restart-Service", "Get-Service"
15 Description = "This role enables users to get/restart any service"
16 # Author/CompanyName/GUID/Copyright are not supported.
17
18 }
19
20 # Define the second resource
21 JeaRoleCapabilities ProcessMaintenanceCapability {
22
23 Path = "C:\Program Files\WindowsPowerShell\Modules\" +
24 "Demo\RoleCapabilities\ProcessMaintenance.psrc"
25 VisibleCmdlets = "Get-Process", "Stop-Process"
26 Description = "This role enables users to get/stop processes"
27 # Author/CompanyName/GUID/Copyright are not supported.
28
29 }
30
31 JeaSessionConfiguration HelpDeskManagementEndpoint
32 {
33 Name = 'JEAMaintenance'
34 RunAsVirtualAccount = $true
35 Ensure = 'Present'
36 DependsOn =
37 '[JeaRoleCapabilities]ServiceMaintenanceCapability',
38 '[JeaRoleCapabilities]ProcessMaintenanceCapability'
39 RoleDefinitions = "@{
40 'Contoso\ServiceMaintenanceCapability' = @{ RoleCapabilities =
41 'ServiceMaintenanceCapability'}
42 'Contoso\ProcessMaintenanceCapability' = @{ RoleCapabilities =
43 'ProcessMaintenanceCapability'}
44 }
45 "
46 TranscriptDirectory = 'C:\Temp\Transcripts'
47 }
48
49 }
50
51 }
Changes to the PowerShell session configuration will cause the WinRM service to restart,
disconnecting any open PowerShell sessions.
The DSC module functions as a wrapper for the PowerShell session configuration, so most parameters
of the New-PSSessionConfigurationFile cmdlet map to properties of the resource. JeaDsc has the
following properties:
• [String] RoleDefinitions: Defines the role definition map for the endpoint. Requires a
[Hashtable] wrapped as a [String]
Syntax:
1 RoleDefinitions = @'
2 @{
3 DOMAIN\User|Group =
4 @{ RoleCapabilities = "Setting" }
5 }
6 '@
• [String] SessionType: Specifies the type of session that PowerShell should create.
Values can be:
– Empty: No modules added to the session. Use for creating custom sessions.
– Default: Adds Microsoft.PowerShell.Core. Includes Import-Module.
– RestrictedRemoteServer: Includes Exit-PSSession, Get-Command, Get-
FormatData, Get-Help, Measure-Object, Out-Default, and Select-Object.
Syntax:
¹⁹https://learn.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/getting-started-with-group-
managed-service-accounts
1 SessionType = 'RestrictedRemoteServer'
• [Bool] MountUserDrive: Configures sessions that use this configuration to expose the
User: PSDrive.
Syntax:
1 MountUserDrive = $true
1 UserDriveMaximumSize = 524288000
1 '
2 @{
3 And = "RequiredGroup1",
4 @{ Or = "OptionalGroup1", "OptionalGroup2" }
5 }
6 '
1 "
2 'CustomModule',
3 @{
4 ModuleName = 'CustomModuleName';
5 ModuleVersion = '1.0.0.0';
6 GUID = 'GUID'
7 }
8 "
1 FunctionDefinitions = "@{
2 Name = 'Do-Something';
3 ScriptBlock = {
4 param($MyInput)
5 $MyInput
6 }
7 }"
1 VariableDefinitions = "@{
2 Name = 'Variable1';
3 Value = { # Dynamic Value }
4 },
5 @{
6 Name = 'Variable2';
7 Value = 'Static'
8 }"
For more information on the JeaDsc Module, refer to the JeaDsc documentation²⁰.
• Use Active Directory groups to define Administrator permission scopes. Direct user assign-
ment to configuration adds complexity to the solution.
• When using Active Directory groups, describe the RoleDefinition, capability, and machine.
For instance, PSHRemoting-<ComputerName>-<Capability> becomes: PSHRemoting-
DC1-ServiceMaintenance. There are also limitations within the RoleDefinitions
parameter/property in that you can’t assign duplicate Active Directory groups to separate
capabilities. Always ensure that the capability name matches the Active Directory group.
• Implement Role-Based Access Control (RBAC) to prevent direct assignment to Active
Directory groups. Leveraging RBAC roles and groups ensures that user roles themselves
receive PowerShell remoting session configuration. The result is that the role governs user
access instead of a group. Owners can assign/remove access according to approvals.
• Simplify each role capability. Rather than bundling several functions into a single role
capability, it's better practice to break each role capability down to a single purpose. For
example, a helpdesk might require the ability to query/stop processes and query/stop/start
services on an application server. Breaking this down reveals two role capabilities:
1. Query/stop processes
2. Query/stop/start services
You can add these role capabilities to the PowerShell session configuration file:
15 'Contoso\PSHRemoting-APP1-ProcessMaintenance' = @{
16 RoleCapabilities = 'ProcessMaintenance'}
17 }
18 "
19 }
• Use a DSC Compiler to interpolate the configuration for each server. Interpolating DSC
configuration into each node reduces node management complexity and enables different
session configurations and role capabilities to be modular and reusable within different
configurations. Datum²¹ is a DSC Compiler that uses YAML configuration to link DSC
configurations.
• When defining the WSMan configuration, Kerberos is considered the best authentication
mechanism, providing mutual end-to-end authentication, as well as encryption. NTLM
is considered the second-best authentication method. However, it doesn’t support mutual
authentication.
• When defining a PowerShell session configuration, consider the security use cases for that
configuration and simplify against each use case.
• Enforce the use of the PowerShell Language Mode. Always select the most restrictive
language mode possible without compromising on usability. The language modes are (from
most to least restrictive):
²¹https://github.com/gaelcolas/datum
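The listing below is the kind of output Get-PSSessionConfiguration returns for the default Windows PowerShell 5.1 endpoints when run from an elevated session:

Get-PSSessionConfiguration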
Name : microsoft.powershell
PSVersion : 5.1
StartupScript :
RunAsUser :
Permission : NT AUTHORITY\NETWORK AccessDenied,
NT AUTHORITY\INTERACTIVE AccessAllowed, BUILTIN\Administrators
AccessAllowed, BUILTIN\Remote Management Users AccessAllowed
Name : microsoft.powershell.workflow
PSVersion : 5.1
StartupScript :
RunAsUser :
Permission : NT AUTHORITY\NETWORK AccessDenied,
BUILTIN\Administrators AccessAllowed, BUILTIN\Remote Management
Users AccessAllowed
Name : microsoft.powershell32
PSVersion : 5.1
StartupScript :
RunAsUser :
Permission : NT AUTHORITY\NETWORK AccessDenied,
NT AUTHORITY\INTERACTIVE AccessAllowed, BUILTIN\Administrators
AccessAllowed, BUILTIN\Remote Management Users AccessAllowed
You can view the entire object structure by piping the results into Format-List. The example
below shows the first registered session configuration and its properties.
For the sake of readability, this example excludes some object properties.
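A sketch of the command that produces this kind of listing:

Get-PSSessionConfiguration | Select-Object -First 1 | Format-List -Property *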
RunAsPassword :
Capability : {Shell}
PSVersion : 5.1
AutoRestart : false
ExactMatch : False
RunAsVirtualAccount : false
...
MaxShells : 2147483647
SupportsOptions : true
lang : en-US
MaxIdleTimeoutms : 2147483647
Enabled : true
...
Name : microsoft.powershell
XmlRenderingType : text
...
WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Plugin\
microsoft.powershell\InitializationParameters
ParamName ParamValue
--------- ----------
startupscript C:\TEMP\Startup.ps1
Startup!
[localhost]: PS C:\>
There are limitations to updating the existing session configurations, as some parameters aren’t
present within the cmdlet. See the parameter listing below:²²
1 Set-PSSessionConfiguration
2 [-Name] <String>
3 [-RunAsCredential <PSCredential>]
4 [-ThreadOptions <PSThreadOptions>]
5 [-AccessMode <PSSessionConfigurationAccessMode>]
6 [-UseSharedProcess]
7 [-StartupScript <String>]
8 [-MaximumReceivedDataSizePerCommandMB <Double>]
9 [-MaximumReceivedObjectSizeMB <Double>]
10 [-SecurityDescriptorSddl <String>]
11 [-ShowSecurityDescriptorUI]
12 [-Force]
13 [-NoServiceRestart]
14 [-TransportOption <PSTransportOption>]
15 -Path <String>
16 [-WhatIf]
17 [-Confirm]
18 [<CommonParameters>]
To change other properties, you should update the .pssc file generated by
New-PSSessionConfigurationFile and then use the -Path parameter with
Set-PSSessionConfiguration.
²²Microsoft. (2021, Sep. 27). Set-PSSessionConfiguration (Microsoft.PowerShell.Core). Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/set-pssessionconfiguration. [Accessed: Oct. 14, 2021].
When you change the PowerShell session configuration, you’ll see a warning that the
WinRM service will restart. This will disconnect any existing PowerShell remoting
sessions. To prevent the service restart, use the -NoServiceRestart parameter. To
suppress any confirmation of the service restart, use the -Force parameter.
At the time of writing, the -NoServiceRestart parameter doesn't guarantee that the
WinRM service won't restart. As a rule of thumb, assume that the service will restart.
18.5.1 Terms
• Security Descriptor Definition Language (SDDL): A string representation of a Security
Descriptor.
• Security Descriptor: A data structure of security information about a securable object,
including its owner, group, DACL, and SACL.
• Securable Object: Any object or resource, such as a file, process, or event, that can have a
Security Descriptor.
• Discretionary Access Control List (DACL): Identifies a list of trustees (access control
entries) that will be allowed or denied access to a securable object.
• System Access Control List (SACL): Identifies a list of trustees (access control entries) that
the system audits during a successful or failed access to a securable object.
• Trustee: A user account, group, or logon session to which an ACE applies.
• Access Control List (ACL): A list of ACEs that together define the access rights of a
securable object in a DACL or SACL.
• Access Control Entry (ACE): A single entry that represents specific access rights of a
securable object for a trustee.
• Security Identifier (SID): An immutable identifier that represents a trustee.
Example 16: Displaying formatted permissions for the JeaEndpoint session configuration
1 Get-PSSessionConfiguration
Name : JeaEndpoint
PSVersion : 5.1
StartupScript :
RunAsUser :
Permission : CONTOSO\ServiceMaintenanceCapability AccessAllowed
However, reviewing the permission configuration in the WSMan PSDrive reveals the SDDL
format:
Example 17: Equivalent raw SDDL permissions for the JeaEndpoint session configuration
1 WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\
2 Plugin\JEAMaintenance\Resources\Resource_417209259\
3 Security\Security_5898658
4
5 Key: Uri
6 Value: http://schemas.microsoft.com/powershell/JEAMaintenance
7
8 Key: Sddl
9 Value: O:NSG:BAD:P(D;;GA;;;NU)
10 (A;;GA;;;S-1-5-21-1769934282-1541694284-2448955753-1106)
11 (A;;GA;;;S-1-5-21-1769934282-1541694284-2448955753-1107)S:P(AU;FA;GA;;;WD)
12 (AU;SA;GXGW;;;WD)
13
14 Key: ExactMatch
15 Value: False
16
17 Key: xmlns
18 Value: http://schemas.microsoft.com/wbem/wsman/1/config/PluginConfiguration
19
20 Key: ParentResourceUri
21 Value: http://schemas.microsoft.com/powershell/JEAMaintenance
O:NSG:BAD:P(D;;GA;;;NU)(A;;GA;;;S-1-5-21-1769934282-1541694284-2448955753-1106)
(A;;GA;;;S-1-5-21-1769934282-1541694284-2448955753-1107)S:P(AU;FA;GA;;;WD)
(AU;SA;GXGW;;;WD)
# Owner (O:owner_sid)
O:NS
# DACL (D:dacl_flags(string_ace1)(string_ace2)...)
D:P(D;;GA;;;NU)(A;;GA;;;S-1-5-21-1769934282-1541694284-2448955753-1106)
(A;;GA;;;S-1-5-21-1769934282-1541694284-2448955753-1107)
# SACL (S:sacl_flags(string_ace1)(string_ace2)...)
S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)
1. O: The object’s owner: The owner of the securable object, as a trustee SID or well-known
SID string constant. SID string constants include:
2. G: The object’s primary group. Primary groups are a legacy property retained for
backward compatibility, and PowerShell session configuration doesn’t use them.
3. D: The DACL. Defined permissions that determine the access to the resource, with the
following syntax: dacl_flag(string_ace1)(string_ace2)....
• dacl_flag: Inheritance Control Flags that apply to the DACL. Flags can be:
– “P” : Block inheritance from containers that are higher in the hierarchy.
– “AR” : Allow inheritance.
– “AI” : Child objects inherit permissions.
4. S: The SACL. Audit permissions that determine auditing on the resource, using the
following syntax: sacl_flag(string_ace1)(string_ace2).... SACLs have the same
syntax and control flags as the dacl_flags.
For a complete list of trustee SID strings, see SID Strings²³. For a complete list of ACE
strings/flags, see ACE Strings²⁴.
• ACE Type: A value that defines the ACE type. Possible values are:
• ACE Flags: Controls the inheritance and auditing behavior.²⁵ Possible values are:
• Rights: Controls the standard, specific, and generic rights.²⁶ Possible values are:
You can combine multiple flags by appending them. For example, combining all the flags
from this table would produce GAGRRCSD.
• Account SID: The target SID.
• Object GUID: Not used within PowerShell session configuration.
• Inherit Object GUID: Not used within PowerShell session configuration.
• Resource Attribute: Not used within PowerShell session configuration.
²³https://learn.microsoft.com/en-us/windows/win32/secauthz/sid-strings
²⁴https://learn.microsoft.com/en-us/windows/win32/secauthz/ace-strings
²⁵Microsoft. (2021, Sep. 24). AceFlags Enum (System.Security.AccessControl). Microsoft Docs. [Online]. Available: https://learn
.microsoft.com/en-us/dotnet/api/system.security.accesscontrol.aceflags. [Accessed: Oct. 14, 2021].
²⁶Microsoft. (2021, Sep. 01). ACCESS_MASK (Winnt.h). Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/win-
dows/win32/secauthz/access-mask. [Accessed: Oct. 14, 2021].
O:NSG:BAD:P(D;;GA;;;NU)(A;;GA;;;S-1-5-21-1769934282-1541694284-2448955753-1106)
(A;;GA;;;S-1-5-21-1769934282-1541694284-2448955753-1107)S:P(AU;FA;GA;;;WD)
(AU;SA;GXGW;;;WD)
To make adjustments to existing SDDLs, parse the SDDL string into [RawSecurityDescriptor]:
1. Define the owner: Instantiate the [SecurityIdentifier] class with a Predefined Trustee
or a SID:
1 $Owner = [System.Security.Principal.SecurityIdentifier]::new("NS")
2. Define the primary group: Using the same class in the last step, create an object with the
predefined trustee or SID:
1 $PrimaryGroup = [System.Security.Principal.SecurityIdentifier]::new("BA")
3. Create the Discretionary ACL: Create an empty Discretionary ACL object (using
[RawAcl]):
1 $DiscretionaryACL = [System.Security.AccessControl.RawAcl]::new(0,0)
4. Create the ACEs for the DACL: Create an empty [ObjectAce] ACE object, with:
1. A qualifier: [AceQualifier]
2. A security identifier (a trustee): [SecurityIdentifier]
For example:
1 $EveroneAllowedACE = [System.Security.AccessControl.ObjectAce]::new(
2 [System.Security.AccessControl.AceFlags]::None,
3 # Define a Qualifier. In this case, we will allow Access
4 [System.Security.AccessControl.AceQualifier]::AccessAllowed,
5 1,
6 # Define a SecurityIdentifier:
7 # 'WD' is the shorthand version of Everyone
8 [System.Security.Principal.SecurityIdentifier]::new('WD'),
9 [System.Security.AccessControl.ObjectAceFlags]::None,
10 [System.Guid]::NewGuid(),
11 [System.Guid]::NewGuid(),
12 $false,
13 $null
14 )
5. Add the ACE to the DACL: Insert the ACE into the Discretionary ACL using the
InsertAce() method:
1 $index = 0
2 # Add to the ACL
3 $DiscretionaryACL.InsertAce($index, $EveryoneAllowedACE)
6. Create the System ACL: Create an empty System ACL object (using [RawAcl]):
1 $SystemACL = [System.Security.AccessControl.RawAcl]::new(0,0)
7. Create the ACEs for the SACL: Create an empty [ObjectAce] ACE object, with:
1. A flag (AceFlags) defining the audit type.
2. A qualifier: [AceQualifier]. The example will use the SystemAudit enum.
3. A security identifier (a trustee): [SecurityIdentifier].
For example:
1 # Create the audit ACE for the SystemACL
2 $EveryoneAuditACE = [System.Security.AccessControl.ObjectAce]::new(
3 # Define a Flag
4 [System.Security.AccessControl.AceFlags]::SuccessfulAccess,
5 # Define a Qualifier
6 [System.Security.AccessControl.AceQualifier]::SystemAudit,
7 1,
8 # Define a Security Identifier
9 # 'WD' is the shorthand version of Everyone
10 [System.Security.Principal.SecurityIdentifier]::new('WD'),
11 [System.Security.AccessControl.ObjectAceFlags]::None,
12 [System.Guid]::NewGuid(),
13 [System.Guid]::NewGuid(),
14 $false,
15 $null
16 )
8. Add the ACE to the SACL: Insert the audit ACE into the System ACL using the
InsertAce() method (shown in the full example below).
9. Define the control flags: Combine the [ControlFlags] values for the security descriptor.
For PowerShell remoting, include the Protected flags for both the System ACL and the
Discretionary ACL (shown in the full example below).
For a complete list of control flags, refer to the ControlFlags²⁷ .NET class documentation.
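To see the same list interactively instead of in the documentation, you can enumerate the flag
names straight from the .NET type:

# List every available control flag name
[System.Enum]::GetNames([System.Security.AccessControl.ControlFlags])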
10. Create the security descriptor and export it using the GetSddlForm() method.
Full example:
40
41 # Step 7: Create an ACE for SystemACL
42 $EveryoneAuditACE = [System.Security.AccessControl.ObjectAce]::new(
43 [System.Security.AccessControl.AceFlags]::SuccessfulAccess,
44 [System.Security.AccessControl.AceQualifier]::SystemAudit,
45 1,
46 # 'WD' is an SID string constant for 'Everyone'
47 [System.Security.Principal.SecurityIdentifier]::new('WD'),
48 [System.Security.AccessControl.ObjectAceFlags]::None,
49 [System.Guid]::NewGuid(),
50 [System.Guid]::NewGuid(),
51 $false,
52 $null
53 )
54
55 # Step 8: Add ACE into SystemACL
56 $index = 0
57
58 # Add to the ACL
59 $SystemACL.InsertAce($index, $EveryoneAuditACE)
60
61 # Step 9: Define the control flags for the security descriptor
62
63 # Within PowerShell remoting we will set the Protected flag ('P') for
64 # the SystemACL and DiscretionaryACL
65 $ControlFlags = [System.Security.AccessControl.ControlFlags]
66
67 $Flags = @(
68 $ControlFlags::DiscretionaryAclPresent
69 $ControlFlags::SystemAclPresent
70 $ControlFlags::DiscretionaryAclProtected
71 $ControlFlags::SystemAclProtected
72 $ControlFlags::SelfRelative
73 )
74
75 # Step 10: Create the security descriptor and
76 # convert the object to the SDDL format.
77 $SecurityDescriptor =
78 [System.Security.AccessControl.RawSecurityDescriptor]::new(
79 $Flags, $Owner, $PrimaryGroup, $SystemACL,$DiscretionaryACL
80 )
81 $SecurityDescriptor.GetSddlForm(
82 [System.Security.AccessControl.AccessControlSections]::All
83 )
O:NSG:BAD:P(OA;;CC;;;WD)S:P(OU;SA;CC;;;WD)
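The generated SDDL string can then be applied to the endpoint. A minimal sketch, assuming the
JEAMaintenance session configuration is already registered (note that Set-PSSessionConfiguration
restarts the WinRM service, which briefly disconnects existing sessions):

$NewSddl = $SecurityDescriptor.GetSddlForm(
    [System.Security.AccessControl.AccessControlSections]::All
)

# Apply the new security descriptor to the session configuration
$SetParams = @{
    Name                   = 'JEAMaintenance'
    SecurityDescriptorSddl = $NewSddl
    Force                  = $true
}
Set-PSSessionConfiguration @SetParams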
Example 21: Retrieving all available commands in the current PSRemoting session
1 $PSSessionCapabilityParams = @{
2 ConfigurationName = 'JEAMaintenance'
3 Username = 'CONTOSO\HelpdeskUser01'
4 }
5
6 Get-PSSessionCapability @PSSessionCapabilityParams
Permissions assigned directly to an Active Directory user may not be reflected in the capabilities
reported by Get-PSSessionCapability. Always delegate access to the PowerShell session or role
capability through Active Directory group permissions, preferably Active Directory role groups
following the RBAC group model.
This solution assists helpdesks and system administrators in troubleshooting issues with user
access. Another use case is to provide reporting and ongoing security testing. In the example
below, a Pester test validates the effective access of several users:
Example 22: Validating effective access for a session configuration with Pester
1 $Params = @(
2 @{
3 # User1 should have no access to the server
4 ADUserName = 'CONTOSO\User1'
5 ExpectedCmdlets = @()
6 }
7 @{
8 # User2 should only have Get-ChildItem
9 ADUserName = 'CONTOSO\User2'
10 ExpectedCmdlets = @('Get-ChildItem')
11 }
12 @{
13 # User3 should only have Get-Service and Stop-Service
14 ADUserName = 'CONTOSO\User3'
15 ExpectedCmdlets = @('Get-Service', 'Stop-Service')
16 }
17 )
18
19 Describe "Test PowerShell remoting effective access on $ENV:ComputerName" {
20
21 It "Returns <ExpectedCmdlets> (<ADUserName> on $ENV:ComputerName)" `
22 -ForEach $Params -Test {
23
24 $params = @{
25 ConfigurationName = 'JEAMaintenance'
26 UserName = $ADUserName
27 }
28
29 $session = Get-PSSessionCapability @params |
30 Where-Object CommandType -eq 'Cmdlet'
31
32 # Test that the commands returned remain correct.
33 # No more, no less.
34
35 ($session | Sort-Object -Property Name).Name |
36 Should -Be ($ExpectedCmdlets | Sort-Object)
37
38 }
39
40 }
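To run this validation on a schedule or as part of a pipeline, invoke the test file like any other
Pester script. A short usage sketch, assuming the test above is saved as
JeaEffectiveAccess.Tests.ps1 (a hypothetical file name):

Invoke-Pester -Path .\JeaEffectiveAccess.Tests.ps1 -Output Detailed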
Enabling the ‘Log script block invocation start/stop events’ setting will generate a large
number of events.
Use the Computer Configuration node of a Group Policy Object to enable logging on all machines
within the domain.
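These Group Policy settings map to registry values under the PowerShell policy keys, so they can
also be staged from the command line. A minimal sketch, assuming the GroupPolicy module is
available and a GPO named 'PowerShell Logging' already exists (the GPO name is hypothetical):

# Standard policy key for script block logging
$Key = 'HKLM\Software\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'

# Enable script block logging for every computer the GPO applies to
Set-GPRegistryValue -Name 'PowerShell Logging' -Key $Key `
    -ValueName 'EnableScriptBlockLogging' -Type DWord -Value 1

# Optional: also log invocation start/stop events (high volume, see the note above)
Set-GPRegistryValue -Name 'PowerShell Logging' -Key $Key `
    -ValueName 'EnableScriptBlockInvocationLogging' -Type DWord -Value 1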
You can consolidate PowerShell event logs and transcription logs by ingesting their contents into
Azure Sentinel. Azure Sentinel collects data through the Azure Log Analytics agent (formerly the
OMS agent), which you can configure for event log and custom log ingestion.²⁸ ²⁹
Be aware that Azure Sentinel logging isn’t free. A large number of events can result in
significant ingestion and storage charges.
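Event log collection can also be scripted against the Log Analytics workspace. The sketch below
is an illustration only: it assumes the Az.OperationalInsights module’s
New-AzOperationalInsightsWindowsEventDataSource cmdlet, plus hypothetical resource group and
workspace names.

# Collect the PowerShell Operational event log into the workspace
# (resource group and workspace names are placeholders)
$DataSourceParams = @{
    ResourceGroupName = 'rg-logging'
    WorkspaceName     = 'law-sentinel'
    Name              = 'PowerShellOperational'
    EventLogName      = 'Microsoft-Windows-PowerShell/Operational'
}
New-AzOperationalInsightsWindowsEventDataSource @DataSourceParams `
    -CollectErrors -CollectWarnings -CollectInformation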
²⁸Microsoft. (2021, Jun. 09). Collect Windows event log data sources with Log Analytics agent. Microsoft Docs. [Online]. Available:
https://learn.microsoft.com/en-us/azure/azure-monitor/agents/data-sources-windows-events. [Accessed: Oct. 14, 2021].
²⁹Microsoft. (2021, Aug. 10). Overview of Azure Monitor agents. Microsoft Docs. [Online]. Available: https://learn.microsoft.com/en-us/azure/azure-monitor/agents/agents-overview. [Accessed: Oct. 14, 2021].
• SECURITY_DESCRIPTOR_CONTROL—Microsoft Docs⁴²
• Authentication for Remote Connections—Microsoft Docs⁴³
• WinRM Security—Microsoft Docs⁴⁴
• Kerberos Key Distribution Center—Microsoft Docs⁴⁵
• Credential Security Support Provider—Microsoft Docs⁴⁶
⁴²https://learn.microsoft.com/en-us/windows-hardware/drivers/ifs/security-descriptor-control
⁴³https://learn.microsoft.com/en-us/windows/win32/winrm/authentication-for-remote-connections
⁴⁴https://learn.microsoft.com/en-us/powershell/scripting/learn/remoting/winrmsecurity
⁴⁵https://learn.microsoft.com/en-us/windows/win32/secauthn/key-distribution-center
⁴⁶https://learn.microsoft.com/en-us/windows/win32/secauthn/credential-security-support-provider
Afterword
By Bill Kindle
This book was written during rather interesting times in the world, and with that came unique
challenges.
But here we are.
Through all the difficulties and roadblocks, a new PowerShell automation book was born.
Thank you for your support on behalf of the editorial and author teams.
You have just read through a collective work of contributors from the greater PowerShell
community. Unlike The PowerShell Conference Book vol. 1-3, we aimed to build something
different. Many months were poured into these pages to bring you, the reader, a PowerShell book
worthy of use in an academic setting.
You may wonder, “What was the reason for the cover art?”
Amy Zanatta designed the cover art and had this to say:
“Initially, I was approached by the Senior Editors to design a cover art image. I used
the PowerShell hero image for my color selection, with textures, font, and photography
added to provide a technical and professional look and feel.”
And she did a bang-up job at that!
So what challenges did the team face?
We had several authors drop from the project, which put pressure on the senior editorial staff to
fill in the gaps. Michael Zanatta, Matt Corr, and Nicholas Bissell stepped up to write multiple chapters.
Thank you! However, considering the timeframe and lifespan of the project, we decided to drop
unfinished chapters to focus on releasing the book.
Leanpub continues to present challenges in how Markdown is rendered. As many on the team can
attest, life sometimes gets in the way. Illness, family obligations, and job changes from authors
and editors created delays. However, the resiliency of the editorial and author teams shined.
What’s next? A well-earned break! A second edition will be considered later, adding/updating
content.
On a personal note, PowerShell was a game changer for my career. It helped me become a better
sysadmin and was a gateway for beginning to write. It has been amazing to see so many people
come together worldwide to work on a project and see that project become a reality. I liked The
PowerShell Conference Book Volume 1 so much that I decided to help write a chapter for
Volume 2. I was then invited to become one of the editors for Volume 3. Michael Zanatta invited
me to join this project in the spirit of the first three volumes.
And here I am, standing on the shoulders of giants as an editor. I would have never thought back
in 2011, when I first started using PowerShell in my job, that I would be writing the afterword
for Modern IT Automation with PowerShell.
SYMBOLS
$_. See $PSItem Add-CATemplate 427
Add-Content 14, 22, 33, 162–163, 266–267
A Add-Type 216–217, 453–454
AAA approach 56–70 agile 38
Act 56, 63–64, 66–69 Amazon Web Services (AWS) 260, 276, 500–501,
503–504
Arrange 56, 63, 65–69 CloudFormation 276, 484
Assert 56, 63, 66–69 Ansible 193, 484
Access Control Entry (ACE) 479, 496–504, 508 AppendLine 122–123
Access Control List (ACL) 479, 496–499, 501–504 Approve-CertificateRequest 416
Active Directory Certificate Services (AD CS) approved verbs 161, 173
401–402, 414–415
Active Directory Domain Services (AD DS) design considerations 173
426–427
Active Directory Federation Services (AD FS) 401 $args
Add-AdCertificate 428 Assert-MockCalled 79
Add-AzVMNetworkInterface 284 Assert-VerifiableMock 79
Add-CAAuthorityInformationAccess 406, 409, attributes
422
Add-CACrlDistributionPoint 405–408, 421–422 [CmdletBinding()] 71, 73, 165–166, 320
attributes (cont’d) Azure DevOps (cont’d)
[ValidateCount()] 108, 167, 171 failTaskOnFailedTests 115–116
[ValidateLength()] 167, 170–172 mergeTestResults 115
[ValidateNotNull()] 167–168 RunPesterTests 112, 115
[ValidateNotNullOrEmpty()] 168 testResultsFiles 115
[ValidatePattern()] 319–320 testResultsFormat 115
[ValidateScript({})] 97, 135, 138, 168–169, 172,
319
[ValidateSet()] 58, 167, 169–170, 271 B
Authority Information Access (AIA) 403–409, base type 199, 236–237
419–423
AddToCertificateAia 408 big-endian 349
Azure 125, 275, 292–293 black box testing 125, 188
Azure Active Directory 428, 471 blueprint 277, 279, 291–292
Azure AD Connect 469 branch policy 40, 116
Azure Automation State Configuration build validation 116
483–484
Azure Bicep 276
Azure Cloud Shell 271–272, 278, 293 C
Azure DevOps 46, 54, 114–118, 124–125 case-sensitive operators 202–257
Azure Load Balancer 279, 281 Certificate Authority (CA) 388, 392, 401–430, 478
Azure Resource Manager (ARM) 275–276, 484 certificate containers 427–428
Azure SQL 278–279, 289 Certificate Revocation List (CRL) 388, 401–413,
419–423
Azure Storage Account 281, 289 certificate template 415, 424–427
Azure Subscription 278 CertificateDsc 428–429, 433
Azure VM 279 CertificateImport 428–429, 433
free trial 293 certificates
Resource Group 272, 280–281, 283 CACommonName 402, 414
Azure DevOps 46, 54, 114–118, 124–125 CAType 402, 414
failOnStderr 115 CryptoProviderName 402, 414
certificates (cont’d) collections
HashAlgorithmName 387, 402, 414 left-hand 156–157, 180, 234, 470
KeyLength 402–403, 414 many-to-one 235
private key 262, 387, 402 one-to-many 234
public key 386–387, 391–392, 400–402, 434 right-hand 156–157, 180, 235, 460
ValidityPeriod 402, 411, 422 Comma-Separated Values (CSV) 140, 181, 183–189,
197, 201, 266
ValidityPeriodUnits 402, 411, 423 comment-based help 42, 46, 48, 186
CertStoreLocation 391 commenting 183
certutil 410–412, 418, 422–423, 425 Common Language Infrastructure XML (CLIXML) 73,
95–96, 148, 188, 197–201, 475
change reviews 41 Compare-Object 162–163, 181
checksum. See hash sum Component Object Model (COM) 380, 449, 452
Clear-Host 505 concatenation 152
cloud computing 275 Configuration as Code (CaC) 275, 285–287, 289,
291–292
code regions 186 configuration drift 275, 285, 292
code reviews 38–54, 114, 186 configuration file 191, 263, 275–277, 438, 440–441, 476,
479, 486, 492, 500, 508
automated checks 41, 47 Connect-AzureAD 272
automated tests 114 Connect-CertificationAuthority 416
Communication 46–47 console-based security 447
face-to-face 47 Constrained Language Mode (CLM) 447–473
feedback 38–39, 43–44, 46–47 Continuous Integration/Continuous Delivery (CI/CD)
8, 47, 115, 118, 276–277
open-ended 43, 46 ConvertFrom-Csv 197–198
opinion 43–44 ConvertFrom-Json 191–192, 426
review comments 43–44 ConvertFrom-SddlString 500
reviewer 41–42, 45–47 ConvertFrom-Yaml 194
reviews 38–54 ConvertTo-Csv 197–198
code signing 386, 391–395, 400, 413, 424–434 ConvertTo-Json 191–192
CodeSigningCert 392, 394–395, 432 ConvertTo-SecureString 199, 280, 284
ConvertTo-Yaml 194 Desired State Configuration (DSC) (cont’d)
Copy-Item 181, 183–185, 187 Configuration Scripts (cont’d)
cost 56, 276, 333, 346, 370–372, 393, 401, 481 JEAMaintenance 487
credentials 260, 280, 284, 478 JeaRoleCapabilities 482
CRL Distribution Point (CDP) 388, 403–412, PowerShellRemoting 485
418–423
AddToCertificateCdp 407 Datum Module 193, 493
AddToCrlIdp 407 DSC Resource 429, 433, 482
AddToFreshestCrl 407 keywords
cryptographic 386–387, 401–403, 435 Configuration 286, 288
cryptography 360, 387, 394, 401–402, 409, 413, 415, Managed Object Format (MOF) 191, 287–288
428, 430, 434
culture 101–102, 158–159, 337–338, 353, 449 Pull Server 486
The DSC Book 289, 293
D digital certificate 387–388
data digital signature 386–387, 391, 397
deserialization 191–195, 197, 199–201, 475 Disable-PSSessionConfiguration 476, 508
serialization 103, 115, 191–195, 197–201, 475 disposability 277
delimiter 154, 183, 197–198, 311–315, 328 documented 184, 278, 431
Delimiter-Separated Values (DSV) 197
Desired State Configuration (DSC) 48–49, 52, 145, E
275–293, 428–429, 433, 474, 482–487, 493, 507–508
Configuration Scripts Enable-PSRemoting 484–485
Configure-IISServer 286–288, 290 Enable-PSSessionConfiguration 476
DisableDefaultPowerShellRemoting 508 Enter-PSSession 491, 495
Enums 216–218 Format-List 62, 417, 419–420, 428, 430, 494
bit flag 216–218 Format-Table 215, 324, 350, 366
environment variables 122, 456–457
errors G
non-terminating error 215, 265 Garbage In, Garbage Out (GIGO) 294, 296
terminating error 215, 265 GDPR (General Data Protection Regulation) 260
execution policy 263, 435–443, 445–446, 474 Get-ADComputer 178–179
Exit-PSSession 488, 505 Get-ADCSTemplate 426
Expand-Archive 162–165 Get-AdPkiContainer 428
ExplicitCapture 314, 331–332, 352, 354, 371 Get-ADPrincipalGroupMembership 179
Export-ADCSTemplate 425 Get-ADUser 179
Export-Certificate 432 Get-Alias 318
Export-Clixml 85, 93–95, 134–136, 147–148, Get-AuthenticodeSignature 397
199–201
Export-Csv 140, 197, 266–267, 269 Get-AzLoadBalancerBackendAddressPool 283
Export-Excel 141–142 Get-AzResourceGroup 272, 280–281, 283
Export-ModuleMember 84 Get-AzureADUser 272
eXtensible Markup Language (XML) 137, 174, Get-AzVirtualNetwork 283
194–196, 198, 201
Get-AzVirtualNetworkSubnetConfig 283
F Get-CAAuthorityInformationAccess 404, 406,
420–421
filtering left 147–148 Get-CACrlDistributionPoint 403, 405, 419–420
Find-Module 485 Get-Certificate 429
Find-RoleCapability 508 Get-CertificateRequest 415
Get-CertificateRevocationList 409, 412 Get-Random 88–89, 93, 95, 181, 183–187, 319
Get-CertificateTemplate 424 Get-Service 140, 266–268, 480, 482, 487, 492,
505–506
Get-CertificationAuthority 427 Git 2–37, 105, 107, 114, 116–119, 277
Get-ChildItem 4–19, 29, 138, 291, 391–395, 417, asterisk 13, 18
430–432
Get-CimInstance 147–148 branch 2, 4, 6–28, 31–35
Get-Command 57, 95, 126, 488, 505 develop 13, 15–16, 18, 20, 23, 32–33
Get-Content 15–19, 22–23, 29, 95, 134–135, feature 7–8, 21
141–142, 173–174, 191–192, 199, 267, 272, 286,
316–318
Get-Culture 101 HEAD 11–12, 14, 17, 20–23, 26, 28–29, 35
Get-Date 74, 93, 95, 158–161, 266–268, 273 hotfix 7–8
Get-DscConfiguration 289 main 2, 4, 6–14, 16–26, 28–29, 31–32, 34–35
Get-DscConfigurationStatus 288 release 7–8
Get-ExecutionPolicy 445 chain diagram 11, 16
Get-FileHash 162–163, 387 Cloning a remote repo 5
Get-FormatData 505 commit 2, 9–12, 15–28, 32–36
Get-Help 488, 505 commit hash 11–12, 28, 35
Get-History 268–269 commit message 24, 26, 28
Get-InstalledModule 107 default branch 2, 4
Get-IssuedRequest 417 display all branches 12
Get-Item 85–86, 95, 162–165, 191, 195, 392, 417, Fast-forward 18–19, 21, 32
430
Get-ItemPropertyValue 433 fork 2, 18
Get-Member 212, 489 .git 2, 4–6, 11, 24–25, 30–34, 37
Get-Module 83, 107 git add 9–10, 15, 20, 22–27, 36
Get-PendingRequest 416 git branch 6, 12, 17, 31, 114, 116–117, 119
Get-Process 147–148, 192, 197, 200, 238, 242, 255, git checkout 12–13, 16, 18, 23, 32–34
480, 482, 487, 505
Get-PSReadlineOption 269–270 git clean 27
Get-PSSessionCapability 476, 505–506 git clone 5–6, 29, 36
Get-PSSessionConfiguration 476, 493–495, 497, git commit 10–11, 15–16, 20, 22–24, 26, 33–34,
507–508 36
Git (cont’d) Git (cont’d)
git config 3–4, 10, 34 newassets.lib 14–19, 21, 23–24, 32, 34
Git global username 3–4 origin 2, 13, 18, 31–35
git init 4 orphaned 21, 26
git log 11, 21, 28 ours and theirs 24
git merge 18, 20, 22–23, 32 parent commit 21
git pull 32, 35 Pull Request 7, 12, 18, 32, 35
git push 31–33, 36 Pull Request Template 46, 54
git remote 31 pushing 31–33
Git repository 2, 4–6, 8, 10–11, 13, 29 README 30
git reset 20, 22, 25–27, 29, 34–35 remote repository 2, 13, 29, 31–32, 34, 36
git rm 10 repository (repo) 2–6, 8–14, 16–22, 24–25,
28–32, 34–37
git stash 25 root-commit 10
git status 8–13, 15–16, 23–24, 26–27, 34–35 soft reset 26, 34
Gitflow 7 source control 36
.gitignore 30 staged 10, 15, 25–26
hard reset 27–29, 35 starting from scratch 36
Head 25 trunk 8
Index 25 trunk-based 8
init.defaultbranch 4 upstream 31
Initialize 4 version control 2
latest commit 11, 17, 25 working branch 6
local branch 31, 33, 35 working directory 4–5, 15, 27
main branch 2, 7–8, 12, 16, 18–20, 22, 31–32, working tree 11–17, 19, 25–27, 29, 35
34–35
master branch 2, 4 GitHub 2–37, 118–125
merge commit 19–21, 24 all branches 12
merge conflict 22–23, 25 GitHub Actions 118–120, 122, 125
GitHub (cont’d) Import-DscResource 429, 433, 482, 485, 487, 508
GITHUB_STEP_SUMMARY 122–123 Import-Module 57, 84, 107, 126–127, 291, 409,
450–451
pulled 2, 32 Import-PSSession 491
pushed 30, 32, 40–41 Information Technology Infrastructure Library
(ITIL) 38, 40, 54
setting private repository 30 Infrastructure as Code (IaC) 275–293
Group Policy configuration scripts
Logging 261 Azure-Load-Balancer.psm1 279, 281
Group-Object 146–147 Azure-SQL-Server.psm1 279
Grouping, Sorting, and Filtering (GSF) 146–148, Azure-Storage-Account.psm1 279, 281
273, 305, 389
example 147 Azure-Virtual-Machine.psm1 279, 283,
290–291
Deploy-WebServer.ps1 279
H Two-Tier-App-Blueprint.ps1 279, 291
hash sum 387 declarative IaC 276
HKEY_CURRENT_USER (HKCU) 432, 440, 443 imperative IaC 275–276, 278
HKEY_LOCAL_MACHINE (HKLM) 410–411, InModuleScope 83, 85–86, 103
422–423, 433, 440–443
Install-AdcsCertificationAuthority 402, 414
I Install-Module 57, 107, 126, 193, 485
idempotency 276–277 -Force 107, 126
immutability 276–277, 379, 497 -SkipPublisherCheck 107, 126
Import-Certificate 394, 414, 427, 432 Install-WindowsFeature 402, 414, 426
Import-Clixml 73, 135, 138, 188–189, 199–201 integers
Import-Csv 181, 183–189, 197 signed 226–228
integers (cont’d) Invoke-ScriptAnalyzer 48–49
unsigned 226–227 Invoke-WebRequest 192, 241, 455
interpolation 145–146, 152–154, 161
composite formatting (-f) 153, 157, 159, 201 J
composite dates 159 JavaScript 191, 263, 336
formatting types (table) 157 JavaScript Object Notation (JSON) 191–194, 201,
263, 425–426
index order 155 Join-Path 83, 93, 95, 99–100, 162–163, 195, 395,
399, 406–409, 412, 416, 422, 427–432, 480
literal formatting 160 Just Enough Administration (JEA) 474–509
placeholder 154–155, 157
syntax 154 K
variable substitution 152–153 key storage provider 402–403, 414
Invoke-Command 178–179, 491–492, 508
Invoke-Expression 47, 451 L
Invoke-Pester 58–61, 64–67, 71–98, 100, 112–115, language modes 474, 490, 508
119–120, 126, 128–129, 131–132, 135–140, 142, 321
-Configuration 113–115, 119, 122, 137–138 Allowed Types 449
-ExcludeTagFilter 112 Constrained Language Mode (CLM) 385,
444–445, 447–473, 493
-Output 67, 75, 78, 82, 87, 92, 94, 96, 98, 100, ConstrainedLanguage 385, 444–445, 447–473,
129, 131–132, 136–137, 142, 321 493
-Output Detailed 64, 112–113 FullLanguage 447–448, 493
-Output Diagnostic 111 NoLanguage 447, 493
-OutputFile 115, 139 RestrictiveLanguage 447
-PassThru 142 logging 258–274
-Path 67, 112–113, 128–129, 131 logging configuration (non-Windows) 263
-Script 139 logging options 261–263, 265–266
-Tag Unit 63, 66, 68–69, 110, 114–115, 119, 122
-TagFilter M
Unit 113 malicious code 386, 390, 447, 456
Invoke-RestMethod 58, 71–72, 88–92, 192, math
380–381
math (cont’d) methods (cont’d)
base 10 219, 226–228 NextMatch() 324
base 2 219, 224–228 ReadScriptContents() 456
binary 111, 216, 219, 224–230, 251, 315, 382, Replace() 134, 238, 325–327
425, 431, 433, 502
LSB 219, 228 Split() 326–327
MSB 219 Synchronized() 379–380
decimal 158, 219, 223–226, 299, 329, 337, 349, ToString() 123, 135, 138, 160–161, 212
375, 377, 449
Measure-Command 370, 374 Composite Formatting 160
Measure-Object 488, 505 Unescape() 327–329
methods Where() 147
Escape() 239, 327–329 MITA Extras 5–6, 36–37, 59, 63, 65, 68, 70, 72, 75,
79, 82, 87, 92, 94, 96, 98, 100, 103, 279, 291, 382
ForEach() 249 multiline string. See strings - here-string
GetAppLockerPolicy() 457
GetDebugLockdownPolicy() 457 N
GetGroupNames() 329 nested conditions 175–176
GetGroupNumbers() 329 nested statement 175, 178–180, 189, 253
GetLockdownPolicy() 456–457 network
GetSystemLockdownPolicy() 452, 456 IP address 291, 329–330, 347–348, 378, 478
GetType() 192, 194, 199–200, 213–214, 233, 455 New Technology File System (NTFS) 334, 439
GetWldpPolicy() 457 New-ADCSTemplate 426
GroupNameFromNumber() 330 New-AzAvailabilitySet 284
GroupNumberFromName() 330 New-AzLoadBalancer 282
IndexOf() 234 New-
AzLoadBalancerBackendAddressPoolConfig
282
IsMatch() 322–323 New-AzLoadBalancerFrontendIpConfig 282
Match() 323 New-AzLoadBalancerProbeConfig 282
Matches() 296, 324–326 New-AzLoadBalancerRuleConfig 282
.NET Methods 306, 310, 317, 321, 325, 370 New-AzNetworkInterface 284
O
New-AzPublicIpAddress 282 objects
New-AzResourceGroup 280–281, 283 constructor 103, 322, 369
New-AzSqlDatabase 280 Office 365
New-AzSqlServer 280 EMS 469
New-AzSqlServerFirewallRule 280 Enterprise Mobility + Security 469
New-AzStorageAccount 281 Intune 261, 429, 434, 469–470, 472, 484
New-AzVM 285 Device Policy 470
New-AzVMConfig 284 Microsoft Endpoint Manager Admin Center
(MEMAC) 470–471
New-ConditionalText 141–142 Policy Auditing 466, 469
New-EventLog 271 Mobile Device Management 469
New-Item 4, 14, 127, 136, 181, 183–185, 187, 480 operator precedence 202, 250, 252–257
New-MockObject 101–103 hashtable literal syntax 253
New-Module 83 higher precedence 252, 256
New-ModuleManifest 480 lower precedence 252, 256
New-Object 280, 284, 322, 452–453, 456 operator
New-PesterConfiguration 114–115, 119, 122, 138 assignment 202, 242–243, 250–251, 253,
255–257
New-PesterContainer 136–137, 142 negation 250, 253
New-PSRoleCapabilityFile 480, 508 precedence 80–82, 145, 202, 250–257, 439, 442
New-PSSession 485, 491 precedence group 251
New-PSSessionConfigurationFile 476, 481–482, operators
486, 508
New-ScheduledTask 455 -as 215–257
New-SelfSignedCertificate 391 bitwise operators 216–230
operators (cont’d) operators (cont’d)
bitwise operators (cont’d) -notcontains 238
-band 216, 223, 229–230 -notin 234, 238, 251
-bnot 216, 226 -notlike 230, 233, 251
-bor 216, 224, 229 null-coalescing 240–243, 252
-bxor 216, 225 null-coalescing assignment 242–243, 252
-shl 216, 228 null-conditional 243–246, 251
-shr 216, 228 null-conditional operator 244–245
case-insensitive operator 202–207, 230–232, 309, 314, operator precedence 250–257
319, 333, 338, 342, 352
case-sensitive operator 202–203, 206, 311, 315, 319, powershell operators 202, 308, 322
322, 325, 360
-contains 234–238, 251 regex
-creplace 310–311 -cmatch 203, 309, 319–320
-csplit 203, 314–315 -cnotmatch 203, 309, 319
-in 235, 237, 251 -imatch 309, 328, 364
-ireplace 311, 380 -inotmatch 309, 319
-is 213–214 -notmatch 234, 251
-isnot 213–214 -replace 238–239, 251
-isplit 315 -split 251
-join 251, 255–256 string
-like 230–234, 251 -f 153–154, 161
wildcards 230 subexpression 242–244, 251, 253
looping ternary operator 239–240, 252
break 247–249 typecasting 215–216
continue 248 -as 213, 215–216, 251, 255
labels 247–249 -is 213–214, 250–251
:looplabel 247–249 -isnot 213–214, 250–251
-match 230, 234, 251 Out-Default 488, 505
Out-File 123, 266, 272, 316, 321 Pester (cont’d)
Mock 78, 97
P New-MockObject 101–103
parameters New-PesterConfiguration 114–115, 119, 122,
138
param() 58–61, 71, 73–74, 102, 108, 135, 138, New-PesterContainer 136–137, 142
141–142, 150, 162–174, 191, 271, 279, 281, 283, 286,
291, 320, 326, 332, 362, 490
Pester 45–47, 54–144, 278, 320–321, 505–506 ParameterFilter 65–66, 87–88, 91, 93, 99
AfterAll 107, 134 Pester Container 135–137, 142
AfterEach 134–135 $PesterBoundParameters 83, 89
Assert-MockCalled 79 run tests 7, 65, 111–112, 375
Assert-VerifiableMock 79 Should 63–64, 66–69, 74–76, 80–81, 84–86, 90,
95–96, 100, 109–110, 128–135, 137–138, 141, 143,
320–321
BeforeAll 63, 65, 68, 74–75, 79–82, 84–85, -Exactly 76–78, 85–86, 91–94, 100, 439, 442
88–89, 93–95, 97, 99, 109, 128–130, 132–135, 137,
140
BeforeDiscovery 79, 83, 133, 140–143 -ExclusiveFilter 93–94, 100
BeforeEach 74, 79–82, 90, 134–135 -Invoke 75–79, 86, 91–94, 100
Configuration object 113–114 -InvokeVerifiable 78–79, 86, 91
Context 76–78, 81, 85, 91–92, 94–95, 111, 131, -Match 135, 137, 140–141, 143, 321
143
Debug Tests 111 -MatchExactly 321
Describe 63, 66, 68–69, 74, 76–81, 84–86, 88, -Not 69, 76, 80–81, 95, 110, 131–132, 134–135,
90–91, 93, 95, 97–100, 109–110, 112, 128–135, 138, 321
140–141, 143, 321, 506
-Tag 63, 66, 68–69, 110, 112 -Times 76, 93, 396
Invoke Tests 76, 78 -Verifiable 65–66, 78–79, 86, 88–89, 91
Invoke-Pester 64, 67, 75, 78, 82, 87, 92, 94, 96, Tag 63, 66–69, 110, 112–115, 119, 122
98, 100, 112–115, 119–120, 122, 126, 128–129,
131–132, 135–140, 142, 321
It 74, 77–81, 90, 94, 110–111, 128–130, 452 Tags 67, 109, 363
-TestCases 68–69, 74, 81, 109–110, 129 test drive (TestDrive:) 84–86, 93, 95, 99–100
Pester (cont’d) PowerShell Core 127, 130–131, 133, 263, 443, 474
Verifiable 65–66, 78–79, 86, 88–89, 91 PowerShell Gallery 48, 54, 57, 70, 107, 126, 193,
260, 274, 409, 425, 482, 485–486
version PowerShell Remoting 198, 474–475, 477–478,
483–484, 491–492, 496–497, 504–507
3.0 94, 126, 144 auditing 504
4.0 74, 126, 138–139, 144 effective rights 505
5.0 74, 79, 82, 96, 113, 505 Event Log 172, 261, 263, 326, 506
5.3 74 transcription logs 504, 507
pipelines 36, 47, 50, 52, 114–116, 125, 128, Authentication
146–148, 167, 176, 182, 252, 265–266, 276, 315–316
plain text 260, 265, 295, 314, 317 Basic 477
PowerShell Cert Authentication 478
best practices 38, 54, 79, 191, 201, 258, 294–295, Credential Security Support Provider
370–384, 388, 403, 410 (CredSSP) 478–479, 485
conventions 38–39, 42 Default 477
dot source (.) 128, 251, 287, 396, 452 Kerberos 424, 477
module Negotiate (Windows Integrated
Authentication - IWA) 477
PowerShell Protect 456 Just Enough Administration DSC (JeaDSC)
Module 482, 491
PSPKI 409, 415, 418, 424, 427 Just Enough Administration DSC (JeaDSC)
Module Properties
PSScriptAnalyzer 46–50, 54 AliasDefinitions 481, 490
PSScriptAnalyzer Rule Sets 48 AssembliesToLoad 481, 491
noun-verb 39 EnvironmentVariables 481, 491
.psd1 48–51, 396, 451, 457, 459, 479–480, 484 FormatsToProcess 481, 491
.psm1 50, 279, 281, 283, 290–291, 396, 450–451, FunctionDefinitions 481, 490
457, 459, 479–481
.pssc 475, 482, 495–496 GroupManagedServiceAccount 488
version HungRegistrationTimeout 491
PowerShell 2 127–131, 133, 200–201, 472–473 ModulesToImport 481, 489
PowerShell 3 127, 129–131, 133, 447 MountUserDrive 489
PowerShell 5.1 126–127, 130–131, 133, RequiredGroups 489
136–137, 245, 258, 486
PowerShell 7 136–137, 190, 239–240, RoleDefinitions 479, 481–482, 486–488, 492
242–243, 258, 313, 440–443, 451
PowerShell Remoting (cont’d) Principle of Least Privilege (PoLP) 474
Just Enough Administration DSC (JeaDSC) $PSBoundParameters 83, 150
Module Properties (cont’d)
RunAsVirtualAccount 487–488, 492, 494 [PSCustomObject] 95–96, 193, 200, 256, 266,
327–328
RunAsVirtualAccountGroups 488 from hash table 61, 65–66, 73–74, 85, 89,
152–153, 174, 180, 192, 194, 197–199, 212, 244–245,
255, 267, 348, 378, 381, 450
ScriptsToProcess 481, 488 $PSItem 131–132, 141–143, 207, 315
SessionType 488–489 as $_ 59–61, 72, 108, 131, 140–142, 147–148,
168–169, 172–173, 179, 181, 183–185, 187, 207, 212,
270, 306, 315, 318, 326, 332, 353–354, 361, 365, 367,
375, 377, 380
TranscriptDirectory 487–488 [PSObject] 72, 74, 192, 199–200, 449–451, 500
TypesToProcess 481, 491 PSScriptAnalyzer 45–51, 54, 114
UserDriveMaximumSize 489 $PSVersionTable 134–138
VariableDefinitions 481, 490–491 Public Key Infrastructure (PKI) 386, 391–393,
400–402, 404, 407, 416, 424, 427, 429, 434
VisibleAliases 481, 489 Publish-AzVMDscConfiguration 290
VisibleCmdlets 480–482, 487, 489 Puppet 484
VisibleExternalCommands 481, 490
VisibleFunctions 481, 489–490 R
VisibleProviders 481, 490 Receive-Certificate 417
PowerShell Role Capabilities 481, 483, 486, 488, refactoring 146–201
492–493, 497
Role Capabilities Configuration (Core) advanced function parameters
custom 481 execution order 169
RoleCapabilities 481 parameter sets 166
RoleCapabilityFiles 479–483 parameter sets attributes 167
Role Definition design considerations 492 hash tables
Session Configuration 447, 474, 476, 479, modification 148
481–483, 486–488, 491–500, 502, 505–508
Policies 474 logic flow 188–189, 191
PowerShell security 385 techniques 189
PowerShell Workflow 451 waterfall design 189–190
XAML 451 nested loops 177–178, 180
Predictive IntelliSense 270 using cmdlet parameters 178
refactoring (cont’d) regex (cont’d)
nested loops (cont’d) capturing group (cont’d)
using Compare-Object 181 captures 295–296, 305–306, 308, 312–314,
323–324, 329, 332, 336–337, 352–360, 362–363,
365–368, 370–371, 373–375, 377, 380, 383
using Where-Object 179 character class 296, 298–300, 304, 312, 334, 336,
340–341, 351–352, 374
output type 161, 173 character classes 298–300
parameter typecasting deterministic finite automaton (DFA) 341–342
example 164 differences between flavors 296
singular task 162 escape sequence 329
example 161–162 escape sequences 303–304
splatting greedy 302–303, 335–336, 343, 346, 375
complex example 151 group initializers 334
example 150 inline options 297, 309, 316, 339, 354
regex 295–384 invalid pattern 328, 340, 360
alternation 298, 312, 341–342, 346, 364 limitations 295–296
anchors 299, 304–305, 331, 340–341, 343, 348, lookahead 297, 310, 336, 348, 360–362, 364, 366
360, 378–379
beginning of string 310, 331, 336, 347, 353, lookaround 297, 305, 310, 336, 346, 348,
367, 374 360–362, 369, 378
end of string 157, 335–336, 341, 347–348, 367 lookbehind 297, 336, 346, 361–362, 373–374, 376
word boundary 172–173, 304–305, 310, MatchTimeout 322, 370
320–321, 325–326, 328, 348, 370, 373–374, 399, 430
zero-width assertion 304–305 Max-Substrings 313
atomic group 296–297, 345–346, 362, 369, 373 metacharacter 299, 309, 340, 355
backreferences 296–297, 309, 329, 337, 346, negative lookahead 297, 310, 348, 360–361, 366
355–356, 358–360
backtracking 296, 303–304, 342–346, 369–370, .NET Regex class 322, 371
373, 375–376, 383–384
catastrophic backtracking 343–345, 369–370, CacheSize 333
373
balancing groups 296–297, 346, 365–366, 368, constructor 322–323, 326–327, 329, 334,
376 353–354, 362
captures 305–307 Escape() 328
captures visualized 306–307 IsMatch() 323, 331, 333, 343, 345
capturing group 305, 310, 312, 314, 323, 329, Match() 324, 332, 335, 337, 348, 365–367, 378,
331, 334, 340–341, 347, 355, 357, 359–365, 370 380
regex (cont’d) regex (cont’d)
.NET Regex class (cont’d) [RegexOptions] (cont’d)
Matches() 296, 306, 325, 361, 370, 374–375, None 330
377, 380
[regex] type accelerator 321, 379 RightToLeft 335
Replace() 325 Singleline 333, 352
Split() 327–328 replacement patterns 309, 355–356, 358
Unescape() 329 sentence terminator 374–376
.NET regex vs. others 296–297, 336–338, 341, sequential 302
344–345, 359
non-capturing groups 306, 331, 347, 363, 368, substring 295, 305, 311–314, 370
371
nondeterministic finite automaton (NFA) wildcard patterns 296
341–342
option modifier 352–353, 361 word boundary 304–305, 310, 348, 373
option span 297, 354 Register-PSSessionConfiguration 476, 486, 495
positive lookahead 297 Register-ScheduledTask 455
quantifiers 296, 300–303, 311–312, 334–335, Remove-CAAuthorityInformationAccess 406,
340–341, 343–347, 364, 368–369 421
shorthand quantifiers 301 Remove-Item 134–135, 432–433
recursion 296, 346–347, 365, 368 repeatability 68, 277
Regex Denial of Service (ReDoS) 369–370 Requires 58, 147, 194, 485
regex engine 295–296, 298, 302, 310, 312, 315, Restart-Service 411, 423, 480, 482, 487, 505
333, 340–345, 349–350, 355, 357, 369, 371, 377
[RegexOptions] 314, 320, 322, 330–339, 343, Role-Based Access Control (RBAC) 492, 505
345, 347, 352
combining 338 root certificate (RootCert) 393–394, 427, 429
Compiled 332–333, 371 runspaces 379–380, 447, 474
CultureInvariant 337
ECMAScript 336 S
ExplicitCapture 314, 331–332, 352, 354, 371 scalability 275–276, 481
IgnoreCase 320, 322, 330–331, 338 scalar 132, 191, 193, 233, 349–350
IgnorePatternWhitespace 334, 347, 352, script block 78, 83, 88, 101, 126, 137, 148, 168–169,
354–355, 375 172, 178–180, 207, 247, 261–263, 310, 315, 318,
326–327, 485, 490, 492–493, 506–508
Multiline 320, 322, 331, 338 script execution policies
script execution policies (cont’d) Security Descriptor Definition Language (SDDL)
(cont’d)
AllSigned 435–436, 442 definition (cont’d)
Bypass 435, 438, 441, 446 Securable Object 496
Default 435, 443 Security Descriptor 496
RemoteSigned 435–439 Security Identifier (SID) 497
Restricted 435, 437–439 System Access Control List (SACL) 496
Scope Trustee 496
CurrentUser 439–440 read SDDL 500
LocalMachine 439, 441 Security Identifier (SID) 497–499, 501, 503–504
MachinePolicy 439–442, 444 syntax 498
Process 438–440, 442 update SDDL 500
UserPolicy 439–440, 442 Select-Object 59–61, 108, 140–142, 146–148, 192,
194, 197, 200, 270, 272, 289, 324, 328, 392, 397, 430,
488, 494, 505
Undefined 435, 438–439 Select-String 316–318, 383
Unrestricted 435, 438–439 Select-Xml 195–196
script execution policy 435–446 self-signed 388, 391–394, 465
script signing 386–434 Send-MailMessage 162, 164, 182–185, 187
scrum 38 sensitive data 260
Secure Shell (SSH) 26, 77, 92, 285, 404, 447, 475 session state 479
Security Descriptor Definition Language (SDDL) Set-ADObject 426
495–498, 500–501, 503–504
Access Flags Set-AuthenticodeSignature 390, 395–396
AccessAllowed 494, 497, 500–501, 503 Set-AzVMDscExtension 290–291
AccessDenied 494, 500 Set-AzVMOperatingSystem 284
create SDDL 501 Set-AzVMSourceImage 284
definition Set-Content 9, 20, 99, 191–192, 266–267, 286, 317,
395, 495
Access Control Entry (ACE) 497 Set-ExecutionPolicy 438–442, 445
Access Control List (ACL) 497 Set-GPRegistryValue 431, 433, 442
Discretionary Access Control List (DACL) Set-ItemProperty 93–95
496–498, 501–502
Set-Location 4–5, 9, 127 strings
Set-PSRepository 57 here-string 303–304, 331
Set-PSSessionConfiguration 476, 495–496 expandable 216–217, 303, 454, 486–487,
490–491
Set-RuleOption 445 literal 194–195, 197–198, 304, 316, 321,
327–328, 331, 333–334, 353, 355–356, 358, 360–361,
367, 377, 425–426, 488
Set-StrictMode 245–246 string array 142–143, 171, 174, 213–214, 286,
308, 310, 315, 322, 488–491
Should Submit-CertificateRequest 416
-Be 63, 66–69, 74, 80–81, 85–86, 90, 95–96, 100, subordinate Certificate Authority 388, 402, 410,
109–110, 128–135, 141, 143 419
-BeLike 63, 66, 68 substitution patterns 355
-BeNullOrEmpty 69, 74, 85–86, 110, 131–132, subtraction 108, 350–352
134–135
-HaveCount 63, 67, 69 surrogate 349–350
-Not 69, 80–81, 95, 110, 131–135, 321 switch statement 204–212
sigcheck 397–399 -CaseSenitive 206
signed DLLs 453 control statements 210–211
signtool 397, 399 default 208, 210
smoke testing 278 default statement 208
Software Development Lifecycle (SDLC) 38 -Exact 206
Sort-Object 146–147, 506 expression matching 207
splatting 113, 139, 146, 148–151, 189 lists and arrays 209–210
Split-Path 83, 95, 142, 263 object types 212
Start-DscConfiguration 288 parameters 204
Start-Process 292, 318 -Regex 205
Start-Service 419, 460 -Wildcard 205–207
Start-Sleep 318 system-level 258, 260–261, 265
Start-Transcript 265, 268–269, 273 System.Management.Automation 95, 200, 314,
450–451
Stop-Process 252, 480, 482, 487, 505
Stop-Transcript 265 T
strict mode 245–246 Tee-Object 267–268
Terraform 276, 484 testing (cont’d)
Test-Path 135, 138–139, 168–169, 172–173, mocking (cont’d)
188–189
Test-PSSessionConfigurationFile 476 Mock Assertion Tests 78
Test-WSMan 485 Mock Scoping 75, 80, 82
testing 56–143 Mocking Invoke-RestMethod 88
AAA approach 56–70, 106, 109, 111 native application 99–100
assertions .NET objects 101
exclusive filter 93 overriding a mock 81
filtered mock assertions 90–92 ParameterFilter Filter Script 87–89, 91
child scope 80–82 real dependencies 71–72, 88–89, 94–95, 97,
100
current scope 76, 81, 93 removing typecasting and validation 97
debugging Restricting Mock Calls 91, 93, 100
diagnostic output 111 Use Cases 71
discovery and run 79, 82, 133 module scope 85–87
execution order 81–82 parameterized
fakes 72–75, 88–90 It descriptor templates 131
definition 72 It descriptor templates, dot-navigation 132
inner context 77, 80, 82 Pester Configuration 137
integration testing 278 using -ForEach 130
mock 73 using a param block 135
creating stub 74 using BeforeDiscovery 79, 83, 133, 140–143
definition 73 using Pester Containers 136
syntax 73 parameterized testing 126–143
mock testing 71, 75, 86 seam 73
differences from mocking 71–72 definition 73
mocking 65, 67, 71–105, 109, 111 smoke testing 278
Dynamic Mock Behavior 87 store object 73–75, 89
testing (cont’d) Unicode (cont’d)
stub 72–74 Latin-1 Supplement 349
definition 72 Unregister-PSSessionConfiguration 476
unit testing 67, 71–72, 88–92, 95–96, 105–125 $using: 286–287, 380–381
testing and monitoring 278 UTF-16 349–350
testing frameworks UTF-8 195
MSTest 106
nUnit 106, 115–116, 137, 139 V
NUnitXml 137, 139 VBScript 152, 449
Specflow 106 Virtual Machine (VM) 278, 284–285, 289–290,
292, 401, 405
Xunit 106 Visual Studio Code 45, 51–54, 64, 111, 125, 270
text PowerShell Extension 45, 51–54
left-to-right 250, 252–256, 336, 355, 358 settings.json 51–53
right-to-left 251, 254, 335–336 .vscode 51
time stamp server 395–396 workspace settings 51
trusted publisher 393–394, 399–400, 434–436, workspace-level 51
444–445
type conversion 215, 454–455
W
U Wait-Job 179
Unblock-File 437 web application 277
Unicode 124, 296–298, 312, 328, 334, 336, 348–351, Where-Object 59–61, 147–148, 172, 179–181,
377, 380 183–185, 187, 207, 266–270, 318–319, 391, 406, 421,
506, 508
Basic Multilingual Plane (BMP) 349 white box testing 106, 125
Win32 executables 449 Windows Remote Management (WinRM) 475,
477–479, 485, 487, 495–496, 509
Windows Network Ports 475
__PSLockdownPolicy 456–458 Simple Object Access Protocol
Antimalware Scan Interface (AMSI) 456 (SOAP) 79, 475, 477
Configurable Code Integrity (CGI) 468 Web Services-Management
Device Guard (DG) 447, 456, 468, 473 (WS-Management or WSMan) 475–476, 485,
493, 495, 497, 508
Event Log 260–263, 270–271, 274, 466, 504, 507 Windows Server
Trusted Boot 447 Active Directory 179, 260, 401–402, 405,
414–415, 421, 427, 477, 492, 505
User-mode Code Integrity Group Policy 261–262, 431, 439–445, 458–460,
469, 484, 506
(UMCI) 445, 447, 457 Internet Information Services (IIS) 285–286,
289, 292
Windows AppLocker 441, 444–447, 449, 452, Microsoft Endpoint Configuration Manager
456–469, 472–473 (MECM) 469
Auditing 259, 459, 466, 469, 476, 499, 504 System Center Configuration Manager
(SCCM) 484
definition type 458–459 Write-Debug 265
DLL rules 459 Write-Error 265, 328, 381
Executable rules 458 Write-EventLog 271
Packaged app rules 459 Write-Host 43–44, 48, 252, 264–265
Policy Enforcement 466, 469 Write-Information 265
Script rules 441, 444–445, 452, 459–465, Write-Output 58–61, 65–66, 147, 188, 264, 332
467–468, 473
Windows Installer rules 459 Write-Progress 265
Windows Defender 441, 444–445, 447, 468 Write-Verbose 264, 271, 380
Windows Defender Application Control Write-Warning 265
(WDAC) 441, 444–447, 449, 452, 457, 468–469,
472–473
Windows Desktop Y
Windows 10 468 Yet Another Markup Language (YAML) 115–116,
118–119, 125, 193–194, 201, 493
Windows 10 1903 100, 126, 262, 447, 468–470
Windows Lockdown Policy (WLDP) 456–458,
469, 472
Windows Management Instrumentation (WMI)
288