
SaltStack For DevOps

Extremely fast and simple IT automation and configuration management

Aymen El Amri
This book is for sale at http://leanpub.com/saltstackfordevops

This version was published on 2017-09-25

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.

© 2015 - 2017 Aymen El Amri


Tweet This Book!
Please help Aymen El Amri by spreading the word about this book on Twitter!
The suggested tweet for this book is:
I just bought SaltStack For DevOps book. Check it out : https://leanpub.com/saltstackfordevops cc
@Salt4DevOps
The suggested hashtag for this book is #Salt4DevOps.
Find out what other people are saying about the book by clicking on this link to search for this
hashtag on Twitter:
#Salt4DevOps
Also By Aymen El Amri
The Jumpstart Up
Painless Docker
Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Every Book Has A Story, This Story Has A Book . . . . . . . . . . . . . . . . . . . . . . 2
To Whom Is This Book Addressed ? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Conventions Used In This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
How To Properly Enjoy This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
How To Contribute To This Book ? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
About The Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Configuration Management And Data Center Automation . . . . . . . . . . . . . . . . . 6
DevOps Tooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

 Getting Started With SaltStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11


Presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
A brief summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Expanding Salt Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Advanced Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Installation For Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Practical Installation Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Configuration And Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Configuration Of A Masterless Minion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Configuration Of salt-master . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Configuration Of salt-minion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Post Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
General Troubleshooting And Prerequisites Checklist . . . . . . . . . . . . . . . . . . . . 30
Troubleshooting salt-master . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

Troubleshooting salt-minion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Troubleshooting Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Using salt-call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
A Note About Redhat Enterprise Linux 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
YAML : The Human Readable Serialization Language . . . . . . . . . . . . . . . . . . . . 35
Jinja: A Templating Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
The Master: salt-master . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
The Minion: salt-minion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Expanding Salt with salt-syndic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Key Management with salt-key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Salt Runners Interface: salt-run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
The Data Aspect of Salt: Grains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Targeting grains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Extending The Minion Features With saltutil . . . . . . . . . . . . . . . . . . . . . . . . 60
Formating Outputs with outputter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Describing Configurations and States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
The Top File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Using Master’s Pillars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Remote Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Event-Based Execution With Reactors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
From States To Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
SaltStack And Vagrant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
SaltStack And Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Salt Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

Real World Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Vagrant Quick-Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Installation And Configuration Of Apache Web Server . . . . . . . . . . . . . . . . . . . 91
Creating Your Own Private Storage Cloud (SaltStack + Vagrant + OwnCloud) . . . . . . . 97
Scheduling Monitoring Tasks Using SaltStack . . . . . . . . . . . . . . . . . . . . . . . . 100
Automating Wordpress / LAMP Installation And Configuration . . . . . . . . . . . . . . 104
Docker Quick-Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Getting Docker Container System Information Using SaltStack . . . . . . . . . . . . . . . 122
Monitoring Docker Containers Using SaltStack . . . . . . . . . . . . . . . . . . . . . . . 130
Provisioning Docker Containers With SaltStack . . . . . . . . . . . . . . . . . . . . . . . 133
Using Salt Cloud With Linode Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Using Salt Cloud With Amazon Web Services (AWS) . . . . . . . . . . . . . . . . . . . . 163

SaltStack Cheat Sheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173



Installing SaltStack - Ubuntu 14.* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173


Bootstrapping Salt Minion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Salt Key Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
SaltStack Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
SaltStack Modules And Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Compound Matchers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Upgrades & Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Packages Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Reboot & Uptime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Using Grains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Syncing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Running System Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Working With Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Network Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Working With HTTP Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Job Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Scheduling Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Working With SLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Testing States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Load testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
State Declaration Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
SaltStack Github Repositories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Organizing Grains: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Your Own Execution Modules vs Working With Jinja . . . . . . . . . . . . . . . . . . . . 185
Using The Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Following Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Automate Your Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Start Your Cheat Sheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
The Usual Lecture From The Local System Administrator . . . . . . . . . . . . . . . . . 193
Updates, Upgrades And Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Following Official Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

Community And Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196


SaltStack Source Code On Github . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Official Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Community Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Meet-ups Around SaltStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
SaltStack Inc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

Afterword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

Dedication
I dedicate this to my mother and father who were always there for me.
Preface

Every Book Has A Story, This Story Has A Book


I wanted to quickly resign from my job: my suggestions about setting up a continuous delivery and
deployment pipeline had not been considered! Like many system administrators, I'm lazy when
it comes to repeating tasks manually but passionate when I can automate things.
I love automation, and in my job there were hundreds of configuration files and thousands of variables
to copy from text files and then adjust on various platforms, a huge number of poorly configured servers
and hundreds of servers to manage. I wanted to work on automating the weekly procedures; I
was aware that this was the solution, but it was not the priority of our client.
It was a position within a team of ten people working on the integration of a number of heavy
applications (mainly Java/Oracle, PHP/MySQL, Nginx, Python/Jython) with a complex architecture
and thousands of versions to manage. At the beginning, I was obliged to follow my boss's guidelines
and the work methods my team had adopted, which had one goal: satisfy the unceasing demands of
the client as quickly as possible.
No, but .. wait, this is not good at all!
I spent almost two weeks searching and working on some solutions before I convinced my boss to
give me the time to set up an application prototype that would ease the heavy load, accelerate daily
procedures and reduce human errors.
First, I created a configuration management tool using Python/SQLite3 and automated tests using
Selenium/Python, among other procedures I set up. Some weeks later I started learning SaltStack
and found it to be the perfect solution that could take the place of the tools I had developed and meet
the expectations of the integration process, or rather, of continuous integration, deployments and
automated tests.
I hesitated between several alternatives: Puppet, CFEngine, Chef, etc. The choice was made based
on several criteria: I was looking for a robust, fast configuration management and remote execution
tool that everybody could use without learning a new programming language (Salt is written in
Python, but you do not need to write Python to use it). I have never regretted my choice.
I found some difficulties, and honestly, when I started learning Salt, the official documentation was
not very detailed. This book is the fruit of long hours of work and self-learning.
Well, in the beginning I wanted to resign from my job; just a few days after discovering Salt, I was
in love with my job, with what I was doing and with SaltStack.

I tried SaltStack for the first time when I saw my team taking more than 3 days to configure hosted platforms
at each deployment. After setting it up, the same procedure took less than half an hour.
Through this book, it is your turn to discover SaltStack, and I will be your guide.
I wish you a pleasant reading.

To Whom Is This Book Addressed ?


To developers, system administrators and anyone required to work in one of these environments in
collaboration with the other, or simply in an environment that requires knowledge of development,
integration and system administration.
The most common idea is that developers think they are here to serve the machines by writing
code and applications, while system administrators think that machines should work for them simply
because they keep them happy (maintenance, optimization, etc.).
Moreover, within the same company there is generally some tension between the two teams:
System administrators accuse developers of writing code that consumes memory, does not meet system
security standards or is not adapted to the configuration of the available machines.
Developers accuse system administrators of being lazy, lacking innovation and being seriously uncool!
No more mutual accusations: with the evolution of software development, infrastructure and
the adopted methodologies (such as Scrum, XP and Kanban), the concept of DevOps was born.
DevOps is more a philosophy and a culture than a job (even if some of the positions I occupied
were called “DevOps”). That said, it seeks closer collaboration and a combination
of the different roles involved in software development, such as the developer, the person responsible for
operations and the person responsible for quality assurance. Software must be produced at a frenetic pace,
while at the same time waterfall development seems to have reached its limits.

• If you are a fan of configuration management, automation and the DevOps culture
• If you are a system administrator working on DevOps or SysOps
• If you are a developer seeking to join the new movement

This book is addressed to you. Configuration management tools are among the most used tools in DevOps
environments.

• And if you are new to configuration management software, this book is also addressed to
beginners.

Conventions Used In This Book


Basically, this is a technical book where you will find commands (SaltStack commands) and code
(Python, YAML, Jinja2 ..etc).
Commands and code are written in a different format.
Example :

1 python -c 'import urllib; print urllib.urlopen("https://bootstrap.SaltStack.com").read()' | \
2 sudo sh -s -- git develop

• This book uses an italic font for technical words such as the names of libraries, modules and languages.
The goal is to get your attention while you are reading and to help you identify them.
• You will find two icons. I have tried to be as simple as possible, so I have chosen not to use too
many symbols; you will only find:

An icon to highlight useful and important information.

An icon to highlight a warning or a caution.

How To Properly Enjoy This Book


This book contains technical explanations and shows in each case an example of a command or
a configuration to follow. The explanation alone gives you a general idea, and the code that follows
helps you practice what you are reading. Preferably, you should always look at both parts for maximum
understanding.
As with any new tool or programming language you have learned, it is normal to run into difficulties and
confusion at the beginning, perhaps even later. If you are not used to learning new technologies,
you may even have only a modest understanding while being at an advanced stage of this book. Do not
worry, everyone has been in this kind of situation at least once.
At the beginning, try a diagonal reading while focusing on the basic concepts, then try the
first practical manipulations on your server or just on your laptop, and occasionally come back to
this book for further reading about a specific subject or concept.
This book is not an encyclopedia, but it sets out the most important parts needed to learn and even to master
SaltStack. If you find words or concepts that you are not comfortable with, just take your
time and do your own on-line research. Learning can be serial, so understanding one topic may require
understanding another one first.
Through this book, you will learn how to install, configure and use SaltStack. Just before finishing
the book, you will go through a chapter where good examples of practical use cases are explained.
In this chapter, try to apply your acquired understanding, and no, it will not hurt to go
back to previous chapters if you are unsure or in doubt.
Finally, try to be pragmatic and keep an open mind if you encounter a problem. The resolution
begins by asking the right questions.

How To Contribute To This Book ?


This is a work in progress. I am an adopter of the lean philosophy, so the book will be continuously
improved based on many criteria, the most important one being your feedback.
If you have any suggestions, please do not hesitate to contact me: you can find me on Twitter1 or
you can use my blog contact page2 .
If you want better tracking of issues and recommendations about this book, I recommend using
this GitHub repository3 .
This book is not perfect, so you may find typos, punctuation errors or missing words.
Every line of the code used was tested beforehand (except maybe some errors that you may find due to
differences in software versions or environment configurations).

About The Author


Aymen El Amri4 is a DevOps architect and engineer, currently working as head of IT systems and DevOps
in a Parisian startup. He has worked on development and system engineering for companies and startups.
He is interested in the DevOps philosophy, lean programming and the tools and methodologies that
come with them, since his latest experiences in these domains were successful.
He co-founded some projects connected with the Free and Open Source Software community and
an infrastructure provider startup, and he was involved in hackerspaces and other political/social
movements in relation with technologies.
Find his blog at www.eon01.com/blog
1
https://twitter.com/eon01
2
http://eon01.com/blog/contact-me/
3
https://github.com/eon01/SaltStackForDevOps/issues
4
http://eon01.com
Introduction

1         \   ^__^
2          \  (--)\_______
3             (__)\       )\/\
4                 ||----w |
5                 ||     ||

Configuration Management And Data Center Automation
Configuration management and data center automation are mechanisms to manage a technical
description of a system, its components and changes in its configurations.
This happens during the life cycle of a system or during its different processes. Source code
deployment on one or multiple environments is a change and it requires the right configuration
management and provisioning tool.
An ecosystem composed of development, test, QA and production environments is an example of a
slightly complex ecosystem that can be composed of many servers, sometimes hundreds or thousands
of them.
From the development, integration, test and staging, to the production environment, configuration
management becomes a process of normalization of the application configuration according to the
state of infrastructure and functional components and other requirements. The same process of
normalization should be ensured when changing the environment, the infrastructure or when the
configuration itself changes.
Imagine that your organization is developing several applications with dependencies between them:
databases, different configurations between the developers', staging and production environments, etc.
To accomplish this project with a good time-to-market, you may need:

• A tool to rebuild the same server quickly from provisioned images
• A tool to generate dynamic configurations for the same application on different environments
• A tool to execute commands on hundreds or thousands of servers from one central machine
• A tool to create advanced monitoring and reactor systems

• A tool to recover quickly from disasters
• etc.

Configuration management tools like Salt are useful for the use cases listed above and can do more
than this.
In fact, with the adoption of agile development methods, the process of development, testing and
deployment of a software component has accelerated; therefore, management methods have
become faster, more automated and more adapted to change.

DevOps Evolution

Even if many specialists consider provisioning, change management and automation a business
matter rather than an IT one, making this happen requires some special technical skills. That's why
new positions, teams, or rather a new culture, have emerged in the IT industry: DevOps.
The illustration below (taken from Wikipedia) shows the essence of the DevOps philosophy.

DevOps as the intersection of development (software engineering), technology operations and quality assurance
(QA)

Automation is important to the success of critical IT processes that are part of the life cycle
of a product, including provisioning, change management, release management, patch management,
compliance and security. Therefore, having the right technical skills is important to any "lazy but
pragmatic SysAdmin".

This book will help you learn one of the best-known IT configuration management and
infrastructure automation/orchestration tools.

DevOps Tooling
Currently, several FOSS and proprietary automation and configuration management tools exist.
Choosing one of these tools is based on several criteria.

Choice Criteria
• Performance : Among memory consumption, speed of execution and adaptation to increasingly
complex architectures, several performance criteria can help you choose the right tool.
• License : You may choose between FOSS and proprietary software. Most of the existing
tools are Open Source. It remains to be seen which FOSS license suits you: GPL,
BSD, Apache, MIT, etc.
• Programming Language : Such a tool is written in a programming language, but that does not
mean that users will manage and automate operations and servers using the same language.
For example, SaltStack is written in Python but its users mostly write Jinja and YAML.

Most tools are written in Python, Ruby, Java or Golang, but one can also find Perl, C and C++
based tools.

• Authentication Methods : A configuration management or data center automation tool is
based on a model roughly consisting of clients and a server. The authentication between a
client and a server can be automatic, encrypted, secure, fast... or not.
• Agents : Some tools use agents that must be installed on the target servers (clients), some tools
do not require remote agents, and others offer both choices.
• Scalability : A tool that grows and evolves with the enterprise must provide the technical means
and capabilities to ensure scalability across several features and extended functional scopes.
• Portability : Most if not all popular configuration management tools are compatible with *nix
systems. Some servers run on *BSD, AIX, HP-UX, Mac OS, Solaris, Windows and other OSs.
In that case you must study compatibility, as in the table below.

Tool               AIX      BSD      HP-UX    Linux    Mac OS X  Solaris  Windows  Others
Ansible            Yes      Yes      Yes      Yes      Yes       Yes      Yes      Yes
Bcfg               Partial  Yes      No       Yes      Partial   Yes      No       No
CFEngine           Yes      Yes      Yes      Yes      Yes       Yes      Yes      Yes
cdist              -        Yes      -        Yes      Yes       -        No       -
Chef               Yes      Yes      Yes      Yes      Yes       Yes      Yes      Yes
ISconf             Yes      Yes      Yes      Yes      Yes       Yes      No       No
Juju               -        -        -        Yes      -         -        -        -
LCFG               No       No       No       Partial  Partial   Partial  No       No
OCS Inventory NG   Yes      Yes      Yes      Yes      Yes       Yes      Yes      No
Opsi               No       No       No       Yes      No        No       Yes      No
PIKT               Yes      Yes      Yes      Yes      Yes       Yes      No       Yes
Puppet             Yes      Yes      Yes      Yes      Partial   Yes      Yes      Yes
Quattor            No       No       No       Yes      Partial   Yes      No       No
Radmind            Yes      Yes      No       Yes      Yes       Yes      Yes      No
Rex                -        Yes      -        Yes      Yes       Yes      Yes      No
Rudder             Yes      Partial  No       Yes      Partial   Partial  Yes      Yes
Rundeck            Yes      Yes      Yes      Yes      Yes       Yes      Yes      No
SmartFrog          No       No       Yes      Yes      Yes       Yes      Yes      No
Salt               Yes      Yes      Partial  Yes      Yes       Yes      Yes      Partial
Spacewalk          No       No       No       Yes      No        Yes      No       No
STAF               Yes      Yes      Yes      Yes      Yes       Yes      Yes      Yes
Synctool           Yes      Yes      Yes      Yes      Yes       Yes      Yes      Yes
Vagrant            -        -        -        Yes      Yes       -        Yes      -

Thanks to Wikipedia5 and its contributors for this comparison concerning the portability of the
tools above (a dash means no value is given).

• Documentation, Support and Latest Stable Release : Keep in mind that the quantity and
the quality of the official documentation, forums, groups and paid support differ from one
tool to another. A good thing to do is to check the date of the latest stable release: some tools
are no longer updated, which can cause security risks, lack of support, bugs and problems of
integration with other tools.

Popular tools
Among the popular tools we can find : Ansible, CFEngine, Puppet/Chef and SaltStack.
5
http://en.wikipedia.org/wiki/Comparison_of_open-source_configuration_management_software

Illustration 2: Popular configuration management and automation tools

Ansible : Combines multi-node deployment and ad-hoc task execution. It manages nodes over SSH
and requires Python (2.4 or later). It uses JSON and YAML for modules and state descriptions. It is
built on Python, but its modules can be written in any language. Ansible6 is used by Spotify, Twitter,
NASA and Evernote.
Puppet : Puppet7 is based on a custom declarative language to describe the system configuration.
It uses a distributed client-server paradigm and a library for configuration tasks. Puppet requires
the installation of a master server and of client agents on every system that needs to be managed. It is
used by VMware, Cisco, PayPal and Salesforce.
SaltStack : Salt8 is what the next chapters of this book will detail. It is used by Rackspace,
Photobucket, NASA, LinkedIn, Hulu, HP Cloud Services, Harvard University, CloudFlare, etc.
6
http://www.ansible.com/home
7
http://puppetlabs.com/
8
http://saltstack.com/
Getting Started With SaltStack

1         \   ^__^
2          \  (--)\_______
3             (__)\       )\/\
4                 ||----w |
5                 ||     ||

Presentation
Salt is an Open Source project: you can read and modify its source code under the Apache license.
Its source code is available on GitHub9 .
SaltStack Inc. is the company behind Salt; it was founded by Thomas Hatch, the original creator
of SaltStack.
Salt is used by Apple Inc., Rackspace, Photobucket, NASA, LinkedIn, Hulu, HP Cloud Services,
CloudFlare and other well-known companies.

SaltStack Logo

Salt fundamentally improves the way SysAdmins, integrators and DevOps configure and manage
all aspects of a modern data center infrastructure.
9
https://github.com/saltstack/salt

It takes a different approach from some existing alternatives, with advantages such as speed and adaptation
to the size of a cloud. Several recognized businesses use SaltStack to orchestrate and control their cloud
servers and infrastructure and to automate the "DevOps toolchain".
It is built on a platform that runs relatively fast while enabling remote control of distributed infrastructures,
code and data. A layer of security is established through two-way communication
between the different components of the platform (masters, minions, etc.).
The following chapters are conceived for beginners as well as experienced system administrators, DevOps
professionals and developers seeking to manage and configure multiple servers, applications and
software platforms easily.
The infrastructure to manage can be on-premise virtual machines, cloud instances (Amazon EC2,
Rackspace, etc.), containers (e.g. Docker) or bare-metal machines, as well as hosted applications and
platforms that rely on configuration files.
All you need is root access, a good understanding of the environment to manage and some basic
knowledge (command line, basic commands, Linux, etc.).
Even if it is possible to use a web interface to manage Salt, the use of the command line is always
better adapted to our needs for several reasons such as speed and efficiency. If you are familiar with
CLIs, understanding Salt commands and their syntax will be easier.
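Throughout this book, most Salt commands follow the same general pattern: a target, then a module function, then optional arguments. As a minimal sketch (the targets '*' and 'web*' are only examples):

1 # salt '<target>' <module>.<function> [arguments]
2 salt '*' test.ping
3 salt 'web*' cmd.run 'uptime'

Targeting, modules and functions are detailed in the following chapters.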
Salt is portable and works with these systems:

• Amazon Linux 2012.09


• Arch, CentOS 5/6
• Debian 6.x/7.x/8(git installations only)
• Fedora 17/18
• FreeBSD 9.1/9.2/10
• Gentoo
• Linaro
• Linux Mint 13/14
• OpenSUSE 12.x
• Oracle Linux 5/6
• Red Hat 5/6
• Red Hat Enterprise 5/6
• Scientific Linux 5/6
• SmartOS
• SuSE 11 SP1/11 SP2
• Ubuntu 10.x/11.x/12.x/13.04/13.10
• Elementary OS 0.2

According to the official website, other systems and distributions will be compatible in the future.
If you want to stay informed, just follow the development branches10 .
In the following sections, we will be using Linux most of the time. If you are using macOS, there are
no real differences for the installation, but if you are using Windows, I recommend using Ubuntu on
Windows; this is optional if you would like to keep using the Windows shell.
Windows users, please follow this guide11 to install Ubuntu and Bash.
You may find some differences in the installation of Salt according to your system: Windows, FreeBSD,
Solaris, etc. Overall, the principles and usage are the same.
You can use Salt installed on an operating system to manage other systems (A Linux to manage a
Solaris or a BSD to manage a Windows … etc.).
The installation part of this book will cover Redhat and Debian.
Be sure to check the documentation (docs.saltstack.com) for the installation and the specific use with
your particular operating system.

A brief summary
SaltStack is based on some special components:

• One or more salt-master, salt-minion and salt-syndic
• A key management system, salt-key, that allows the authentication of a salt-minion on a salt-master
• A system of states to describe configurations
• A top.sls file that calls states
• A system of grains on the minion to manage configuration data
• A system of pillars to store other data on the master (such as confidential data)
• A transport and data management system called ZeroMQ
• An event management system called reactors
• Other components like returners, outputters, etc.

A master can manage configurations or execute remote commands on one or more minions. These
operations are based on SLS files, and these files call Salt modules, grains and/or pillars.
Salt can be used either from the command line (Salt CLI) or in executable scripts (Salt API).
The various components of SaltStack will be explained in this book; some definitions depend on others,
which is why we first need a global view of Salt.
10
https://github.com/saltstack/salt/branches/all
11
https://www.howtogeek.com/249966/how-to-install-and-use-the-linux-bash-shell-on-windows-10/

Expanding Salt Use


Some provisioning and testing tools are based on Salt; you may find some or all of them interesting.
SaltStack can be interfaced with different cloud providers, integration tools, containers, etc.
Some of the possible integrations are :

Salt + Vagrant
Salty Vagrant is a Vagrant plugin that allows you to use Salt as a provisioning tool. You can use
formulas and existing configurations to create and build development environments.
The simplest way to use Salty Vagrant is to configure it to work in masterless mode. Details are
explained in the official Vagrant documentation12 .
Through this book, you will learn how to interface Vagrant with Salt in order to automate the
provisioning of virtual machines.

Salt Cloud
Salt Cloud 13 is a public cloud provisioning tool created to integrate Salt with the major cloud
infrastructure providers (AWS, Rackspace, GCP, Azure, etc.) in order to facilitate and accelerate
the provisioning process.
Salt Cloud allows managing a cloud infrastructure based on maps and profiles of virtual machines.
This means that many virtual machines in the cloud can be managed more easily and quickly.
We will see in this book how to integrate Salt with some cloud providers (e.g. Linode, AWS).

Halite
Halite14 is the client-side web interface (Salt GUI). It connects to and operates a SaltStack infrastructure.
This tool is a graphical complement, but it is not indispensable for the functioning of Salt. For best
results, Halite works with the Hydrogen release and higher versions.

Salt + Docker
Salt can be used to provision and manage Docker containers (dockerng / dockerio modules). We are going
to see together how to configure and manage Docker containers using Salt.
12
https://docs.vagrantup.com/v2/provisioning/salt.html
13
https://github.com/saltstack/salt-cloud
14
https://github.com/saltstack/halite

Conclusion
This general presentation of Salt is not enough to start using it, but it is required if you are not
familiar with concepts like configuration management and data center automation.
Installation

Introduction
Installation differs according to the operating system: Arch Linux, Debian, Fedora, FreeBSD, Gentoo,
OS X, RHEL / CentOS / Scientific Linux / Amazon Linux / Oracle Linux, Solaris, Ubuntu, Windows,
SUSE, etc.
Salt is made of masters and slaves, typically called masters and minions. In the vendor's jargon,
salt-master is the master and salt-minion is the minion. A syndic called salt-syndic is part of the
package, and its role is to connect masters together.

Dependencies
Salt has external dependencies, which are:

• Python >= 2.6, < 3.0
• msgpack-python - An efficient message exchange format
• PyYAML - Python YAML bindings
• Jinja2 - A templating engine
• MarkupSafe - Implements an XML/HTML/XHTML markup-safe string for Python
• apache-libcloud - A Python library for interacting with many of the popular cloud service providers
• Requests - An HTTP library

Depending on the chosen transport method (ZeroMQ or RAET), dependencies may vary :

• ZeroMQ:
– ZeroMQ >= 3.2.0
– pyzmq >= 2.2.0 - ZeroMQ Python bindings
– PyCrypto - The Python cryptography toolkit
– M2Crypto - "Me Too Crypto" - A Python OpenSSL wrapper
• RAET :
– libnacl - Python bindings to libsodium
– ioflo - The flo-based programming interface that RAET and salt-raet are built on
– RAET - The UDP-based transport protocol

There are other optional dependencies such as:

• mako - an optional parser for Salt states (configurable in the master settings)
• gcc - for dynamic Cython module compiling

If you use salt-cloud for cloud infrastructure management or dockerng with Docker machines,
you may have to install other packages and you will have other dependencies; this is explained later in the
book.
By default, Salt uses ZeroMQ. Its official website15 defines it as :

ZeroMQ \zero-em-queue\, \ØMQ:
• Connect your code in any language, on any platform.
• Carries messages across inproc, IPC, TCP, TIPC, multicast.
• Smart patterns like pub-sub, push-pull, and router-dealer.
• High-speed asynchronous I/O engines, in a tiny library.
• Backed by a large and active open source community.
• Supports every modern language and platform.
• Build any architecture: centralized, distributed, small, or large.
• Free software with full commercial support.

Installation
If your server has Internet access, this shell script will be convenient to automate the installation of
the stable version:

1 wget -O - http://bootstrap.saltstack.org | sudo sh

You can also install the latest version from Github :

1 curl -L https://bootstrap.saltstack.com -o install_salt.sh


2 sudo sh install_salt.sh git develop

or:

1 python -m urllib "https://bootstrap.saltstack.com" | sudo sh -s -- git develop

Note that the last version downloaded directly from Github is the development version. If
you need a version for your production systems, do not do this.
You can also try the Pythonic way:
15
http://zeromq.org/

1 python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' | \
2 sudo sh -s -- git develop

The installation script is provided by the official documentation of Salt.

Advanced Installation
The salt-bootstrap script16 offers plenty of command line options.
The installation script can be called as follows:

1 sh bootstrap-salt.sh -h

and allows 3 types of installation:

• stable (default)
• daily (specific to Ubuntu)
• git

Examples:

1 bootstrap-salt.sh
2 bootstrap-salt.sh stable
3 bootstrap-salt.sh daily
4 bootstrap-salt.sh git
5 bootstrap-salt.sh git develop
6 bootstrap-salt.sh git v0.17.0
7 bootstrap-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357

Script Options:

16
http://docs.saltstack.com/en/latest/topics/tutorials/salt_bootstrap.html

1 -h #Display this message
2 -v #Display script version
3 -n #No colors.
4 -D #Show debug output.
5 -c #Temporary configuration directory
6 -g #Salt repository URL. Default: git://github.com/saltstack/salt.git
7 -k #Temporary directory holding the minion keys which will pre-seed the master
8 -M #Also install salt-master
9 -S #Also install salt-syndic
10 -N #Do not install salt-minion
11 -X #Do not start daemons after installation
12 -C #Only run the configuration function. This option automatically bypasses any installation.
13 -P #Allow pip based installations.
14 -F #Allow copied files to overwrite existing (config, init.d, etc)
15 -U #If set, fully upgrade the system prior to bootstrapping salt
16 -K #If set, keep the temporary files in the temporary directories specified with -c and -k.
17 -I #If set, allow insecure connections while downloading any files. For example, pass '--no-check-certificate' to 'wget' or '--insecure' to 'curl'

Installation For Test


If you want to install Salt to test its various features, two possibilities are open to you.

Masterless Salt
Installing a masterless Salt means installing a salt-minion without a salt-master. Functionally, it
makes no difference if your goal is to test.
This is a great way to quickly test the basics of SaltStack. The different Salt commands
are executed using salt-call, which runs on the minion and displays the execution traces. More details about
salt-call will be addressed later.
The minion contains extended features that enable it to run in standalone mode. An autonomous
minion can be used to do a certain number of things:

• Creating a master server (salt-master) using states (salting a salt-master)
• Using salt-call commands on a system without connectivity to a master system (salt-master)
• Applying masterless states : states executed entirely from local configuration files

If this is your first time using and learning Salt, you may find some new words that you will not
understand for now, but they will be explained later.
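For example, once a masterless minion is installed and configured, you can run modules locally with salt-call (test.ping and grains.items are standard functions; the --local flag tells the minion not to look for a master):

1 salt-call --local test.ping
2 salt-call --local grains.items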

Testing SaltStack On Your Localhost


Another simple alternative is to install a master and a minion on the same machine:

1 yum install salt-master salt-minion

The idea of running both on the same machine is very useful for quickly testing all the functionalities
of Salt while keeping the master/slave or master/minion model.

Practical Installation Case


Other installation methods are detailed on the official website17 . We are going to detail the installation
on RHEL (Red Hat Enterprise Linux) and its derivative OSs (CentOS/Scientific Linux/Amazon
Linux/Oracle Linux) as well as Debian.

Using “PIP”
Since SaltStack is available on the PyPI repository, it can be installed using pip (the Python package
management system):

1 pip install salt

This will not work unless pip is installed. Assuming that your OS package manager is yum:

1 yum -y install python-pip

Once pip is installed, you can create an alias pip for pip-python:

1 $ echo 'alias pip="/usr/bin/pip-python"' >> $HOME/.bashrc


2 $ . $HOME/.bashrc

Redhat : Using RHEL EPEL


Salt has been available on EPEL (Extra Packages for Enterprise Linux) since its version 0.9.4 and thus
can be installed using yum. First of all, you must enable EPEL on RHEL.
For RHEL 5:
17
http://salt.readthedocs.org/en/latest/topics/installation/index.html

1 rpm -Uvh http://mirror.pnl.gov/epel/5/i386/epel-release-5-4.noarch.rpm

For RHEL 6:

1 rpm -Uvh http://ftp.linux.ncsu.edu/pub/epel/6/i386/epel-release-6-8.noarch.rpm

Then, install salt-master and salt-minion on their respective machines (or both of them on your
localhost).
On the master server :

1 yum install salt-master

On the minion server:

1 yum install salt-minion

Further details about RHEL installation are available in the official documentation18 .
If you want your master to start-up with the OS:

1 chkconfig salt-master on

To start it:

1 service salt-master start

Do the same thing for salt-minion :

1 chkconfig salt-minion on
2 service salt-minion start

A master or a minion service is a daemon that can be started respectively by the following
commands:

1 salt-master -d
2 salt-minion -d

In the list of system processes, you should find the process running the following command for the
master:
18
http://salt.readthedocs.org/en/latest/topics/installation/rhel.html

1 /usr/bin/python /usr/bin/salt-master

and

1 /usr/bin/python /usr/bin/salt-minion

for the minion.


To shut down the master or the minion, simply send a termination signal to the pid of the process:

1 kill -9 3139
2 kill -SIGTERM 3139

Or use the command pkill followed by the name of the process :

1 pkill salt-master
2 pkill salt-minion
3 pkill salt-syndic

You also have the possibility to use services:

1 service salt-master stop


2 # or using systemctl
3 systemctl stop salt-master

Normally the dependencies will be installed automatically; otherwise, install them manually, reinstall
Salt, or reconfigure the package dependencies using the package manager of your operating system.

Using Python Virtual Environment (virtualenv)


This method creates a Python development environment, and its purpose is to install packages and
libraries without touching the same libraries on your operating system. It is recommended to use
virtualenv when installing the development version of SaltStack.

1 git clone https://github.com/saltstack/salt

Start by enabling EPEL and installing virtualenv (the python-virtualenv package comes from EPEL):

1 wget http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
2 rpm -i epel-release-5-4.noarch.rpm
3 rm epel-release-5-4.noarch.rpm
4 yum -y install python-virtualenv

Create the development environment:

1 virtualenv /path/to/your/virtualenv

Then activate it:

1 source /path/to/your/virtualenv/bin/activate

Install SaltStack and its dependencies:

1 pip install M2Crypto


2 pip install pyzmq PyYAML pycrypto msgpack-python jinja2 psutil
3 pip install -e ./salt #The path to the salt git clone from above

Create the needed configuration files for the master and the minion:

1 mkdir -p /path/to/your/virtualenv/etc/salt
2 cp ./salt/conf/master /path/to/your/virtualenv/etc/salt/master
3 cp ./salt/conf/minion /path/to/your/virtualenv/etc/salt/minion

This is not our main topic; I invite you to learn more about using virtualenv in the on-line
documentation19 .

Debian : Installation Of SaltStack On Debian Squeeze


Start by creating a file under sources.list.d:

1 /etc/apt/sources.list.d

then add the following lines :

19
http://docs.saltstack.com/en/latest/topics/development/hacking.html

1 deb http://debian.saltstack.com/debian squeeze-saltstack main
2 deb http://backports.debian.org/debian-backports squeeze-backports main contrib non-free

You can also add them directly to:

1 /etc/apt/sources.list

Import the key used for the signature:

1 wget -q -O- "http://debian.saltstack.com/debian-salt-team-joehealy.gpg.key" | apt-key add -

and to finish :

1 apt-get update
2 apt-get install salt-master
3 apt-get install salt-syndic
4 apt-get install salt-minion

Upgrading SaltStack
Depending on the method used to install Salt, upgrading can differ from one package manager
to another.

The important point to remember in this part is that the master must be upgraded first,
followed by the minions.

The compatibility between an upgraded master and minions running an earlier version is managed
by SaltStack, so everything will work perfectly except when the update concerns security or
vulnerability patches on the minions.
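As a sketch, on a yum-based system an upgrade respecting this order could look like the following (adapt the commands to your package manager):

1 # On the master, first:
2 yum update salt-master
3 # Then upgrade the minions, for example remotely from the master:
4 salt '*' pkg.install salt-minion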
Just use :

1 salt --versions

to have the version of Salt and its dependencies.


Example :

1 Salt: 2014.7.0
2 Python: 2.7.3 (default, Mar 14 2014, 11:57:14)
3 Jinja2: 2.6
4 M2Crypto: 0.21.1
5 msgpack-python: 0.1.10
6 msgpack-pure: Not Installed
7 pycrypto: 2.6
8 libnacl: Not Installed
9 PyYAML: 3.10
10 ioflo: Not Installed
11 PyZMQ: 13.1.0
12 RAET: Not Installed
13 ZMQ: 3.2.3
14 Mako: 0.7.0
Configuration And Troubleshooting

Introduction
In the remainder of this guide, we will use a Linux virtual machine where we will install the salt-master
and the salt-minion (on the same machine). The basic concepts are explained here in such a way that
there will be practically neither functional nor conceptual differences between having the master and
the minion on the same machine and having them on two different machines.
The only differences that might exist are easily noticeable, such as the configuration of IP addresses
(interfaces).
For the configuration, we will follow those key steps:

• The configuration of the salt-master


• The configuration of the salt-minion
• The configuration of salt-key
• Functional tests
• Diagnostics and troubleshooting

Configuration files for master and minion are in the default configuration directory:

1 /etc/salt

Configuration Of A Masterless Minion


To allow a minion to execute Salt commands without a master, the file_client option must be
configured. By default it is set to remote execution mode, so that a minion consults a master
before running a command.
file_client must be set to local. In the minion configuration file /etc/salt/minion, find :

1 # Set the file client. The client defaults to looking on the master server for
2 # files, but can be directed to look at the local file directory setting
3 # defined below by setting it to local.
4 # file_client: remote

Uncomment file_client: remote and set it to local.

1 file_client: local

You can find more detail in the salt-minion section of the 4th chapter.

Configuration Of salt-master
In the configuration file /etc/salt/master the default address for the listening interface (binding) is
set to 0.0.0.0:

1 interface: 0.0.0.0

You can adapt it to your specific context.


Example: You can change it and restrict it to your localhost (127.0.0.1):

1 interface: 127.0.0.1

Make sure that your system settings are well configured; think of the hosts file, for example.
In our case we will keep the default value of the interface, which is 0.0.0.0, and the default port
values:

1 publish_port: 4505
2 ret_port: 4506

publish_port is the port used to publish commands to minions, and ret_port is the port minions use to
report back their execution results.
Same thing for file_roots, we are just keeping its default value:

1 file_roots:
2 base:
3 - /srv/salt/

The base environment is always required and must contain the top file. It is basically the entry point
for Salt and will be used if no environment is specified. This will be detailed later.

Keep in mind that it is obligatory to configure your base environment.


To deliver files to minions, Salt uses a lightweight ZeroMQ-based file server; this server runs on the
master and requires a dedicated port.
The file server is based on the environments declared in the master configuration file. Each
environment can have multiple root directories.
Example :

1 file_roots:
2 base:
3 - /srv/salt/
4 dev:
5 - /srv/salt/dev/subfolder1/
6 - /srv/salt/dev/subfolder2/
7 prod:
8 - /srv/salt/prod/subfolder1
9 - /srv/salt/prod/subfolder2

A base environment must have a top file top.sls.


For the sake of simplicity, even if we need to configure different environments, we will keep a single
environment in the master configuration file.

1 file_roots:
2 base:
3 - /srv/salt/

We will focus instead on how those environments are actually implemented, and for that we will use a
state tree for each one of them. We are going to have multiple state trees, each designating one of
our environments.
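To give an idea of what the entry point of the base environment looks like, here is a minimal top.sls sketch (the state names common and webserver are hypothetical; the top file is detailed in the next chapter):

1 # /srv/salt/top.sls
2 base:
3   '*':
4     - common
5   'web*':
6     - webserver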
For the time being, restart the master:

1 service salt-master restart

Configuration Of salt-minion
The address of the master machine should be configured in the minion configuration file /etc/salt/minion.

1 master: 127.0.0.1

Keep in mind that the value of master_port should coincide with the value of ret_port
previously configured on the master.
A minion may have a nickname; you are free to change it by uncommenting id: and changing its
value, but by default the minion id/nickname is the name of the machine (obtained with the Linux
command hostname).
This value is stored in the file:

1 /etc/salt/minion_id
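If you prefer an explicit id, a minimal sketch of the relevant lines in /etc/salt/minion would be (minion1 is just an example name):

1 master: 127.0.0.1
2 id: minion1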

Once all settings are made, it is recommended to restart the daemon:

1 service salt-minion restart

Post Installation
A very important thing to do after the installation and before the configuration is to let the master
accept the minion. To do this, you should use salt-key.
salt-key is the master's public key management system. A master can communicate with a minion
only when the public key of the latter is processed and accepted by salt-key.
SaltStack automatically handles the key generation for running minions; the server will be notified
of the presence of a minion via salt-key.
A minion can have three different states :

• Accepted : The public key of the minion is accepted and thus registered on the server.
• Unaccepted : The public key of the minion has not been accepted yet.
• Rejected : This is usually a problem of installation or configuration.

After having set up the master and the minion, type the following command to list all minions and
their associated states:

1 salt-key -L

or

1 salt-key --list all

Accepted minions are ready to receive orders from the master. Otherwise, use the same command
followed by the following parameter and the id of the minion:

1 salt-key -a <name_of_the_minion>

The last command adds a minion to the list of accepted minions.


You can also use the following parameter if you want to accept all of the configured minions:

1 salt-key -A

Or

1 salt-key --accept-all

Example:
To add a minion mynode we should type the following command:

1 salt-key -a 'mynode'

The name mynode is only used for these examples. You should replace it with the name of the minion
you are using. Remember that salt-key -L can help you list the names of all minions.

General Troubleshooting And Prerequisites Checklist


SaltStack offers many possibilities20 to diagnose a problem. We are going to see the most commonly
used methods for common problems.
Before launching the master and the minion(s), a list of prerequisites should be checked:
20
http://docs.saltstack.com/en/latest/topics/troubleshooting/

• salt-master and salt-minion are installed with all of their dependencies satisfied
• The installed Python version is compatible with your version of SaltStack
• salt-master and salt-minion are running as the root user
• Communication and execution ports are correctly configured and opened
• Any firewall between a master and a minion is taken into account (example: check your iptables rules)

The first test to do is to ping the minion from the master machine:

1 salt <minion_id> test.ping

To ping all the accepted minions use :

1 salt '*' test.ping

A True should appear before each minion that answers.
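For instance, with a single accepted minion named mynode, the output should look similar to:

mynode:
    True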

Troubleshooting salt-master
The first diagnostic step is to launch the salt-master process in debug mode :

1 salt-master -l debug

The output of this command is very important because it contains helpful information about the
majority of the problems you can encounter when using Salt.

Troubleshooting salt-minion
Like salt-master, salt-minion can also be executed in debug mode:

1 salt-minion -l debug

The output will be the set of debug, warning and error messages.

Troubleshooting Ports
For salt-master, both TCP ports 4505 and 4506 should be opened. By running debug mode,
information about those ports should be seen in the debug output.
For salt-minion, there are no specific ports to be opened but connectivity must be established :

1 nc -v -z salt.master.ip.address 4505
2 nc -v -z salt.master.ip.address 4506

When a salt-master and a salt-minion are installed on two different production machines, your servers
are most likely behind a firewall; check that it allows the connection to be established between
them.
On a Redhat system, iptables must allow the following configuration:

1 -A INPUT -m state --state new -m tcp -p tcp --dport 4505 -j ACCEPT
2 -A INPUT -m state --state new -m tcp -p tcp --dport 4506 -j ACCEPT

If you have trouble setting up your firewall using the iptables rules file, consider using “ufw”21
(Uncomplicated FireWall).
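As a quick sketch, on a master protected by ufw, the two Salt ports could be opened with commands like:

ufw allow 4505/tcp
ufw allow 4506/tcp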

Using salt-call
salt-call is a good way to debug. The command can be used for example with the states or highstates.
Example:

1 salt-call -l debug state.highstate

When salt-call is executed on a minion, no salt-master is involved during the execution. This command
allows local execution, helps during development and gives a more verbose output.
If you want even more verbosity, increase the log level.
Example:

1 salt-call -l all state.highstate

Like it is said in SaltStack official documentation about this functionality:

“The main difference between using salt and using salt-call is that salt-call is run from
the minion, and it only runs the selected function on that minion. By contrast, salt is
run from the master, and requires you to specify the minions on which to run the
command using salt’s targeting system.”
21
http://en.wikipedia.org/wiki/Uncomplicated_Firewall

A Note About Redhat Enterprise Linux 5


Salt works with Python 2.6 or 2.7, while on RHEL5 Python 2.4 is installed by default. When you
install Salt from git, it is recommended to install its dependencies via EPEL and to run the Salt
executables with “python26”, normally found here:

1 /usr/bin/python26
Basic Concepts

Python
Salt is coded in Python22 and also provides a Python client interface. Python is an object-oriented,
multi-paradigm and multi-platform programming language.
It supports structured, object-oriented, functional and imperative programming. It is strongly typed,
has automatic memory management with garbage collection and an exception handling system; in this
respect it is similar to Perl, Ruby, Scheme, Smalltalk, and Tcl.
All Python releases are Open Source23 . Historically, most but not all Python releases have also been
GPL compatible.
It runs on most computing platforms, from mainframes and supercomputers to Windows, Unix,
GNU/Linux, Mac OS, Android and iOS, and it can also run on top of Java or .NET. It is designed to maximize
programmer productivity by providing high-level tools and a simple syntax.
It is also appreciated by the educators who find it a language with syntax that is clearly separated
from the low-level mechanisms, a language that enables easy introduction to basic programming
concepts.
Python is also widely used in scripting and operations (ops). Many DevOps engineers and system
administrators adopted it as their primary scripting language because it has a large standard
library, providing tools suitable for many types of tasks.
Salt is coded in Python and offers the power and flexibility that this language has. Even if Salt can
be used only from the command line, you can also use its Python interface and call all its commands
from Python scripts. Mastering Python is not obligatory but it is recommended to be at least initiated
with this programming language.
A “hello world” in Python can be written this way:

1 #!/usr/bin/env python
2 print ( 'hello world !')

One of the advantages of working with Python is its community support, its active community and
its large number of users.
22
http://www.python.org
23
http://opensource.org/

YAML : The Human Readable Serialization Language


YAML is a recursive acronym for “YAML Ain’t Markup Language”. It is a human-readable data serialization
language (Unicode based). It takes some concepts from other languages such as XML, or the email
format as documented by RFC 2822.
The idea behind YAML is that any data can be represented by a combination of lists, tables (hashes)
and scalar data. YAML describes these types of data (YAML representations) and provides a syntax
to describe the data as a stream of characters (the YAML stream).
A computer application turns a YAML stream into a YAML representation with a “load” operation. The
representation is turned back into a stream with a “dump” operation. This may remind you of JSON
(load & dump).
The syntax of YAML is relatively simple and effective: less verbose than XML, less compact than CSV,
and designed to be as readable by humans as possible.
YAML maps easily onto the native data types of common high-level languages (Python, Ruby, Perl,
etc.).
YAML borrows some notations from other languages (JSON, XML, SDL ..):

• Comments start with the hash sign (#) and extend to the end of the line
• It is possible to include JSON syntax inside a YAML document
• Lists of items are denoted by a dash (-) followed by a space - one item per line
• Mappings (hashes) are represented as “key: value” pairs - one pair per line
• Scalars (strings) can be enclosed in double quotes (“) or single quotes (‘)
• Several documents can live together in a single file, separated by three hyphens (---); three dots (...)
can optionally mark the end of a document within a stream
• The indentation, with spaces, shows a tree structure
• Tab characters are never allowed as indentation
• etc.

YAML is easier for a human to read than JSON, and it is more focused on data serialization and less on
document markup in comparison with XML.
Example:

1 myApp 1.0:
2 end_of_maintenance: 2014-01-01
3 is_stable: true
4 release_manager: "James Dean"
5 description:
6 This a stable version
7 latest_beta: ~
8 latest_minor: 1.0.17
9 archives: { source1: [zip, tgz], source2: [zip, tgz] }
10
11 myApp 1.1:
12 end_of_maintenance: 2014-06-01
13 is_stable: true
14 release_manager: 'Chuck Norris'
15 description:
16 Chuck Norris can program a computer to breath
17 latest_beta: null
18 latest_minor: 1.2.4
19 archives:
20 source1:
21 - zip
22 - tgz
23 source2:
24 - zip
25 - tgz

phpMyAdmin allows you to export a database into a YAML file.

Jinja: A Templating Language


Jinja is a text-based template language and one of the most used template engines for Python.
Jinja is unicode-based and it runs on a wide range of Python versions from 2.4 to current versions
including Python3.
It is similar to the Django template engine but provides Python-like expressions to create templates.
According to Wikipedia:

It is a text-based template language and thus can be used to generate any markup as
well as source code. It is licensed under a BSD License. The Jinja template engine allows

customization of tags, filters, tests, and globals. Also, unlike the Django template engine,
Jinja allows the template designer to call functions with arguments on objects. Jinja,
like Smarty, also ships with an easy-to-use filter system similar to the Unix pipeline.

Let’s move to a basic example of use:

from jinja2 import Template

tmpl = Template(u'''\
<!DOCTYPE html>
<html>
<head>
<title>{{ variable|escape }}</title>
</head>
<body>
{%- for item in item_list %}
{{ item }}{% if not loop.last %},{% endif %}
{%- endfor %}
</body>
</html>
''')

print tmpl.render(
    variable = 'Value with <unsafe> data',
    item_list = [1, 2, 3, 4, 5, 6]
)

The template above, written in Jinja, will give the following HTML code:

<!DOCTYPE html>
<html>
<head>
<title>Value with &lt;unsafe&gt; data</title>
</head>
<body>
1,
2,
3,
4,
5,
6
</body>
</html>

Jinja is quite easy to use, and using it with SaltStack is not complicated; this is one of the strengths
of Jinja and therefore of Salt.
SaltStack uses this templating engine in files like state files.
The advantage of using Jinja is its control structures, such as conditionals and loops.
These structures (if/then, for ... in, range ... etc.) make repetitive or conditional templating tasks
accessible, fast and easy.
In a state file (SLS: SaltStack state), we can employ both YAML and Jinja.
Below is an SLS file:

1 /home/user/project/integ/myApp/4.2.2/myScript.py:
2 file.managed:
3 - source: salt://integ/myApp/4.2.2/myScript.py
4 - mode: 644

In our case, the destination_dir grain is already set to /home/user/project in our grains file:

1 destination_dir: /home/user/project

With Jinja, we can use the next code:

1 {{ grains['destination_dir'] }}/integ/myApp/4.2.2/myScript.py:
2 file.managed:
3 - source: salt://integ/myApp/4.2.2/myScript.py
4 - mode: 644

The output is the same, since Jinja templating engine will render the code between {{ and }} and will
replace it by /home/user/project.

Don’t worry if you haven’t understood the whole code above; you will. For the moment,
just focus on how to use Jinja.
The Jinja loading utility, which belongs to the salt.renderers.jinja module, is the tool that reads and
parses Jinja template files/code and processes it to produce the final value.
Since Salt formulas are pre-written Salt states, Jinja is also used to write formula states, where it is
very helpful. A short loop example is sketched below.
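As a minimal sketch (the package names are purely illustrative), a Jinja for loop can generate several states at once in an SLS file:

{% for pkg in ['git', 'curl', 'vim'] %}
{{ pkg }}:
  pkg.installed
{% endfor %}

When rendered, this produces one pkg.installed state per package in the list.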

The Master: salt-master


SaltStack is composed of several modules and two of the most important are salt-master and salt-
minion.

salt-master, as its name suggests, is the master; salt-minion is the listening target. Generally,
at least in this documentation, we use salt-master or salt-minion to denote a separate machine (physical
or virtual, etc.). The case where both are installed on the same machine (our localhost) is
an exception used for testing purposes, and it changes almost nothing at a functional level. That
is why we always consider each salt-master or salt-minion to be a machine.

A basic topology of a master and three minions

salt-master is the master daemon that controls one or more minions and can communicate with
other masters (this will be clearer for you in the salt-syndic section).
By default, salt-master is configured using the file:

1 /etc/salt/master

Of course you can change the location of the configuration file, but we recommend keeping the default
for the moment to avoid any confusion.
salt-master is launched using the command:

1 salt-master [options]

But you can simply use:



1 salt-master

or

1 salt-master -d

The latter command will start the master as a daemon that will launch multiple salt-master
processes, even if ‘worker_threads’ is set to ‘1’.
You can use one of the following options (provided by SaltStack in the official on-line documenta-
tion):

1 --version
2 #Print the version of Salt that is running.
3 --versions-report
4 #Show program's dependencies and version number, and then exit
5 -h, --help
6 #Show the help message and exit
7 -c CONFIG_DIR, --config-dir=CONFIG_dir
8 #The location of the Salt configuration directory. This directory contains the c\
9 onfiguration files for Salt master and minions. The default location on most sys\
10 tems is /etc/salt.
11 -u USER, --user=USER
12 #Specify user to run salt-master
13 -d, --daemon
14 #Run salt-master as a daemon
15 --pid-file PIDFILE
16 #Specify the location of the pidfile. Default: /var/run/salt-master.pid
17 -l LOG_LEVEL, --log-level=LOG_LEVEL
18 #Console logging log level. One of all, garbage, trace, debug, info, warning, er\
19 ror, quiet. Default: warning.
20 --log-file=LOG_FILE
21 #Log file path. Default: /var/log/salt/master.
22 --log-file-level=LOG_LEVEL_LOGFILE
23 #Logfile logging log level. One of all, garbage, trace, debug, info, warning, er\
24 ror, quiet. Default: warning.

Generally, the master starts as a daemon except when it is launched in debugging mode:

1 salt-master -l debug

The Minion: salt-minion


salt-minion is the module that runs on the machine to be managed (our target). It receives orders from
one or more masters, executes them and then communicates back the status of the execution
and other details.

A masterless minion lets you use Salt’s configuration management for a single
machine, but not to execute remote commands on, or communicate with, other machines.
Note that a masterless minion is rich in functionality; you can even use it to stand up a salt-master
via states, or what we call “Salting a Salt master”.
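For instance, on a minion configured with file_client: local (see the configuration chapter), states can be applied locally without any master, using a command like:

salt-call --local state.highstate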
Generally a salt-minion is responsible for command execution. But to do that, it must be in the
accepted state, which means that the salt-master can communicate securely with it. The latter uses
a key system to communicate safely with one or more minions.
The default configuration file of a minion is in the following file:

1 /etc/salt/minion

Like the master, a minion is launched via the next command:

1 salt-minion [options]

Available options are:

1 --version
2 # Print the version of Salt that is running.
3 --versions-report
4 # Show program's dependencies and version number, and then exit
5 -h, --help
6 # Show the help message and exit
7 -c CONFIG_DIR, --config-dir=CONFIG_dir
8 #The location of the Salt configuration directory. This directory contains the c\
9 onfiguration files for Salt *master* and minions. The default location on most s\
10 ystems is /etc/salt.
11 -u USER, --user=USER
12 #Specify user to run salt-minion
13 -d, --daemon

14 #Run salt-minion as a daemon


15 --pid-file PIDFILE
16 #Specify the location of the pidfile. Default: /var/run/salt-minion.pid
17 -l LOG_LEVEL, --log-level=LOG_LEVEL
18 #Console logging log level. One of all, garbage, trace, debug, info, warning, er\
19 ror, quiet. Default: warning.
20 --log-file=LOG_FILE
21 #Log file path. Default: /var/log/salt/minion.
22 --log-file-level=LOG_LEVEL_LOGFILE
23 #Logfile logging log level. One of all, garbage, trace, debug, info, warning, er\
24 ror, quiet. Default: warning.

Generally a minion is executed as a background daemon:

1 salt-minion -d

except the case where the debugging mode is activated:

1 salt-minion -l debug

Expanding Salt with salt-syndic


salt-syndic is the module that allows you to build and expand Salt topologies. It is used to connect
multiple masters together and runs on a master machine. A salt-syndic listening to a master can
control the minions attached to that master (the master on which the syndic is running).
salt-syndic transforms the default SaltStack topology into other, more advanced forms, better suited to
a specific architecture or a particular need.
The syndic is a separate daemon to be installed and launched on the master machine. Every syndic
is controlled by a higher-level master.
Starting a salt-syndic background daemon is done in the same way as starting a master or a minion
daemon.
The salt-syndic is configured using the same file as the master:

1 cat /etc/salt/master|grep syndic


2 # The Salt syndic is used to pass commands through a master from a higher
3 # master. Using the syndic is simple, if this is a master that will have
4 # syndic servers(s) below it set the "order_masters" setting to True, if this
5 # is a master that will be running a syndic daemon for pass-through the
6 # "syndic_master" setting needs to be set to the location of the master server
7 # masters' syndic interfaces.
8 # If this master will be running a salt syndic daemon, syndic_master tells
9 #syndic_master: masterofmaster
10 #syndic_master_port: 4506
11 # PID file of the syndic daemon:
12 #syndic_pidfile: /var/run/salt-syndic.pid
13 # LOG file of the syndic daemon:
14 #syndic_log_file: syndic.log

Just uncomment the part dedicated to salt-syndic configuration and note that:

• syndic_master is the IP address/hostname of the master of masters
• syndic_master_port is the ret_port of the master of masters
• syndic_log_file is the path to the log file
• syndic_pidfile is the path to the pid file

A basic/standard example of a Salt topology using salt-syndic will have a top master (a machine
where only salt-master is installed) and lower-level machines where the master and the syndic
should be installed and configured. The machines/nodes sitting at the bottom of this hierarchical
topology need neither a master nor a syndic; they only need a configured minion.
Through the topology described above, when a command is issued from a higher-level master, the
listening syndic(s) - those having a lower level - will receive those orders and propagate them to
their minions and listening syndic(s) having a lower-level.

Returned data and execution statuses are generated and dispatched back along the same
hierarchy, in reverse.
In some cases, with large infrastructure topologies, the feedback of a connected minion, master or
syndic can take time to roll up; this may cause a timeout for the execution even if the concerned
command was actually executed, since the response travelling up through the infrastructure may take
time to arrive. To fix this, just add the following configuration to the master configuration file:

1 syndic_wait: 1

Another important configuration variable needs to be set to True, the official Salt documentation24
says that :

The master that the syndic connects to sees the syndic as an ordinary minion, and
treats it as such. The higher level master will need to accept the syndic’s minion key
like any other minion. This master will also need to set the order_masters value in the
configuration to True. The order_masters option in the configuration on the higher level
master is very important, to control a syndic extra information needs to be sent with
the publications, the order_masters option makes sure that the extra data is sent out.

A SaltStack topology using salt-syndic

Key Management with salt-key


salt-key is the system responsible for managing a public key server used for authentication.
24
http://docs.saltstack.com

Example: A salt-master communicates with a salt-minion only when this key system allows it. The
illustration below shows how a salt-minion can be accepted through the Salt key process.

An accepted salt-minion

A salt-minion is not always accepted by default; it may also be unaccepted or rejected.



3 possible statuses for a salt-minion

salt-key is launched using the following command:

1 salt-key [options]

The following options can help you to administer salt-key :

1 --version
2 # Print the version of Salt that is running.
3 --versions-report
4 # Show dependencies and version number
5 -h, --help
6 # Show the help message
7 -c CONFIG_DIR, --config-dir=CONFIG_dir
8 # The location of the Salt configuration directory. This directory contains the \
9 configuration files for Salt master and minions. The default location is /etc/sa\
10 lt.
11 -q, --quiet
12 # Suppress output

13 -y, --yes
14 # Answer 'Yes' to all questions presented, defaults to False
15 --log-file=LOG_FILE
16 # Log file path. Default: /var/log/salt/minion.
17 --log-file-level=LOG_LEVEL_LOGFILE
18 # Logfile logging log level. One of all, garbage, trace, debug, info, warning, e\
19 rror, quiet.
20 # Default: warning.
21 --out
22 # Pass in an alternative outputter to display the return of data.
23 # This outputter can be any of the available outputters: grains, highstate, json\
24 , key, overstatestage, pprint, raw, txt, yaml .
25 # Some outputters are formatted only for data returned from specific functions; \
26 for instance, the grains outputter will not work for non-grains data.
27 # If an outputter is used that does not support the data passed into it, then Sa\
28 lt will fall back on the pprint outputter and display the return data using the \
29 Python pprint standard library module.
30 --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
31 # Print the output indented by the provided value in spaces. Negative values dis\
32 able indentation. Only applicable in outputters that support indentation.
33 --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
34 # Write the output to the specified file.
35 --no-color
36 # Disable all colored output
37 --force-color
38 # Force colored output
39 -l ARG, --list=ARG
40 # List the public keys. The args pre, un, and unaccepted will list unaccepted/un\
41 signed keys. acc or accepted will list accepted/signed keys. rej or rejected wil\
42 l list rejected keys. Finally, all will list all keys.
43 -L, --list-all
44 # List all public keys. (Deprecated: use --list all)
45 -a ACCEPT, --accept=ACCEPT
46 # Accept the specified public key (use --include-all to match rejected keys in a\
47 ddition to pending keys). Globs are supported.
48 -A, --accept-all
49 # Accepts all pending keys.
50 -r REJECT, --reject=REJECT
51 # Reject the specified public key (use --include-all to match accepted keys in a\
52 ddition to pending keys). Globs are supported.
53 -R, --reject-all
54 # Rejects all pending keys.

55 --include-all
56 # Include non-pending keys when accepting/rejecting.
57 -p PRINT, --print=PRINT
58 # Print the specified public key.
59 -P, --print-all
60 # Print all public keys
61 -d DELETE, --delete=DELETE
62 # Delete the specified key. Globs are supported.
63 -D, --delete-all
64 # Delete all keys.
65 -f FINGER, --finger=FINGER
66 # Print the specified key's fingerprint.
67 -F, --finger-all
68 # Print all keys' fingerprints.
69 --gen-keys=GEN_KEYS
70 # Set a name to generate a keypair for use with Salt
71 --keysize=KEYSIZE
72 # Set the keysize for the generated key, only works with the '--gen-keys' option\
73 , the key size must be 2048 or higher, otherwise it will be rounded up to 2048. \
74 The default is 2048.

Some of those commands (available on the official SaltStack documentation) will be used in the next
chapter.

There will be no possible communication between a master and a minion unless the salt-key
system allows it. Be sure that all minions are accepted, before starting anything else.

Salt Runners Interface: salt-run


Consider salt-run as your interface to Salt Runners. These are executable modules implemented
in SaltStack. Using a runner is as simple as:

1 salt-run RUNNER

A good example of using this is when you want to list your minions’ states (up or down).
Type the next command to show all minions and the status of each one:

1 salt-run manage.status
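As a sketch, assuming a single connected minion named mynode and no minions down, the output would look similar to:

down:
up:
    - mynode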

To show only the minions with an “up” status:

1 salt-run manage.up

And for showing all minions with “down” status:

1 salt-run manage.down

Another useful case is clearing the Salt cache (this can also be helpful for
debugging/troubleshooting).

1 salt-run cache.clear_all

This is the official list of SaltStack Runners:


• cache: Return cached data from minions
• cloud: The Salt Cloud Runner
• doc: A runner module to collect and display the inline documentation from the various module types
• drac: Manage Dell DRAC from the Master
• error: Error generator to enable integration testing of salt runner error handling
• f5: Runner to provide F5 Load Balancer functionality
• fileserver: Directly manage the Salt fileserver plugins
• git_pillar: Directly manage the salt git_pillar plugin
• http: Module for making various web calls
• jobs: A convenience system to manage jobs, both active and already run
• launchd: Manage launchd plist files
• lxc: Control Linux Containers via Salt
• manage: General management functions for salt, tools like seeing what hosts are up
• mine: A runner to access data from the salt mine
• nacl: This runner helps create encrypted passwords that can be included in pillars
• network: Network tools to run from the Master
• pagerduty: Runner Module for Firing Events via PagerDuty
• pillar: Functions to interact with the pillar compiler on the master
• queue: General management and processing of queues
• sdb: Runner for setting and querying data via the sdb API on the master
• search: Runner frontend to the search system
• state: Execute overstate functions
• survey: A general map/reduce style salt runner for aggregating results returned by several different minions
• test: This runner is used only for test purposes and serves no production purpose
• thin: The thin runner is used to manage the salt thin systems
• virt: Control virtual machines via Salt
• winrepo: Runner to manage the Windows software repo

The Data Aspect of Salt: Grains


“Salt grains of data” is the best way to describe the main role of Salt grains.

The grains module represents the “data” aspect of SaltStack.


When the minion daemon starts, it loads the data from Salt grains. Data stored in grains is static. Salt
grains can provide various information such as the operating system of a minion, the kernel or the
disk size, etc.
Just after the installation, the configuration and the start-up of a salt-minion, we have the possibility
to directly manipulate this module and handle the available data.
A grain is a pair of a key and its value. The next command shows how to add a key and assign
its value:

1 salt '*' grains.setval <key> <value>

Example : We want to add the key env_name (for environment name) and give it the value of prod
(for production).

1 salt '*' grains.setval env_name prod

This will add the key/value pair to the grains configuration file. We can do a test:

1 salt '*' grains.get <key>

and for the last example:



1 salt '*' grains.get env_name

You will see the ‘env_name’ value printed out:

1 mynode:
2 - prod

If you need to add multiple grains at once:

1 salt '*' grains.setvals "{'key1':'value1','key2':'value2','key3':'value3'}"

We can also delete a key and therefore its value:

1 salt '*' grains.delval <key>

To delete ‘env_name’ :

1 salt '*' grains.delval env_name

Salt can also handle dictionaries and lists.

Dictionaries and lists in Python are powerful and simple data structures. Python offers
multiple functionalities to easily manage them.
Creating and adding elements to a list is simple :

1 salt '*' grains.append list1 element_a;


2 salt '*' grains.append list1 element_b;
3 salt '*' grains.append list1 element_c;
4
5 salt '*' grains.append list2 element_d;
6 salt '*' grains.append list2 element_e;
7 salt '*' grains.append list2 element_f;

grains.append is like Python’s list.append; it will create the list if it does not exist.
What we have added can be viewed using grains.get:

1 salt '*' grains.get list1


2 #The output
3 mynode:
4 - element_a
5 - element_b
6 - element_c

To remove an element from the list:

1 salt '*' grains.remove list1 element_c
2
3 #The output
4 mynode:
5 list1:
6 - element_a
7 - element_b

Creating a dictionary of data can be done using setval:

1 salt '*' grains.setval key "{'subkey_a':'value_a','subkey_b':'value_b'}"


2
3 #The output
4 mynode:
5 key:
6 subkey_a:
7 value_a
8 subkey_b:
9 value_b

Once all of those elements are added, we can check all of the grains we have on our minion.
Remember that every minion already has default grains (os, cpu architecture, etc.).
To view all of them, just type:

1 salt '*' grains.ls

This will print the list of grain keys for which the minion knows the values.
Example:

1 mynode:
2 - biosreleasedate
3 - biosversion
4 - cpu_flags
5 - cpu_model
6 - cpuarch
7 - defaultencoding
8 - defaultlanguage
9 - domain
10 - fqdn
11 - fqdn_ip4
12 - fqdn_ip6
13 - gpus
14 - host
15 - id
16 - ip_interfaces
17 - ipv4
18 - ipv6
19 - kernel
20 - kernelrelease
21 - localhost
22 - manufacturer
23 - master
24 - mem_total
25 - nodename
26 - num_cpus
27 - num_gpus
28 - os
29 - os_family
30 - osarch
31 - oscodename
32 - osfinger
33 - osfullname
34 - osmajorrelease
35 - osrelease
36 - path
37 - productname
38 - ps
39 - pythonpath
40 - pythonversion
41 - saltpath
42 - saltversion

43 - saltversioninfo
44 - serialnumber
45 - server_id
46 - shell
47 - virtual

The output above is only the list of keys; to print out their values, you should type:

1 salt '*' grains.items

Here is an example of all the grains (keys, values) that the minion mynode has:

1 mynode
2 biosreleasedate: 03/05/2009
3 biosversion: VirtualBox
4 cpu_flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat p\
5 se36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm constant_tsc up rep_good pni\
6 monitor ssse3 lahf_lm
7 cpu_model: Intel(R) Celeron(R) CPU G530 @ 2.40GHz
8 cpuarch: x86_64
9 defaultencoding: UTF8
10 defaultlanguage: en_US
11 domain: localdomain
12 fqdn: mynode
13 fqdn_ip4:
14 fqdn_ip6:
15 gpus:
16 model:
17 VirtualBox Graphics Adapter
18 vendor:
19 unknown
20 host: mynode
21 id: mynode
22 ip_interfaces: {'lo': ['127.0.0.1'], 'eth0': ['10.0.0.1']}
23 ipv4:
24 10.0.0.1
25 127.0.0.1
26 ipv6:
27 ::1
28 fe88::a00:72ff:fee8:64b
29 kernel: Linux
30 kernelrelease: 2.6.32-431.1.2.el6.x86_64

31 localhost: mynode
32 master: 10.0.254.254
33 mem_total: 1500
34 nodename: mynode
35 num_cpus: 1
36 num_gpus: 1
37 os: ScientificLinux
38 os_family: RedHat
39 osarch: x86_64
40 oscodename: Carbon
41 osfinger: Scientific Linux-6
42 osfullname: Scientific Linux
43 osmajorrelease:
44 6
45 4
46 osrelease: 6.4
47 path: /sbin:/usr/sbin:/bin:/usr/bin
48 productname: VirtualBox
49 ps: ps -efH
50 pythonpath:
51 /usr/bin
52 /usr/lib64/python26.zip
53 /usr/lib64/python2.6
54 /usr/lib64/python2.6/plat-linux2
55 /usr/lib64/python2.6/lib-tk
56 /usr/lib64/python2.6/lib-old
57 /usr/lib64/python2.6/lib-dynload
58 /usr/lib64/python2.6/site-packages
59 /usr/lib/python2.6/site-packages
60 /usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info
61 pythonversion: 2.6.6.final.0
62 saltpath: /usr/lib/python2.6/site-packages/salt
63 saltversion: 0.17.4
64 saltversioninfo:
65 0
66 17
67 4
68 serialnumber: 0
69 server_id: 332009357
70 shell: /bin/sh
71 virtual: VirtualBox

As you can see, salt-minion knows a lot of information about the system and the environment on
which it is installed, like the Python version, the serial number or the IP addresses of the different
machine interfaces.
If you are interested in viewing only the machine’s interfaces, use ip_interfaces (which figures in the
grains.items output) like this:

1 salt '*' grains.get ip_interfaces

This will give you only the IP interface data. Only requested data will be printed out:

1 ip_interfaces:
2 eth0:
3 - 10.0.0.1
4 lo:
5 - 127.0.0.1

It is also possible to refine your output like getting only the IP of eth0:

1 salt '*' grains.get ip_interfaces:eth0

The IP 10.0.0.1 will be displayed as a result to your request.


Another feature is combining two requests, like getting kernel and ip_interfaces information:

1 salt '*' grains.item ip_interfaces kernel

Result:

1 mynode:
2 ip_interfaces:
3 eth0:
4 - 10.0.0.1
5 lo:
6 - 127.0.0.1
7 kernel:
8 Linux

Adding, modifying or deleting grains can be made directly on the default configuration file:

1 /etc/salt/grains

The configuration file is written in YAML. Neglecting YAML syntax rules may lead to
some problems.
You can add strings, lists or dictionaries. Open the grains file with your
favorite editor and add:

1 key: value

The last element could be a list:

1 my_list:
2 - my_first_value
3 - my_second_value

It could be a dictionary:

1 my_dict:
2 my_first_key: my_first_value
3 my_second_key: my_second_value

By using YAML syntax you can create complex data structures with lists and dicts and of course int,
string and boolean types, etc.
Here is an example:

1 include:
2 - apache
3
4 mod-php:
5 pkg.installed:
6 - name: libapache2-mod-php
7 - require:
8 - pkg: apache
9
10 a2enmod php:
11 cmd.run:
12 - unless: ls /etc/apache2/mods-enabled/php.load
13 - require:

14 - pkg: mod-php
15 - watch_in:
16 - module: apache-restart
17
18 /etc/php/apache2/php.ini:
19 file.managed:
20 - source: salt://prog/php_ini
21
22 service.running:
23 - name: {{ apache.service }}
24 - enable: True

Notice that Jinja syntax is used near the name of the running service. We are going to use
it again in many of the following examples; if you are not familiar with it, following the examples will
normally be enough to understand its basics. If it is not the case, please refer to the official
documentation25 . If you want to master the use of Jinja, I recommend using it with Flask26

Targeting grains
In the previous examples we used SaltStack commands with the asterisk ‘ * ’. The ‘*’ matches all
machines or minions that the salt-master has in its base. It is possible to target the minions on
which Salt commands will be executed.
When we type the next command:

1 salt '*' grains.item kernel

SaltStack, or more precisely the salt-master, will get the kernel name for all of the minions that it knows,
in other words all of the accepted salt-minions, which means the minions that respond with True when
this command is executed:

1 salt '*' test.ping

In all of the previous examples, since we are using one master and one minion (mynode), executing the
last command will give the following output:

25
http://jinja.pocoo.org/docs/dev/
26
http://flask.pocoo.org/

1 mynode:
2 kernel:
3 Linux

Once another minion is added (following the same configuration and process as the first minion),
the output will be different and will show other machines.

1 mynode:
2 kernel:
3 Linux
4
5 another_node:
6 kernel:
7 Darwin

In this section, we call this process : targeting.


SaltStack offers many other targeting options, apart from using ‘*’ or the name of the minion:

1 salt '*' test.ping


2 salt mynode test.ping
3 salt another_node test.ping

You can also target minions having specific properties (grains data), like the OS architecture, the OS
family, the kernel name or simply the name of the minion, etc.

#Targeting minions having an i386 processor
salt -G 'osarch:i386' test.ping

#Targeting Debian machines
salt -G 'os_family:Debian' test.ping
#Or
salt -G 'os_family:Deb*' test.ping

#Targeting the 'mynode' and 'another_node' nodes
salt '*node' test.ping

#Targeting 'mynode'
salt 'm?node' test.ping
#Or
salt '?ynode' test.ping
# ..etc

# If we had the following nodes: 'mynode_1', 'mynode_2' and 'mynode_3', to target them we may use:
salt 'mynode_[1-3]' test.ping

Every SysAdmin knows that regular expressions (Regex) are necessary to “survive some
situations”. When it comes to targeting, regular expressions could be helpful. If you are not familiar
with Regex, it is important to begin learning them. Note that SaltStack uses PCRE: Perl Compatible
Regular Expressions.
Other types of targeting, like compound matchers, are provided.
The following table lists the compound matchers:

Letter   Targeting Type       Example
G        Grains glob          G@os:Debian
E        PCRE Minion ID       E@webapp\d+.(dev|staging|prod)
P        Grains PCRE          P@os:(RedHat|Ubuntu|CentOS)
L        List of minions      [email protected],another.node.example.com
I        Pillar glob          I@pdata:foobar
S        Subnet/IP address    [email protected]/24 or [email protected]
R        Range cluster        R@%foo.bar

Here are some additional examples:

1 # "-C" is used with compound matchers


2 #
3 #Targeting all minions having amd64 architecture
4 salt -C '* and not G@osarch:amd64 test.ping'
5 #Targeting all minions except "mynode"
6 salt -C '* and not mynode' test.ping
7 #Targeting "mynode" and all Debian machines or "my_minion_\d.*"
8 salt -C 'mynode and G@os:Debian or E@my_minion_\d.*' test.ping

Note that all of those targeting expressions and methods could be used in the ‘top.sls’ file.
The top file will be explained hereinafter.

Extending The Minion Features With saltutil


SaltStack has implemented extended features on minions as standalone operations: the ability to
self-configure, and the self-management of its modules to automate updates. That is why a minion does
not always need a master.

Among the features saltutil allows is purging the cache:

1 salt '*' saltutil.clear_cache

Or executing a command on the salt-minion as if it were executed from a master:

1 salt '*' saltutil.cmd

Formatting Outputs with outputter


Salt uses a standard display format to display a list of keys or a grains configuration. The outputter
feature allows you to format the output produced by a Salt command.
To re-format the output produced by the next command :

1 salt '*' grains.item ip_interfaces kernel

we just need to add “--out=<format>” or “--output=<format>” at the end of the command:

1 salt '*' grains.item ip_interfaces kernel --out=json

The output will be displayed in JSON format:

1 {
2 "mynode": {
3 "kernel": "Linux",
4 "ip_interfaces": {
5 "lo": [
6 "127.0.0.1"
7 ],
8 "eth0": [
9 "10.0.0.1"
10 ]
11 }
12 }
13 }
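Similarly, and as a sketch of what to expect, the same kind of data can be displayed in YAML:

salt '*' grains.item kernel --out=yaml

which would output something like:

mynode:
  kernel: Linux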

Other display formats can be used. According to the official documentation, Salt uses this list of output
modules:

• grains: Special outputter for grains.
• highstate: Outputter for displaying results of state runs. The return data from the highstate command
is a standard data structure which is parsed by the highstate outputter to deliver a clean and readable
set of information about the highstate run on minions.
• json_out: Display return data in JSON format. The output format can be configured in two ways: using
the --out-indent CLI flag and specifying a positive integer, or a negative integer to group JSON from
each minion on a single line.
• key: Display salt-key output. The salt-key command makes use of this outputter to format its output.
• nested: Recursively display nested data. This is the default outputter for most execution functions.
• newline_values_only: Display values only, separated by newlines.
• no_out: Display no output.
• no_return: Display output for minions that did not return. This outputter is used to display notices about
which minions failed to return when a salt function is run with -v or --verbose.
• overstatestage: Display clean output of an overstate stage. This outputter is used to display overstate
stages, and should not be called directly.
• pprint_out: Python pretty-print (pprint). The python pretty-print system was once the default outputter.
• raw: Display raw output data structure. This outputter simply displays the output as a Python data
structure, by printing a string representation of it.
• txt: Simple text outputter. The txt outputter has been developed to make the output from shell commands
on minions appear as they do when the command is executed on the minion.
• virt_query: virt.query outputter.
• yaml_out: Display return data in YAML format. This outputter defaults to printing in YAML block mode
for better readability.

Describing Configurations and States


A Salt “state” is a pre-configured state of an operation/task supported by Salt (like a configuration).
Generally noted SLS, this term can also refer to SLS files.
A Salt state is based on descriptive files (SLS) containing information and structured data. These
files are written in YAML by default and contain data structures like dictionaries, lists, character
strings or numbers.
SLS files are assigned to hosts using the top.sls file.

The Top File


The role of this file is to map the SLS modules (Salt states) to minions. Without it a salt-master will
not know which state configuration to apply on a given minion.
As we have already set it in the master configuration file, the variable file_roots contains:

1 file_roots:
2 base:
3 - /srv/salt/

When using Salt, some rules should be respected, like the obligation to set a top.sls file in the base
environment. This file must be created at:

1 /srv/salt/top.sls

Why?
When using Salt with the default configuration, a single environment called base is set up by default:

1 #The master configuration file


2 file_roots:
3 base:
4 - /srv/salt

That’s why the top.sls file should be under the base environment “file_roots”:

1 /srv/salt/

The top.sls will contain something similar to:

1 base:
2 '*':
3 - <SLS_name>

Next is an example of an SLS file:

1 /srv/salt/vim.sls

If we want to assign the latter to all minions (‘*’) in the base environment, we need to have a top.sls
file containing:

1 base:
2 '*':
3 - vim

Note that you don’t need to write the whole name of the SLS file, writing “vim” instead of
vim.sls is sufficient.
When changing the path of vim.sls, think of changing it in the top.sls file also.
If the vim.sls path is:

1 /srv/salt/salt_vim/vim.sls

The top.sls file should be modified:



1 base:
2 '*':
3 - salt_vim/vim

Since our base environment is /srv/salt we don’t need to type the whole path of vim.sls,
which is /srv/salt/salt_vim/vim.sls. You can just use salt_vim/vim, cf. line 3 in the last example.
The content of top.sls file depends on the declaration of the environments in the salt-master
configuration file.
If you want to declare other environments different from the base environment like in the next
example:

1 file_roots:
2 base:
3 - /srv/salt/base
4 development:
5 - /srv/salt/development
6 pre-production:
7 - /srv/salt/pre-production
8 production:
9 - /srv/salt/production

the top.sls file should reference all of the different declared environments:

1 base:
2 '*':
3 - vim
4
5 development:
6 '*dev*':
7 - dev_sls
8
9 pre-production:
10 '*preprod*':
11 - prepprod_sls
12
13 production:
14 '*production*':
15 - prod_sls

The top.sls file will be called during the execution of a “highstate” and this is by the way its most
important purpose.

1 salt '*' state.highstate

Salt will recursively look for top.sls files in the folder representing the base environment and any
other folders representing the other different environments.
In the last example, the file vim.sls will be applied to all minions. The file dev_sls.sls will be applied
to minions having an id containing dev, and so on.
We could also use the compound matchers in a top.sls file:

1 base:
2 'mynode* and G@os:CentOS or E@myminion.*':
3 - match: compound
4 - vim

Using Master’s Pillars


Pillars are tree-structured data defined on the master and returned to the minions. They allow, for
example, storing confidential data securely to be sent only to the targeted minions.
Grains are an innate property of a minion, while the data in pillars is generated on the master and sent
to the minions it targets: this is the fundamental difference, since pillars are defined on the master (in
SLS format).
Examples:

• Authentication login/password (database, CMS ..etc)


• API secret key
• A private key

Like grains, viewing the list of available pillars could be done using:

1 salt '*' pillar.items

pillars are based on environments (declared in the master configuration file):

1 /etc/salt/master

Consider that we are going to keep the default base environment:



1 #pillar_roots:
2 #base:
3 # - /srv/pillar

We should create the folder pillars:

1 mkdir /srv/pillar

then a top.sls file :

1 touch /srv/pillar/top.sls

We are going to create a file where we will store some sensitive data (a MySQL password as an example):

1 touch /srv/pillar/passwords.sls

In the case where the password will be used by all of the available minions, the top.sls file will contain:

1 base:
2 '*':
3 - passwords

Notice the use of ‘*’ to tell Salt that the password will be used by all minions. Notice also
that we are not obliged to write passwords.sls: Salt will recognize that passwords refers to the
passwords.sls file.
For now, let’s add the MySQL password to the passwords.sls file:

1 mysql_password: aZ?e14E373F7123$aBP

After configuring and storing data, it is recommended to refresh the pillars using:

1 salt '*' saltutil.refresh_pillar

MySql password will be shown on your screen if you enter:



1 salt '*' pillar.items

Generally, pillars are intended to be used in SLS files (Jinja) and can be referenced in different ways.
Simple structures:

1 {{ pillar['key'] }}

Nested structures:

1 {{ pillar['key_a']['key_b'] }}

Conditionally, if “key_a:key_b” is not configured (no value returned), the default “value” will be used:

1 {{ salt['pillar.get']('key_a:key_b', 'value') }}

To iterate over the different elements of a structure:

1 {{ pillar.get('key', {}).items() }}

Note that pillars data could be configured using CLI:

1 salt '*' state.highstate pillar='{"key": "value"}'

For a particular file:

1 salt '*' state.sls my_sls_file pillar='{"key": "value"}'

The configured pillars may also be used to target minions:

1 salt -I 'key:value' test.ping
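To tie this together, here is a minimal sketch of how the mysql_password pillar defined earlier could be consumed from a state. The destination path, the mode and the template location (salt://myapp/db.conf.jinja, containing for example a line such as mysql_password={{ pillar['mysql_password'] }}) are purely illustrative:

/etc/myapp/db.conf:
  file.managed:
    - source: salt://myapp/db.conf.jinja
    - template: jinja
    - mode: 600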

Remote Execution
Remote execution is one of the most important SaltStack features. Using the CLI, you can use the
different matchers to filter minions:

1 salt '*' test.ping


2 salt -G 'os:Debian' test.ping
3 salt -E 'integ_[a-z]' test.ping
4 salt -L 'prod,web,indus,integ' test.ping
5 salt -C 'G@os:Redhat and prod* or E@integ_[a-z]' test.ping

Basically, the execution of a remote command is done like this:

1 salt '*' cmd.run '<command to run>'

Example:

1 salt '*' cmd.run 'uname -a'

Remote execution can also be used in SLS files. The following example launches two remote commands.
The second command (echo) only starts when the first has been executed.

1 run_script:
2 cmd.run:
3 - name: /path/to/myscript
4 - cwd: /etc
5 - stateful: True
6
7 run_second_script:
8 cmd.wait:
9 - name: echo 'Finished running script'
10 - cwd: /etc
11 - watch:
12 - cmd: run_script

Event-Based Execution With Reactors


SaltStack has used a system of “reactors” since version 0.11.0; the goal is to run commands triggered
by an event. This system allows you to link an SLS file to an “event tag” on the master.
The event system is based on ZeroMQ, which publishes events on a bus. The bus is used to send
information about the execution of operations.
Each event has a “tag” and a data structure (a dictionary containing event information). The definition
of “reactors” must appear in the master configuration file.
Example :

1 reactor:
2 - 'salt/minion/*/start':
3 - /srv/reactor/sync_grains.sls
4 - /srv/reactor/mysql.sls

In the last example, when the tag

1 salt/minion/*/start

is activated, the “reactor” system will trigger the activation of the different SLS files:

1 - /srv/reactor/sync_grains.sls
2 - /srv/reactor/mysql.sls

The role of the first file (sync_grains.sls) is to synchronize the grains at salt-minion start-up. This is
its content:

1 sync_grains:
2 local.saltutil.sync_grains:
3 - tgt: {{ data['id'] }}

The role of the second file (mysql.sls) is to apply state.highstate to the minion mysql_minion.


And this is its content:

1 {% if data['id'] == 'mysql_minion' %}
2 highstate_run:
3 local.state.highstate:
4 - tgt: mysql_minion
5 {% endif %}

The system of reactors enables management of multiple types of events associated with:

• Authentication
• Minion service upstart
• Key management
• Jobs management
• Cloud Management

The list below shows some event tags:



• when a minion performs an authentication check with the master:
– salt/auth
• when a minion connects to the Salt master:
– salt/minion/*/start
• when accepting and rejecting minion keys on the Salt master:
– salt/key
• when a new job is sent out to minions:
– salt/job/*/new
• when a minion returns data for a job:
– salt/job/*/ret/*
• when setting a schedule:
– salt/presence/present
• when the presence system detects new minions connecting or disconnecting:
– salt/presence/change
• when salt-cloud starts the VM creation process:
– salt/cloud/*/creating
• when the VM is available and salt-cloud begins deploying Salt to the new VM:
– salt/cloud/*/deploying
• when salt-cloud sends the request to create a new VM:
– salt/cloud/*/requesting

You can find the rest of the list on the official documentation of SaltStack27

From States To Formulas


When you write a Salt state in order to reuse it later, you have actually written a ‘formula’. A Salt formula
is a pre-written Salt state that can be used mainly to run common tasks like installing/configuring
packages, starting services or configuring users/groups/permissions, etc.
You can find the official Salt ‘formulas’ in a git repository28 (saltstack-formulas).
Salt ‘formulas’ make life easier, since the repository is maintained by a community of developers
maintaining formulas for many different platforms. To install Tomcat/Apache or MongoDB, just
reference the formula in the top.sls file and configure the targeted minions and environment.
27
http://docs.saltstack.com/en/latest/topics/event/master_events.html
28
https://github.com/saltstack-formulas

A central collection of formula repositories for SaltStack.
Here is a list of some useful Salt formulas that you can find in the official git repository:

• mysql-formula (Install the MySQL client and/or server)


• salt-formula (Yes, Salt can Salt itself!)
• maven-formula (A minimalistic way to install just the maven distribution without any
dependencies)
• apache-formula (Set up and configure the Apache HTTP server)
• docker-formula (Install and set up Docker)
• sensu-formula (SaltStack formula to manage Sensu)
• redis-formula (Redis state)
• epel-formula (Install the EPEL RPM and GPG key)
• openvpn-formula (Setup and configure openvpn server and client.)
• iptables-formula
• sysstat-formula
• sysctl-formula
• hadoop-formula
• mongodb-formula
• circus-formula
• squid-formula
• jenkins-formula
• snmp-formula
• pam-ldap-formula
• ..etc
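As a sketch of how a formula is typically wired in (the clone path is only an assumption; adapt it to wherever you actually clone the repository), you could add the cloned apache-formula to your file_roots and reference its state in top.sls:

# /etc/salt/master -- assuming the formula was cloned to /srv/formulas/apache-formula
file_roots:
  base:
    - /srv/salt
    - /srv/formulas/apache-formula

# /srv/salt/top.sls -- apply the formula's apache state to all minions
base:
  '*':
    - apache

Remember to restart the salt-master after changing file_roots.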

SaltStack And Vagrant


Between development and live (production) environments, some problems may happen, and most of
them are due to the differences between the two environments. “Well, it works on my machine” is
a common sentence that both Dev and Ops people may use as an excuse. Even if it can be true
in some cases, a role of a DevOps engineer is to take this into consideration and provide the necessary
techniques and technologies to solve this kind of problem.
Vagrant seems to be a good solution to implement in the pipeline - the bridge connecting Dev and
Ops, since it allows developers to have an isolated development environment that could be similar
to the production one.
In this part of the book, I am assuming that you have some basic knowledge about Vagrant.
If not, the following introduction will help you to be more familiar with this software but not an
expert :-)
A simple definition to start: Vagrant is a software that let you create and configure virtual
environments for development purposes (may be other purposes also, this depends on the context
of your product and enterprise).
Vagrant is an open-source (MIT) tool developed by Mitchell Hashimoto and John Bender. It manages
virtual machines hosted in Oracle VirtualBox. It has its own command-line tool and it is written in Ruby (a
dynamically typed object-oriented scripting language).

Why Vagrant
According to the official website29 , Vagrant is a tool for :

• Developers

If you are a developer, Vagrant will isolate dependencies and their configuration within
a single disposable, consistent environment, without sacrificing any of the tools you are
used to working with (editors, browsers, debuggers, etc.). Once you or someone else
creates a single Vagrantfile, you just need to Vagrant up and everything is installed
and configured for you to work. Other members of your team create their development
environments from the same configuration, so whether you are working on Linux, Mac
OS X, or Windows, all your team members are running code in the same environment,
against the same dependencies, all configured the same way. Say goodbye to “works on
my machine” bugs.

• Operations engineers
29
http://vagrantup.com

If you are an operations engineer, Vagrant gives you a disposable environment and
consistent workflow for developing and testing infrastructure management scripts.
You can quickly test things like shell scripts, Chef cookbooks, Puppet modules, and
more using local virtualization such as VirtualBox or VMware. Then, with the same
configuration, you can test these scripts on remote clouds such as AWS or RackSpace
with the same workflow. Ditch your custom scripts to recycle EC2 instances, stop
juggling SSH prompts to various machines, and start using Vagrant to bring sanity
to your life.

• Designers

If you are a designer, Vagrant will automatically set everything up that is required
for that web app in order for you to focus on doing what you do best: design. Once
a developer configures Vagrant, you don’t need to worry about how to get that
application running ever again. No more bothering other developers to help you fix
your environment so you can test designs. Just check out the code, vagrant up, and
start designing.

Using SaltStack As Provisioner


The salt provisioner allows you to provision the guest VM using Salt states. The current state
of a Vagrant VM could be described using SaltStack (installed packages, running services and
configurations).
In general, there is no need for a master to provision the VM; a masterless Salt minion is enough.
The Vagrant Salt provisioner allows you to provision the guest machine using Salt states: installing
and configuring packages, activating and running services and configuring development environ-
ments.
Using Vagrant for development environments could help both developers and ops keep development
and production environments similar. This can be done using Salt, since the same states/configurations
used to create production environments can be used to create development environments.
Using both Salt and Vagrant helps automation teams set up a continuous integration/delivery
pipeline. When using Vagrant, you will probably start by building a basic machine, and you will
surely need to manage how your virtual Vagrant machines are built, since configurations could change
from one team to another, from one application to another or even from one software release to another;
that's why using a configuration management tool like SaltStack is valuable.
In the next chapter, you are going to see some practical examples of using SaltStack as a provisioning
tool.

SaltStack And Docker


Containers are relatively new technologies. You have surely heard about container technologies like
Docker and CoreOS, and maybe also other related technologies like Kubernetes and Docker Compose
(formerly Fig).
Even if some say that those technologies are not really production-ready yet, given some issues
with file systems, speed, garbage collection and security, Docker is changing the Cloud Computing
industry: 5x growth in its adoption in the last 12 months (from August 2014 to August 2015) according
to DataDog.
Anyway, Docker containers have resource isolation and allocation benefits similar to virtual
machines, but with a different architectural approach. The great benefit of using Docker is that it allows
you to write code once and run it anywhere. Apart from being lightweight, Docker is portable.
If you have already worked with SaltStack to manage Docker, you have probably used the
dockerio module, which is a built-in state module (salt.states.dockerio) just like salt.states.disk or
salt.states.apache.
This module requires docker-py (version >= 0.6.0), you can simply install the latest version by typing:

1 pip install docker-py

Once docker-py is installed, you can start working with dockerio module.

The problem with dockerio is that it is deprecated: since version 2015.8.0, development
is only done on dockerng, so I highly recommend learning dockerng rather than dockerio.
The dockerng module will be supported in the future and it allows you to create, build, manipulate
and orchestrate Docker images.
According to the official documentation of Salt:

We have received a lot of feedback on our Docker support. In the process of implement-
ing recommended improvements, it became obvious that major changes needed to be
made to the
functions and return data. In the end, a complete rewrite was done.

dockerng stands for Docker Next Generation.


Anyway you will still need docker-py:

1 pip install docker-py>=1.4.0

Or just use Salt to install docker-py on your minion:

1 salt mynode pip.install docker-py>=1.4.0
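
As a quick taste of dockerng, here is a minimal state sketch; the state IDs, image and port mapping are illustrative and assume the Docker daemon is already running on the minion:

nginx_image:
  dockerng.image_present:
    - name: nginx:latest

nginx_container:
  dockerng.running:
    - name: my-nginx
    - image: nginx:latest
    - port_bindings:
      - 8080:80
    - require:
      - dockerng: nginx_image

If this lived in a file such as /srv/salt/nginx.sls (the path is just an example), you could apply it with salt 'mynode' state.sls nginx.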

Salt Cloud
Salt Cloud is built-in to Salt and is configured on and executed from your Salt master.
Salt can provision systems on cloud hosts/hypervisors:

• Public clouds: Google Cloud, Amazon Web Services, Linode
• Private clouds: VMware Private Cloud, OpenStack, CloudStack
After provisioning cloud hosts, they are managed using Salt like every other host.
To manage Cloud hosts SaltStack uses Providers, Profiles and Maps.
When using salt-cloud, normally you should follow 4 steps:

• Look at the global Salt Cloud configuration


• Create your Salt Cloud Providers configuration files
• Create virtual machines Salt Cloud Profiles
• If needed, create your Salt Cloud Map files and start your virtual machines

Salt Cloud Installation


If you are using SaltStack 2014.1.0 or a later version, Salt Cloud is a built-in module; otherwise
you should install salt-cloud with your package manager.
Example:

1 sudo apt-get install salt-cloud

I am assuming that you have already added the Salt repository:

1 sudo add-apt-repository ppa:saltstack/salt


2 sudo apt-get update

If you do not have python-libcloud installed, you should also install it:

1 sudo apt-get install python-libcloud

If you are installing salt-cloud for development, make sure to install apache-libcloud:

1 sudo pip install apache-libcloud

Salt Cloud Configurations


The Salt Cloud configuration resides in:

1 /etc/salt/cloud

and of course the main Salt configuration files:

1 /etc/salt/master
2 /etc/salt/minion

We are going to detail the usage of Providers, Profiles and Maps in the next part of this section, but
the concepts are simple:

• Providers are for managing Cloud Providers


• Profiles are for configuring Cloud hosts
• Maps are used for mapping virtual machines to Profiles

The salt-cloud configuration file is similar to the following one:

1 # This file should normally be installed at: /etc/salt/cloud


2
3
4 ##########################################
5 ##### VM Defaults #####
6 ##########################################
7
8 # Set the size of minion keys to generate, defaults to 2048
9 #
10 #keysize: 2048
11
12
13 # Set the default os being deployed. This sets which deployment script to
14 # apply. This argument is optional.

15 #
16 #script: bootstrap-salt
17
18
19 ##########################################
20 ##### Logging Settings #####
21 ##########################################
22
23 # The location of the master log file
24 #
25 #log_file: /var/log/salt/cloud
26
27
28 # The level of messages to send to the console.
29 # One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'.
30 #
31 # Default: 'info'
32 #
33 #log_level: info
34
35
36 # The level of messages to send to the log file.
37 # One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'.
38 #
39 # Default: 'info'
40 #
41 #log_level_logfile: info
42
43
44 # The date and time format used in log messages. Allowed date/time formating
45 # can be seen here:
46 #
47 # http://docs.python.org/library/time.html#time.strftime
48 #
49 #log_datefmt: '%Y-%m-%d %H:%M:%S'
50
51
52 # The format of the console logging messages. Allowed formatting options can
53 # be seen here:
54 #
55 # http://docs.python.org/library/logging.html#logrecord-attributes
56 #

57 #log_fmt_console: '[%(levelname)-8s] %(message)s'


58 #log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(me\
59 ssage)s'
60
61
62 # Logger levels can be used to tweak specific loggers logging levels.
63 # For example, if you want to have the salt library at the 'warning' level,
64 # but you still wish to have 'salt.modules' at the 'debug' level:
65 #
66 # log_granular_levels:
67 # 'salt': 'warning',
68 # 'salt.modules': 'debug'
69 # 'saltcloud': 'info'
70 #
71 #log_granular_levels: {}
72
73
74 ##########################################
75 ##### Misc Defaults #####
76 ##########################################
77
78 # Whether or not to remove the accompanying SSH key from the known_hosts file
79 # when an instance is destroyed.
80 #
81 # Default: 'False'
82 #
83 #delete_sshkeys: False

The file is well commented, so there is no need to go through every option; let's just look at some
important ones.
Salt Bootstrap

Salt Cloud uses

1 salt-bootstrap.sh

and this script can be found in the official GitHub repository:

1 https://github.com/saltstack/salt-bootstrap

The bootstrap script installs dependencies and salt-minion on the new Cloud virtual machine.
You can find this file here:

1 /usr/lib/python2.7/dist-packages/salt/cloud/deploy/bootstrap-salt.sh

And if you have been using Vagrant and Docker with SaltStack as a provisioner, you may find
bootstrap-salt.sh in other locations too:

1 locate bootstrap-salt.sh
2
3 /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/plugins/provisioners/salt/bootstra\
4 p-salt.sh
5 /usr/lib/python2.7/dist-packages/salt/cloud/deploy/bootstrap-salt.sh
6 /var/lib/docker/aufs/diff/62f00290f429277acf1c5695a2e0321e8e96dfb18b50fc4451b111\
7 26f632045d/usr/lib/python2.7/dist-packages/salt/cloud/deploy/bootstrap-salt.sh

Salt Cloud Logging

I am explaining this part because I recommend activating logging the first time you use Salt Cloud.
It will help you keep traces and learn from them.
The master log file is located in

1 /var/log/salt/cloud

In the Salt Cloud configuration file, you can set the logging level; you have several choices:
garbage, trace, debug, info, warning, error or critical.
I also recommend activating debug mode for the console output; again, you have to
choose the degree of verbosity among the same levels.
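
In practice, this means uncommenting and adjusting these two keys in /etc/salt/cloud (debug is shown here only as an example value):

log_level: debug
log_level_logfile: debug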
If you are familiar with Python and you want to change the format of the date when logging, you
must change this line:

1 log_datefmt: '%Y-%m-%d %H:%M:%S'

Same thing for the log format:

1 log_fmt_console: '[%(levelname)-8s] %(message)s'


2 log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(mes\
3 sage)s'

You can set up logging granularities like in the following example:



log_granular_levels:
  'salt': 'debug'
  'salt.modules': 'error'
  'saltcloud': 'trace'

Salt Cloud Providers

Providers are configuration files where the credentials for each Cloud provider you use are stored (for example, an EC2 configuration).
If you want to use Digital Ocean, you should create a file in:

/etc/salt/cloud.providers.d/digital_ocean.conf

Then you should edit it in order to have a similar configuration to the next one:

do:
  provider: digital_ocean
  minion:
    master: your_server_ip

  # DigitalOcean Access Token
  personal_access_token: your_access_token

  # The name of your SSH key
  ssh_key_name: salt-master-root-key

  # The private key for your Digital Ocean account
  ssh_key_file: /root/.ssh/id_rsa

Note that the Digital Ocean example provided above is different from, say, an AWS
configuration file. Every provider has its own configuration schema.
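For instance, a hypothetical EC2 provider file (for example /etc/salt/cloud.providers.d/ec2.conf) could look roughly like the following sketch; the credentials, key pair name and region are placeholders you would replace with your own:

my-ec2:
  provider: ec2
  id: 'YOUR_AWS_ACCESS_KEY_ID'
  key: 'YOUR_AWS_SECRET_ACCESS_KEY'
  keyname: my-keypair
  private_key: /etc/salt/my-keypair.pem
  location: eu-west-1
  minion:
    master: your_server_ip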

Salt Cloud Profiles


Profiles are virtual machine configurations: hardware size, location, OS image, and so on.
Salt Cloud profiles are configured in the file:

1 /etc/salt/cloud.profiles

You can also create your configuration files in:

1 /etc/salt/cloud.profiles.d/*.conf

Here is a simple example for creating a virtual machine on RackSpace:

1 ubuntu_rackspace:
2
3 provider: rackspace
4 image: Ubuntu 14.04
5 size: 1024 server
6 script: salt-bootstrap
7 minion:
8 master: salt.example.com
9 append_domain: webs.example.com
10 grains:
11 role: webserver

And this is an example for working with Azure Cloud:

1 azure-ubuntu:
2 provider: azure-west
3 image: 'Ubuntu-14_04-LTS-amd64-server-20140724-en-us-30GB'
4 ssh_username: user_name
5 ssh_password: user_password
6 ssh_pubkey: /root/azure.pub
7 media_link: 'https://example.blob.core.windows.net/vhds'
8 slot: production
9 size: Small
10 tty: True
11 sudo: True
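
Once a profile is defined, you can create a machine from it directly with salt-cloud and the -p option; the VM name web-01 below is just an example:

salt-cloud -p ubuntu_rackspace web-01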

Salt Cloud Maps


Map files have a simple format and are very useful: in a map file, you specify a profile and a list of
virtual machines to create from that profile.

1 ubuntu_small:
2 - web
3 - rethinkdb
4 ubuntu_medium:
5 - nginx
6 - mongodb
7 ubuntu_large:
8 - mysql
9 - mariadb

A map file can also include grains configuration options:

1 ubuntu_micro:
2 - web1:
3 grains:
4 grain_1: value_1
5 grain_2: value_2
6 - web2:
7 grains:
8 grain_1: value_1
9 grain_2: value_2

A map file can also include minion configuration options:

1 ubuntu_micro:
2 - web1:
3 minion:
4 log_level: error
5 - web2:
6 minion:
7 log_level: debug

and of course a map file can contain both minion and grains options:

1 ubuntu_micro:
2 - web1:
3 minion:
4 log_level: error
5 grains:
6 grain_1: value_1
7 grain_2: value_2
8 - web2:
9 minion:
10 log_level: debug
11 grains:
12 grain_1: value_1
13 grain_2: value_2

Map files can be saved in:

1 /etc/salt/cloud.maps.d/*.conf

You can create it if it does not exist:

1 mkdir -p /etc/salt/cloud.maps.d

Map files can be called to roll out virtual machines using salt-cloud command with the -m option:

1 salt-cloud -m /etc/salt/cloud.maps.d/my_map.conf

You can use -P to parallelize the creation of virtual machines:

1 salt-cloud -m /etc/salt/cloud.maps.d/my_map.conf -P

If you want to destroy all virtual machines that are not mentioned in the map file, use the -H option
(--hard):

1 salt-cloud -m /etc/salt/cloud.maps.d/my_map.conf -P -H

If you really want to use this dangerous option, you should first enable it in the cloud configuration file:

1 enable_hard_maps: True

Creating New Salt Master Cloud Hosts


Creating a salt-master on a Rackspace or AWS virtual machine could be done easily using Salt Cloud
maps. After setting up Cloud Providers and Profiles, you can use Cloud Maps to start new Salt
masters.

ubuntu_micro:
  - vm_1:
      make_master: True
  - vm_2:
      make_master: True
  - vm_3:
      make_master: True
  - vm_4:
      make_master: True

In this example, you can notice that we are creating 4 Cloud virtual machines.

When using a Map file, all bootstrapped minions will answer to the newly created salt-
master.

To make any of the bootstrapped minions answer to another salt-master (not the newly
created one), you should add this condition to the configuration file.
Here is a simple example where the created vm_3 minion responds to a different salt-master and
not to the vm_1 salt-master:

1 centos_small:
2 - vm_1:
3 make_master: True
4 - vm_2
5 - vm_3:
6 minion:
7 master: master_vm_3.example.com
8 local_master: True

Troubleshooting Salt Cloud


Salt Cloud is a built-in module in SaltStack, so basic troubleshooting is not different from
troubleshooting Salt. The first thing that may come to mind is activating debug output:

1 salt-cloud -p ubuntu_micro minion_server -l debug

Salt Cloud uploads some salt-minion specific files to your cloud VM. For debugging and
troubleshooting, it could be a good idea to keep them, because by default they are deleted:

1 salt-cloud -p ubuntu_micro minion_server --keep-tmp

You can find those files in the directory:

1 /tmp/.saltcloud/

Those temporary files are:

• An executable deploy.sh script
• The .pem and .pub keys that should be copied to /etc/salt/pki/minion/
• The minion file that should be copied to /etc/salt/
• A grains file (if you have used grains) that should be copied to /etc/salt/

Changing the log level to be more verbose is also a good idea if you want to troubleshoot Salt Cloud;
you can update the settings in the Salt Cloud configuration file:

1 # The level of messages to send to the console.


2 # One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'.
3 #
4 # Default: 'info'
5 #
6 #log_level: info
7
8
9 # The level of messages to send to the log file.
10 # One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'.
11 #
12 # Default: 'info'
13 #
14 #log_level_logfile: info

Some problems may come from version compatibility as said in the official documentation:

Version Compatibility One of the most common issues that Salt Cloud users run into is
import errors. These are often caused by version compatibility issues with Salt.

Salt 0.16.x works with Salt Cloud 0.8.9 or greater.

Salt 0.17.x requires Salt Cloud 0.8.11.

Releases after 0.17.x (0.18 or greater) should not encounter issues as Salt Cloud has been
merged into Salt itself.

You could have problems due to the fact that salt-cloud is not up to date; in this case you can type:

1 salt-cloud -u

A successful update will show you a similar output to the following one:

1 Success:
2 ----------
3 Files updated:
4 - /etc/salt/cloud.deploy.d/bootstrap-salt.sh
5 - /usr/lib/python2.7/dist-packages/salt/cloud/deploy/bootstrap-salt.sh
Real World Examples

Introduction
One of the strengths of SaltStack is its richness and the large number of functionalities it offers. The
previous chapters have introduced the main concepts of Salt; however, the purpose of this book is
not to detail everything about SaltStack but to bring you into the world of configuration management,
remote execution and infrastructure automation using the different Salt modules.
Knowledge is not enough: practice is important, and the next practical examples will help you.

Vagrant Quick-Start

Note that we are using Ubuntu 14.04 for the next tutorial.
To install and use Vagrant, the first thing you need to do is install VirtualBox:

1 sudo apt-get install virtualbox

Install Vagrant:

1 sudo apt-get install vagrant

And the VirtualBox host kernel modules:

1 sudo apt-get install virtualbox-dkms

Create a work directory:



1 mkdir vagrantbox
2 cd vagrantbox

Create your first Vagrant machine:

1 vagrant box add precise32 http://files.vagrantup.com/precise32.box


2 vagrant init

Then edit the Vagrantfile in the same directory

1 config.vm.box = "precise32"

If you want to choose another OS/Distro or architecture, please visit this repository of Vagrant boxes.
Example:

1 config.vm.box = "precise64"

Launch the Vagrant virtual machine:

1 vagrant up

And start using it:

1 vagrant ssh

Creating A SaltStack Work Environment Using Vagrant


These instructions will help you set up a SaltStack test or development environment using Vagrant.
First, download and install Vagrant, go to the download page30 and choose your distribution.
We are using a Debian/Ubuntu distribution for the next instructions:

1 wget https://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1_x86_64.deb
2 dpkg -i vagrant_1.8.1_x86_64.deb

or use the second method:

30
https://www.vagrantup.com/downloads.html

1 sudo apt-get install virtualbox


2 sudo apt-get install vagrant
3 apt-get install virtualbox-dkms

Now get a local copy of the salt-vagrant-demo GitHub repository:

1 git clone https://github.com/UtahDave/salt-vagrant-demo.git


2 cd salt-vagrant-demo
3 vagrant up

This will download an Ubuntu VirtualBox image and create three virtual machines for
you. One will be a salt-master named master and two will be salt-minion instances
named minion1 and minion2. Each salt-minion will point to the master and their
respective keys will already be accepted. Because the keys are pre-generated and reside
in the repository, please be sure to regenerate new keys if you use this for production
purposes.

Now let’s login into the master using ssh:

1 vagrant ssh master

Now if you type:

1 su
2 salt-key --list-all

You will see the keys of the two minions.
Let the master accept the two minions:

1 salt-key --accept-all

After accepting the two keys, you can verify by listing all minions:

1 Accepted Keys:
2 minion1
3 minion2
4 Denied Keys:
5 Unaccepted Keys:
6 Rejected Keys:

To verify that everything was ok:

salt '*' test.ping

And you should see something like the following output:

minion2:
    True
minion1:
    True

Installation And Configuration Of Apache Web Server


One of the most important features that Vagrant offers is its capability to interface with many
automation and configuration management tools such as Chef, Puppet and of course SaltStack. You
can of course use shell/bash/python scripts to provision your development machine.
In this part of “SaltStack For Devops”, we are more interested in using SaltStack standards as a
provisioning tool. The combination of Salt and Vagrant allows you to create a powerful automated
development environment.
You should have Vagrant installed before starting; then you need to install the Salt plugin:

1 vagrant plugin install vagrant-salt

In this example we will use Ubuntu Precise 64 as a guest virtual machine:

1 mkdir salt-vagrant_example
2 cd salt-vagrant_example
3 vagrant box add precise64 http://files.vagrantup.com/precise64.box

In this example, we are going to set up an Apache web server.


Create the following directories and files:

1 #The minion configuration file. Please copy the default minion configuration fil\
2 e from #/etc/salt/minion to your work directory.
3 salt/minion
4 #The minion grains
5 salt/minion.d/vagrant.conf
6 #The root folder where we will create the "top.sls" file and the other state fil\
7 es.
8 salt/roots
9 salt/roots/top.sls
10 salt/roots/apache2/apache2.sls

Your workspace structure should look like this:

1 salt-vagrant_example
2 ├── salt
3 │ ├── minion
4 │ ├── minion.d
5 │ │ └── vagrant.conf
6 │ └── roots
7 │ ├── apache2
8 │ │ ├── apache2.conf
9 │ │ └── apache2.sls
10 │ ├── top.sls
11 └── Vagrantfile

Let’s take a look at Vagrant configuration file.


You will need to :

• Mount your Salt file root (synchronize salt/roots (in the host) to /srv/salt/ (in the guest). ).
• Tell Vagrant that we are using SaltStack as a provisioning automation tool.

When we say “host” and “guest”, we are referring respectively to your machine and to
Vagrant machine.
The Vagrant configuration file (Vagrantfile) will look like this:

1 Vagrant.configure("2") do |config|
2 ## Choose your base box
3 config.vm.box = "precise64"
4
5 ## For masterless, mount your salt file root
6 config.vm.synced_folder "salt/roots/", "/srv/salt/"
7 config.vm.synced_folder "salt/minion.d/", "/etc/salt/minion.d/"
8
9 ## Use all the defaults:
10 config.vm.provision :salt do |salt|
11
12 salt.minion_config = "salt/minion"
13 salt.run_highstate = true
14 salt.grains_config = "salt/minion.d/vagrant.conf"
15
16 end
17 end

Let’s get back to Salt.


To create the minion configuration file, you can copy the default one like this:

1 cp /etc/salt/minion ./salt/minion

In this example, you will be using Salt in masterless mode; that's why you need to update your
minion configuration:

1 file_client: local

The Apache installation SLS file should be called by your local top.sls file.

1 #Local top.sls file under salt/roots/top.sls


2 base:
3 '*':
4 - apache2/apache2

Now, let’s create the Apache SLS file (salt/roots/apache2/apache2.sls):



1 # Install Apache
2 apache2:
3 pkg.installed:
4 - name: apache2
5 - file: apache2
6 # Start the Apache service
7 service.running:
8 - enable: True
9 - require:
10 - pkg: apache2
11 # Copy the configuration template to */etc/apache2/apache2.conf* and render grai\
12 ns values.
13 file.managed:
14 - name: /etc/apache2/apache2.conf
15 - source: salt://apache2/apache2.conf
16 - template: jinja
17 - require:
18 - pkg: apache2

And the configuration template (salt/roots/apache2/apache2.conf )

1 LockFile ${APACHE_LOCK_DIR}/accept.lock
2 PidFile ${APACHE_PID_FILE}
3 Timeout {{ grains['apache_timeout'] }}
4 KeepAlive {{ grains['keep_alive'] }}
5 MaxKeepAliveRequests {{ grains['MaxKeepAliveRequests'] }}
6 KeepAliveTimeout {{ grains['KeepAliveTimeout'] }}
7
8 <IfModule mpm_prefork_module>
9 StartServers 5
10 MinSpareServers 5
11 MaxSpareServers 10
12 MaxClients 150
13 MaxRequestsPerChild 0
14 </IfModule>
15
16 <IfModule mpm_worker_module>
17 StartServers 2
18 MinSpareThreads 25
19 MaxSpareThreads 75
20 ThreadLimit 64
21 ThreadsPerChild 25

22 MaxClients 150
23 MaxRequestsPerChild 0
24 </IfModule>
25
26 <IfModule mpm_event_module>
27 StartServers 2
28 MinSpareThreads 25
29 MaxSpareThreads 75
30 ThreadLimit 64
31 ThreadsPerChild 25
32 MaxClients 150
33 MaxRequestsPerChild 0
34 </IfModule>
35
36 User www-data
37 Group www-data
38 AccessFileName .htaccess
39
40 <Files ~ "^\.ht">
41 Order allow,deny
42 Deny from all
43 Satisfy all
44 </Files>
45
46 DefaultType None
47 HostnameLookups Off
48 ErrorLog ${APACHE_LOG_DIR}/error.log
49 LogLevel warn
50 Include mods-enabled/*.load
51 Include mods-enabled/*.conf
52 Include ports.conf
53 LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" v\
54 host_combined
55 LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
56 LogFormat "%h %l %u %t \"%r\" %>s %O" common
57 LogFormat "%{Referer}i -> %U" referer
58 LogFormat "%{User-agent}i" agent
59 Include conf.d/
60 Include sites-enabled/

Note in the last configuration file, that we are using grains:



1 Timeout {{ grains['apache_timeout'] }}
2 KeepAlive {{ grains['keep_alive'] }}
3 MaxKeepAliveRequests {{ grains['MaxKeepAliveRequests'] }}
4 KeepAliveTimeout {{ grains['KeepAliveTimeout'] }}

Those grains will be rendered using the following file:

1 salt/minion.d/vagrant.conf

Let’s see its content :

1 grains:
2
3 apache_timeout: 600
4 keep_alive: On
5 MaxKeepAliveRequests: 150
6 KeepAliveTimeout: 10

We are done with configurations.


Now bring your Vagrant machine up:

1 vagrant up

At this step, in the virtual machine, SaltStack (precisely salt-minion) will be installed and
configured (minion configuration file) and state files will be copied (into /srv/salt/ ).
Log in:

1 vagrant ssh

Now, you need to run SaltStack in your virtual machine in order to provision it :

1 vagrant@precise64$ sudo salt-call state.highstate

After executing the state.highstate call, Apache will be installed and configured in your Vagrant VM.

Creating Your Own Private Storage Cloud (SaltStack + Vagrant + OwnCloud)

Before moving to another example, please note that when I started writing this book, SaltStack was
not supported natively by Vagrant: the installation of the Salty Vagrant plugin was mandatory. Now there is no need
for this, since the Salt provisioner is provided directly by Vagrant, and in the following tutorial we are not using Salty
Vagrant anymore.
If you have a recent version of Vagrant, you will not need the Salty Vagrant plugin. In
the next example, we are going to use SaltStack to automate the provisioning of a Vagrant machine.
The main goal is to create a development machine for OwnCloud.
First of all, type:

1 vagrant init

Apart from the Vagrantfile, created automatically by the last command, create the other
necessary folders and files in order to have a similar tree:

1 ├── salt
2 │ ├── formulas
3 │ │ └── owncloud
4 │ ├── minion.yml
5 │ └── roots
6 │ └── top.sls
7 └── Vagrantfile

Go to the [OwnCloud Formula](https://github.com/saltstack-formulas/owncloud-formula/) repository, get the
archive and extract it to

1 salt/formulas/owncloud/

Note that the tree should look like the following after extracting the OwnCloud archive:

1 ├── salt
2 │ ├── formulas
3 │ │ └── owncloud
4 │ │ ├── init.sls
5 │ │ ├── map.jinja
6 │ │ ├── mysql.sls
7 │ │ ├── python-mysqldb.sls
8 │ │ └── repo.sls
9 │ ├── minion.yml
10 │ └── roots
11 │ └── top.sls
12 └── Vagrantfile

Let's configure Vagrant.


Put the following code in the Vagrantfile:

1 # -*- mode: ruby -*-


2 # vi: set ft=ruby :
3 Vagrant.configure(2) do |config|
4 config.vm.box = "ubuntu/trusty64"
5 # The port 80 in the guest system will be accessed using the port 4567
6 config.vm.network :forwarded_port, host: 4567, guest: 80
7 #Folders to synchronize between host and guest machines
8 config.vm.synced_folder "salt/roots/", "/srv/salt/"
9 config.vm.synced_folder "salt/formulas/", "/srv/formulas/"
10
11 #Salt Configuration
12 config.vm.provision :salt do |salt|
13 salt.verbose = true
14 salt.minion_config = "salt/minion.yml"
15 salt.run_highstate = true
16 salt.colorize = true
17 salt.log_level = 'all'
18 #OwnCloud Formula uses the next pillar pillar['owncloud']['owncloudpass'], which\
19 is the password of
20 salt.pillar({
21 "owncloud" => {
22 "owncloudpass" => "SetAPassword"
23 }
24 })
25 end
26

27 config.vm.provider "virtualbox" do |v|


28 #You can give a name to your VM (should be unique)
29 v.name = "owncloud_dev"
30 end
31 end

In the minion configuration (minion.yml), we should set the master to localhost and file_client
to local:

1 master: localhost
2 file_client: local
3
4 file_roots:
5 base:
6 - /srv/salt
7 - /srv/formulas

And finally the top.sls file:

1 base:
2 '*':
3 - owncloud

If everything is OK, you can start your VM:

1 vagrant up

You can find all of the code in a GitHub repository31 that I created for this reason.
Use it by executing these commands:

31
https://github.com/eon01/OwnCloud_Vagrant_SaltStack

1 #Clone the git


2 git clone https://github.com/eon01/owncloud_vagrant_saltstack.git
3
4 #Cd into the folder
5 cd owncloud_vagrant_saltstack
6
7 #'Wake up' your machine :-)
8 vagrant up

Scheduling Monitoring Tasks Using SaltStack


In this part, we are using the scheduling feature: the built-in module salt.modules.schedule
will do the job for us after a simple configuration. Let's test how it works.
We will add a scheduled job that performs a simple task (appending "test" to the end of a file).
Let's add the job; in our case it runs the command:

1 echo "test" >> $HOME/temp

The first line of the next snippet adds a new job; the second part is the output (you should see
something similar to this, with "True" as "result").

1 mynode ~ # salt '*' schedule.add test_job function='cmd.run' job_args="['echo \"\


2 test\" >> /home/user/temp']" seconds=1
3 mynode:
4 ----------
5 comment:
6 Added job: test_job to schedule.
7 result:
8 True

By default, the added job will not be enabled; we should enable it:

1 mynode ~ # salt '*' schedule.enable_job test_job


2 mynode:
3 ----------
4 comment:
5 Enabled Job test_job in schedule.
6 result:
7 True

We can now see our job when we list all of the scheduled tasks:

1 mynode ~ # salt '*' schedule.list


2 mynode:
3 schedule:
4 enabled: false
5 test_job:
6 args:
7 - echo "test" >> /home/user/temp
8 enabled: true
9 function: cmd.run
10 jid_include: true
11 maxrunning: 1
12 name: test_job
13 seconds: 1

Now let’s enable the scheduling feature:

1 mynode ~ # salt '*' schedule.enable


2 mynode:
3 ----------
4 comment:
5 Enabled schedule on minion.
6 result:
7 True

A last check:

1 mynode ~ # tail -f /home/user/temp


2 test
3 test
4 test
5 test
6 test
7 test
8 test
9 test
10 test
11 test
12 test
13 test
14 test
15 test
16 test
17 ^C

Our last check shows that "test" is appended to the end of the temp file every second: success!
Now, monitoring a website is not very different from what we have just done. We are
simply going to change the command and capitalize on the scheduling feature, because monitoring
generally needs repetitive checks.
What we are going to execute as a scheduled job is just an example. Let's look at it:

1 wget "www.eon01.com" --timeout 30 -O - 2>/dev/null | grep "dev & ops" || echo "T\
2 he site is down" | mail -s "Your site is down" [email protected]

The last command will send an email to [email protected] when the string dev & ops is not
found: in other words, when the site is down or not responding within 30 seconds (notice --timeout with
wget).
Let's add this command to the SaltStack schedule:

1 mynode ~ # salt '*' schedule.add sched_monitoring function='cmd.run' job_args="[\


2 'wget \"www.eon01.com\" --timeout 30 -O - 2>/dev/null | grep \"dev & ops\" || ec\
3 ho \"The site is down\" | mail -s \"Your site is down\" [email protected]']" seco\
4 nds=10
5 mynode:
6 ----------
7 comment:
8 Added job: sched_monitoring to schedule.
9 result:
10 True

1 mynode ~ # salt '*' schedule.enable_job sched_monitoring


2 mynode:
3 ----------
4 comment:
5 Enabled Job sched_monitoring in schedule.
6 result:
7 True

1 mynode ~ # salt '*' schedule.disable_job sched_monitoring


2 mynode:
3 ----------
4 comment:
5 Disabled Job sched_monitoring in schedule.
6 result:
7 True

Now, the minion will send an email if the site is down, and the check will be done every 10 seconds.
As I said, this is just an example; you can imagine many others, like monitoring disk usage with df or
memory with free.
I recommend a double check on every scheduled task:

• Check the resources (memory, disk, etc.) used by the minion after configuring the scheduled task.
• Check the log file in /var/log/salt/minion.

The SaltStack schedule module offers other possibilities, like copying one job to another:

1 salt '*' schedule.copy my_source_job my_target_job

Deleting jobs:

1 salt '*' schedule.delete my_job

Disabling a job:

1 salt '*' schedule.disable_job my_job

Moving a job from a minion to another minion(s):

1 salt '*' schedule.move my_job target

This part of SaltStack is well documented, so I recommend looking at the other commands in [the official documentation](https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.schedule.html).
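
One more function worth knowing: jobs added at runtime live in the minion's memory, so if you want them to survive a restart you can persist them with schedule.save:

salt '*' schedule.save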

Automating Wordpress / LAMP Installation And Configuration

We could use formulas to install Apache, MySQL and PHP; it would be the fastest way in this
example, and we have already seen how to use this feature in the OwnCloud example.

Apart from the speed of installation and configuration that formulas provide, they are also
maintained by an active community32.
If you want to put the knowledge acquired through this book into practice again, this example could
help you. In my opinion, and I am sure most of you share it, using formulas is quite
easy; that is why we are going to write our own states instead, to automate the installation of Wordpress on a
LAMP stack.
Note that this example is just a fast prototype; if you want to set it up in a production environment,
you should pay more attention to the details.

Configurations
We use two Debian servers, with Apache2, MySQL 5.5, PHP5, the Wordpress files
and wp-cli33. The latter will help automate the installation and configuration of Wordpress.
Our two Debian servers are in the same subnet: the first one will host the salt-master and the other
will host the salt-minion.
Considering that the IP of the master is:
32
https://github.com/saltstack-formulas/
33
https://github.com/wp-cli/wp-cli

1 10.10.10.41

We should start by changing the minion configuration file in order to tell salt-minion about the IP
of salt-master :

1 master: 10.10.10.41

On the master, we set up the pillar_roots (this is where the pillar data used below will be stored):

pillar_roots:
  base:
    - /srv/pillar

For the grains stored in the minion, we have the possibility to use files under:

1 /etc/salt/minion.d/*.conf

Let’s create one:

1 /etc/salt/minion.d/web.conf

This file and the data stored within will be used later:

1 grains:
2 apache_timeout: 600
3 keep_alive: On
4 MaxKeepAliveRequests: 150
5 KeepAliveTimeout: 10
6 html_dir: /var/www/html
7 web_user: www-data
8 db_name: wp_db
9 blog_url: 10.10.10.41/blog
10 blog_title: DevOps Blog
11 admin_email: [email protected]

The above file should begin with grains.


Explanations:

• apache_timeout, keep_alive, MaxKeepAliveRequests and KeepAliveTimeout are some of Apache2


configurations. You have the possibility to use the other configuration options.
• html_dir : Where website files go. The public Apache2 document root.
• web_user : A specific user for running Apache2 and owner of html_dir
• db_name : The blog database name
• blog_url : The blog url
• blog_title : The blog title
• admin_email : The blog administrator email
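
After restarting salt-minion so that these static grains are loaded, you can check them from the master with grains.item (the grain names below are the ones we just defined):

salt '*' grains.item html_dir db_name blog_url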

In the master we will store 2 passwords:

1 mysql_password: TCaXs54rVaVujKqhpEL6
2 wp_password: zBQHFEKPJuTUuJRmbNhW

Those passwords should be declared in the following file:

1 /srv/pillar/passwords.sls
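
For pillar['mysql_password'] and pillar['wp_password'] to be available on the minion, the pillar data also needs a top file; a minimal sketch, assuming the /srv/pillar root configured above, could be:

# /srv/pillar/top.sls
base:
  '*':
    - passwords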

Installing and Configuring Apache


In this part, we are going to use the following file tree:

1 /srv/salt/apache2/
2 ├── apache2.conf
3 └── apache2.sls

The file ‘apache2.sls’ contains the following information.


We tell Salt to install Apache2:

1 apache2:
2 pkg.installed:
3 - name: apache2

We verify that the service is running:



1 service.running:
2 - name: apache2
3 - enable: True

Once the Apache2 package is installed, we are going to configure it:

1 file.managed:
2 - name: /etc/apache2/apache2.conf
3 - source: salt://apache2/apache2.conf
4 - template: jinja
5 - require:
6 - pkg: apache2

According to the above configuration, the file :

1 /etc/apache2/apache2.conf

will be powered by a template file (written in Jinja2).

1 salt://apache2/apache2.conf

salt:// points to the base directory declared in file_roots. In our case, the previous notation
is equivalent to the following:

1 /srv/salt/apache2/apache2.conf

The latter template will contain the Apache2 server configuration:

1 LockFile ${APACHE_LOCK_DIR}/accept.lock
2 PidFile ${APACHE_PID_FILE}
3 Timeout {{ grains['apache_timeout'] }}
4 KeepAlive {{ grains['keep_alive'] }}
5 MaxKeepAliveRequests {{ grains['MaxKeepAliveRequests'] }}
6 KeepAliveTimeout {{ grains['KeepAliveTimeout'] }}
7
8 <IfModule mpm_prefork_module>
9 StartServers 5
10 MinSpareServers 5

11 MaxSpareServers 10
12 MaxClients 150
13 MaxRequestsPerChild 0
14 </IfModule>
15
16 <IfModule mpm_worker_module>
17 StartServers 2
18 MinSpareThreads 25
19 MaxSpareThreads 75
20 ThreadLimit 64
21 ThreadsPerChild 25
22 MaxClients 150
23 MaxRequestsPerChild 0
24 </IfModule>
25
26 <IfModule mpm_event_module>
27 StartServers 2
28 MinSpareThreads 25
29 MaxSpareThreads 75
30 ThreadLimit 64
31 ThreadsPerChild 25
32 MaxClients 150
33 MaxRequestsPerChild 0
34 </IfModule>
35
36 User www-data
37 Group www-data
38 AccessFileName .htaccess
39
40 <Files ~ "^\.ht">
41 Order allow,deny
42 Deny from all
43 Satisfy all
44 </Files>
45
46 DefaultType None
47 HostnameLookups Off
48 ErrorLog ${APACHE_LOG_DIR}/error.log
49 LogLevel warn
50 Include mods-enabled/*.load
51 Include mods-enabled/*.conf
52 Include ports.conf

53 LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" v\


54 host_combined
55 LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
56 LogFormat "%h %l %u %t \"%r\" %>s %O" common
57 LogFormat "%{Referer}i -> %U" referer
58 LogFormat "%{User-agent}i" agent
59 Include conf.d/
60 Include sites-enabled/

Do not confuse the Apache2 variables like ${APACHE_LOCK_DIR} with the Jinja2
variables like {{ grains['apache_timeout'] }}.
What we finally have is:

1 apache2:
2 pkg.installed:
3 - name: apache2
4 - file: apache2
5 service.running:
6 - enable: True
7 - require:
8 - pkg: apache2
9 file.managed:
10 - name: /etc/apache2/apache2.conf
11 - source: salt://apache2/apache2.conf
12 - template: jinja
13 - require:
14 - pkg: apache2

You may ask what would happen if our server were running RedHat or FreeBSD (or any other OS
family) instead of Debian. This is where the os_family grain is helpful:

{% if grains['os_family'] == "Debian" %}
[Debian specific configuration]
{% endif %}

To create a generic configuration, I recommend looking at the map.jinja file in the saltstack-formulas
GitHub repository:

1 {% set apache = salt['grains.filter_by']({


2 'Debian': {
3 'server': 'apache2',
4 'service': 'apache2',
5 'mod_wsgi': 'libapache2-mod-wsgi',
6 'vhostdir': '/etc/apache2/sites-available',
7 'confdir': '/etc/apache2/conf.d',
8 'confext': '',
9 'default_site': 'default',
10 'default_site_ssl': 'default-ssl',
11 'logdir': '/var/log/apache2',
12 'wwwdir': '/srv',
13 'use_require': False,
14 },
15 'RedHat': {
16 'server': 'httpd',
17 'service': 'httpd',
18 'mod_wsgi': 'mod_wsgi',
19 'vhostdir': '/etc/httpd/conf.d',
20 'confdir': '/etc/httpd/conf.d',
21 'confext': '.conf',
22 'default_site': 'default',
23 'default_site_ssl': 'default-ssl',
24 'logdir': '/var/log/httpd',
25 'wwwdir': '/var/www',
26 'use_require': False,
27 },
28 'FreeBSD': {
29 'server': 'apache22',
30 'service': 'apache22',
31
32 'mod_wsgi': 'ap22-mod_wsgi3',
33
34 'vhostdir': '/usr/local/etc/apache22/Includes',
35 'confdir': '/usr/local/etc/apache22/Includes',
36 'confext': '',
37 'default_site': 'default',
38 'default_site_ssl': 'default-ssl',
39 'logdir': '/var/log/',
40 'wwwdir': '/usr/local/www/apache22/',
41 'use_require': False,
42 },

43 }, merge=salt['grains.filter_by']({
44 '14.04': {
45 'confext': '.conf',
46 'default_site': '000-default.conf',
47 'default_site_ssl': 'default-ssl.conf',
48 'use_require': True,
49 },
50 '14.10': {
51 'confext': '.conf',
52 'default_site': '000-default.conf',
53 'default_site_ssl': 'default-ssl.conf',
54 'use_require': True,
55 },
56 }, grain='lsb_distrib_release', merge=salt['pillar.get']('apache:lookup'))) %}

In this case, the SLS file could use variables that adapt to many contexts :

1 {% from "apache/map.jinja" import apache with context %}


2
3 apache:
4 pkg.installed:
5 - name: {{ apache.server }}
6 service.running:
7 - name: {{ apache.service }}
8 - enable: True

We will be working with Debian for the remaining parts.

In the SLS file:

1 /srv/salt/apache2/apache2.sls

We are going to back up the apache2.conf configuration file:

1 backup_conf:
2 cmd.wait:
3 - name: "cp /etc/apache2/apache2.conf /etc/apache2/apache2.conf.backup"
4 - watch:
5 - pkg: apache2

In the code above, the cmd.wait instruction will execute a command



1 cp /etc/apache2/apache2.conf /etc/apache2/apache2.conf.backup

but only when the watched apache2 package state reports changes (typically when the package gets installed):

1 - watch:
2 - pkg: apache2

Afterwards, we create a user for the web server. This is a recommended step for better security as
you may know. The user will have the name www-data and will be added to the www-data group.

1 user_creation:
2 group:
3 - www-data
4 user:
5 - www-data
6 - home: /var/www/html/
7 - groups:
8 - www-data
9 - require:
10 - group: www-data

Finally, the apache2/apache2.sls file will be similar to this :

1 {% if grains['os_family']=="debian" %}
2 apache2:
3 pkg.installed:
4 - name: apache2
5 - file: apache2
6 service.running:
7 - enable: True
8 - require:
9 - pkg: apache2
10 file.managed:
11 - name: /etc/apache2/apache2.conf
12 - source: salt://apache2/apache2.conf
13 - template: jinja
14 - require:
15 - pkg: apache2
16
17 backup_conf:
18 cmd.wait:

19 - name: "cp /etc/apache2/apache2.conf /etc/apache2/apache2.conf.backup"


20 - watch:
21 - pkg: apache2
22
23 user_creation:
24 group:
25 - www-data
26 user:
27 - www-data
28 - home: /var/www/html
29 - groups:
30 - www-data
31 - require:
32 - group: www-data
33 {% endif %}

Installing and Configuring MySQL


Create this file tree:

1 /srv/salt/mysql/
2 └── mysql.sls

In the SLS file we will tell Salt to install mysql-server, php5-mysql and python-mysqldb.

1 mysql:
2 pkg.installed:
3 - name: mysql-server
4 service.running:
5 - enable: True
6 - require:
7 - pkg: mysql-server
8
9 php5-mysql:
10 pkg.installed:
11 - name: php5-mysql
12
13 python-mysqldb:
14 pkg.installed

Since SaltStack is based on Python, the python-mysqldb package will help us manage the installation,
configuration and all the operations where Python-based software needs to connect to a MySQL
database.

When we want to install a package, we are not obliged to write its name under "- name" if the
state identifier already has the same name as the package; notice the difference between the mysql and
python-mysqldb installations.
However, while not mandatory, it is highly recommended that you set a password for the MySQL
administrative root user. My method is to prepopulate password fields before running the install
process.

1 debconf:
2 cmd.run:
3 - name: sudo debconf-set-selections <<< "mysql-server mysql-server/root_pass\
4 word password {{ pillar['mysql_password'] }}"
5 debconf_bis:
6 cmd.run:
7 - name: sudo debconf-set-selections <<< "mysql-server mysql-server/root_pass\
8 word_again password {{ pillar['mysql_password'] }}"

The package “phpMyAdmin” is automatically installed with the MySQL server on Debian server,
just make accessible from web browser.

1 {% if salt['cmd.retcode']('test -d ' ~ grains['html_dir'] ~ '/phpmyadmin') != 0 %}


2 sl_phpmyadmin:
3 cmd.run:
4 - name: "ln -s /usr/share/phpmyadmin {{ grains['html_dir'] }}/phpmyadmin"
5 - require:
6 - service: mysql
7 {% endif %}

The latter code will check if the folder:

1 /var/www/html/phpmyadmin

exists.
If it does not, it will create a symbolic link from the phpMyAdmin installation directory to the web
root directory. Some prefer to have

1 /var/www/

as the root directory for websites, while others prefer the default:



1 /var/www/html/

that’s why the html_dir is a grain. This means that you can adapt your configuration according to
user’s preferences.

To create a symbolic link, SaltStack gives us the possibility to use the file.symlink function
instead of executing the system command ln -s.

1 /path/to/file:
2 file.symlink:
3 - target: /path/to/target

For now, we are going to create a database for our blog:

1 create_db:
2 cmd.run:
3 - name: mysql -u root -p{{ pillar['mysql_password'] }} -e "CREATE DATABASE I\
4 F NOT EXISTS {{ grains['db_name'] }}"

This is the equivalent of creating a Mysql database using:

1 mysql -u <user> -p<password> -e "<sql query>"
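
Since python-mysqldb is installed, the same database could also be created with Salt's native MySQL state instead of a raw shell command; this is only an alternative sketch, not what the rest of the example uses:

create_db_native:
  mysql_database.present:
    - name: {{ grains['db_name'] }}
    - connection_user: root
    - connection_pass: {{ pillar['mysql_password'] }}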

According to the configuration file:

1 /etc/salt/minion.d/web.conf

the database will have wp_db as a name.


If you do not know mysqltuner.pl, this is the opportunity to take a look at this script. I have been using
MySQL and MariaDB for years and I found mysqltuner.pl really useful for security and performance
audits.
To use it, just download and execute it:

1 wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.p\
2 l -O mt.pl; chmod +x mt.pl; perl mt.pl > mt.log

Because we can execute GNU/Linux commands directly from SaltStack, we are going to include the
last command in our configuration files:

1 mysql_tuner:
2 cmd:
3 - run
4 - cwd: /tmp
5 - name: "wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master\
6 /mysqltuner.pl -O mt.pl; perl mt.pl>mt.log"
7 - require:
8 - service: mysql

We will finally get a mysql.sls file that looks like this:

1 debconf:
2 cmd.run:
3 - name: sudo debconf-set-selections <<< "mysql-server mysql-server/root_pass\
4 word password {{ pillar['mysql_password'] }}"
5
6 debconf_bis:
7 cmd.run:
8 - name: sudo debconf-set-selections <<< "mysql-server mysql-server/root_pass\
9 word_again password {{ pillar['mysql_password'] }}"
10
11
12 mysql:
13 pkg.installed:
14 - name: mysql-server
15 service.running:
16 - enable: True
17 - require:
18 - pkg: mysql-server
19
20 php5-mysql:
21 pkg.installed:
22 - name: php5-mysql
23
24 python-mysqldb:
25 pkg.installed
26
27 {% if salt['cmd.retcode']('test -d ' ~ grains['html_dir'] ~ '/phpmyadmin') != 0 %}
28
29 sl_phpmyadmin:
30 cmd.run:
31 - name: "ln -s /usr/share/phpmyadmin {{ grains['html_dir'] }}/phpmyadmin"

32 - require:
33 - service: mysql
34 {% endif %}
35
36 create_db:
37 cmd.run:
38 - name: mysql -u root -p{{ pillar['mysql_password'] }} -e "CREATE DATABASE I\
39 F NOT EXISTS {{ grains['db_name'] }}"
40
41 mysql_tuner:
42 cmd:
43 - run
44 - cwd: /tmp
45 - name: "wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master\
46 /mysqltuner.pl -O mt.pl; perl mt.pl>mt.log"
47 - require:
48 - service: mysql

Installing and Configuring PHP


As usual, we will proceed with the creation of the file tree:

1 /srv/salt/php5/
2 └── php5.sls

The SLS file is quite simple:

1 php5:
2 pkg.installed:
3 - name: php5
4 php5-cli:
5 pkg.installed:
6 - name: php5-cli

In the last configuration, Salt is asked to install php5 and php5-cli. The first is needed to run dynamic
PHP web pages under Apache2, and the second is necessary to execute PHP commands from the command
line (in our case it will be used with wp-cli).

Installing and Configuring Wordpress


First of all, we create the necessary files:

1 /srv/salt/wordpress/
2 └── wp-cli.sls

The second step is to tell Salt to create the directory of our Wordpress installation:

1 {{ grains['html_dir'] }}/blog:
2 file.directory:
3 - user: www-data
4 - group: www-data
5 - mode: 755
6 - makedirs: False
7 - recurse:
8 - user
9 - group
10 - mode

The directory will be created in:

1 /var/www/html/blog

This directory should be owned (for security reasons) by the www-data user and the www-data
group, and it will have 755 as its mode. The recurse option applies this configuration
(user, group and mode) recursively.
Setting makedirs to False means that missing parent directories will not be created automatically.
We will then download wp-cli and use it to download the Wordpress files to html_dir.

1 wp_cli_download:
2 cmd.run:
3 - cwd: {{ grains['html_dir'] }}/blog
4 - name: curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/pha\
5 r/wp-cli.phar > wp-cli.phar
6
7 wp_cli_mod:
8 cmd.run:
9 - cwd: {{ grains['html_dir'] }}/blog
10 - name: chmod +x wp-cli.phar
11
12 wp_cli_path:
13 cmd.run:
14 - cwd: {{ grains['html_dir'] }}/blog

15 - name: mv wp-cli.phar /usr/local/bin/wp


16
17 core_download:
18 cmd.run:
19 - cwd: {{ grains['html_dir'] }}/blog
20 - name: wp core download --force
21 - user: {{ grains['web_user'] }}

A Wordpress website is based on the configuration in the wp-config.php file. We will check if the
file already exists and in this case we will delete it to re-create and re-configure it:

1 {% if salt['cmd.retcode']('test -s ' ~ grains['html_dir'] ~ '/blog/wp-config.p\
2 hp') == 0 %}
3
4 wp_config_remove:
5 cmd.run:
6 - cwd: {{ grains['html_dir'] }}/blog
7 - name: rm -f {{ grains['html_dir'] }}/blog/wp-config.php
8 - force: True
9
10 wp_config:
11 cmd.run:
12 - cwd: {{ grains['html_dir'] }}/blog
13 - name: wp core config --dbname={{ grains['db_name'] }} --dbuser=root --dbp\
14 ass={{ pillar['mysql_password'] }} --skip-check
15 - user: {{ grains['web_user'] }}
16 - force: True
17 {% endif %}

Note that other wp-cli commands can be found in its online official documentation. By typing the
command wp, you will have almost full control of your Wordpress website.
All commands issued by Salt must be executed with the user www-data and this is why we use:

1 user: {{ grains['web_user'] }}

Remember that ‘grains’ are stored in:

1 /etc/salt/minion.d/*.conf

and that we created:



1 /etc/salt/minion.d/web.conf

with the following configuration:

1 grains:
2 apache_timeout: 600
3 keep_alive: On
4 MaxKeepAliveRequests: 150
5 KeepAliveTimeout: 10
6 html_dir: /var/www/html
7 web_user: www-data
8 db_name: wp_db
9 blog_url: 10.10.10.41/blog
10 blog_title: DevOps Blog
11 admin_email: [email protected]

Scheduling Wordpress Monitoring Tasks


You can just use the example detailed in this chapter about scheduling monitoring tasks.
We have used the following command to add a monitoring task:

1 salt '*' schedule.add sched_monitoring function='cmd.run' job_args="['wget \"www\


2 .eon01.com\" --timeout 30 -O - 2>/dev/null | grep \"dev & ops\" || echo \"The si\
3 te is down\" | mail -s \"Your site is down\" [email protected]']" seconds=10

Here is another example, using curl with detailed timing output about your infrastructure, network and
application.
First, create this file:

1 echo -e \
2 "time_namelookup: %{time_namelookup}\n
3 time_connect: %{time_connect}\n
4 time_appconnect: %{time_appconnect}\n
5 time_pretransfer: %{time_pretransfer}\n
6 time_redirect: %{time_redirect}\n
7 time_starttransfer: %{time_starttransfer}\n
8 ----------\n
9 time_total: %{time_total}\n" > curl_detailed_response.conf

We will add the next command to the scheduled tasks list:



1 curl -w "@curl_detailed_response.conf" -o /dev/null -s "http://10.10.10.41/blog"

This is how we do it:

1 salt '*' schedule.add sched_monitoring function='cmd.run' job_args="['curl -w \"\


2 @curl_detailed_response.conf\" -o /dev/null -s \"http://10.10.10.41/blog\" >> cu\
3 rl_results.log']"

It is recommended to go back to the Scheduling Monitoring Tasks Using SaltStack part of this
book.
You can also imagine other monitoring scenarios where you will put into practice what you have
learned through this book.

Conclusion
Finally, you need to execute:

1 salt '*' state.highstate

to run the installation of your Wordpress blog with Apache2, PHP and MySQL. Everything will be
configured as you have set it. A good practice with Salt is to debug before launching anything,
especially if you are running hotfixes in a production environment; for example, you can run the
highstate locally on a minion with:

1 salt-call state.highstate

This part of SaltStack For DevOps detailed the various steps of a concrete example: the installation
and post-installation of a LAMP stack with a running website. Security and production-environment
constraints were not fully taken into consideration, so be careful: test before you deploy and think
before you type.

Docker Quick-Start
To run Docker, I used Ubuntu 14.04 and a 64-bit installation (required), and I followed the official
tutorial from the Docker documentation. Normally you can run it on any other Ubuntu version or
distribution without problems, unless your Linux kernel is older than 3.10.
Just type the following command to be sure:

1 uname -r

If it is OK, add the GPG key:

1 sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58\


2 118E89F3A912897C070ADBF76221572C52609D

then add the repository to your sources list.

Create the docker.list file if it does not exist:

1 touch /etc/apt/sources.list.d/docker.list

then

1 echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | tee -a /etc/a\


2 pt/sources.list.d/docker.list

and finally install Docker:

1 apt-get update && apt-get purge lxc-docker && apt-get install docker-engine

If you want to use Docker with your current user, type:

1 sudo usermod -aG docker your_user_name

otherwise, for the next instructions you should use the root user:

1 su

For more details about installing Docker on Ubuntu check the official documentation34 .

Getting Docker Container System Information Using SaltStack
Now that ‘Docker’ is installed, let’s create a directory as our workspace:
34
https://docs.docker.com/engine/installation/ubuntulinux/

1 mkdir docker-salt; cd docker-salt

If you think about this, there are many ways to work with SaltStack and Docker:

• Install a masterless minion inside the Docker guest and use salt-call to have an established
remote-execution environment inside Docker.
• Do the same thing but mount your local /srv/salt/ to your guest
• Control the guest salt-minion with the host salt-master

In this book I tried to keep it stupidly simple (KISS) and avoid unnecessary complexity; that's
why we will consider the first scenario, and then you will see that it is easy to use the second.
Let's create the guest salt-minion configuration file:

1 echo "file_client: local" > minion

Create your Dockerfile with your favorite code editor (vi, vim, nano ..etc) and as a basic example,
let’s put this code:

1 FROM ubuntu:14.04
2
3 MAINTAINER Aymen El Amri [email protected]
4
5 RUN apt-get update
6
7 RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -q wget
8 RUN wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK\
9 -GPG-KEY.pub | sudo apt-key add -
10 RUN echo "deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty mai\
11 n"|tee -a /etc/apt/sources.list
12
13 RUN apt-get update
14 RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -q salt-minion
15
16 ADD minion /etc/salt/
17
18 RUN service salt-minion start
19
20 EXPOSE 8080
21 CMD /bin/bash

This Dockerfile will install Ubuntu and salt-minion, and will copy the minion configuration file
to /etc/salt/minion inside the image before starting salt-minion.
Make sure you are in the directory containing the Dockerfile and build the image, which we will
call docker-salt:

1 docker build --rm=true -t docker-salt .

Run the container from the built image:

1 docker run -t -i -d -h docker-salt-host --name docker-salt-container docker-s\


2 alt

The last command runs the image called docker-salt in a container that we called
docker-salt-container. The Docker guest will have docker-salt-host as its hostname
(/etc/hostname).
Make sure your container is up.
When you type

1 docker ps

you should see it:

1 CONTAINER ID IMAGE COMMAND CREATED \


2 STATUS PORTS NAMES
3 fa4112583e73 docker-salt:latest "/bin/bash" 2 seconds ago \
4 Up 2 seconds docker-salt-container

In general, the CONTAINER ID (in our case fa4112583e73) can also be used instead of the container name
when working with Docker.
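For instance, the id can replace the container name in the docker commands used below; this quick check
should print the hostname we set with the -h flag:

docker exec -t -i fa4112583e73 hostname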
To test if everything is ok, let’s execute salt-call inside the container.

Remember, the salt-call command is used to run SaltStack functions on a masterless minion.


Let's get some system and networking information about our host (SSDs, BIOS release date, BIOS
version, CPU flags, CPU model, CPU architecture, FQDN, hostname, network interfaces, kernel,
kernel release, etc.).
One simple command:

1 docker exec -t -i docker-salt-container salt-call grains.items

will print all the above information:

1 local:
2 ----------
3 SSDs:
4 biosreleasedate:
5 /dev/mem: No such file or directory
6 biosversion:
7 /dev/mem: No such file or directory
8 cpu_flags:
9 - fpu
10 - vme
11 - de
12 - pse
13 - tsc
14 - msr
15 - pae
16 - mce
17 - cx8
18 - apic
19 - sep
20 - mtrr
21 - pge
22 - mca
23 - cmov
24 - pat
25 - pse36
26 - clflush
27 - dts
28 - acpi
29 - mmx
30 - fxsr
31 - sse
32 - sse2
33 - ss
34 - ht
35 - tm
36 - pbe
37 - syscall
38 - nx

39 - pdpe1gb
40 - rdtscp
41 - lm
42 - constant_tsc
43 - arch_perfmon
44 - pebs
45 - bts
46 - rep_good
47 - nopl
48 - xtopology
49 - nonstop_tsc
50 - aperfmperf
51 - eagerfpu
52 - pni
53 - pclmulqdq
54 - dtes64
55 - monitor
56 - ds_cpl
57 - vmx
58 - est
59 - tm2
60 - ssse3
61 - fma
62 - cx16
63 - xtpr
64 - pdcm
65 - pcid
66 - sse4_1
67 - sse4_2
68 - movbe
69 - popcnt
70 - tsc_deadline_timer
71 - aes
72 - xsave
73 - avx
74 - f16c
75 - rdrand
76 - lahf_lm
77 - abm
78 - ida
79 - arat
80 - epb

81 - xsaveopt
82 - pln
83 - pts
84 - dtherm
85 - tpr_shadow
86 - vnmi
87 - flexpriority
88 - ept
89 - vpid
90 - fsgsbase
91 - tsc_adjust
92 - bmi1
93 - avx2
94 - smep
95 - bmi2
96 - erms
97 - invpcid
98 cpu_model:
99 Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz
100 cpuarch:
101 x86_64
102 domain:
103 fqdn:
104 docker-salt-host
105 fqdn_ip4:
106 - 172.17.0.10
107 fqdn_ip6:
108 gpus:
109 host:
110 docker-salt-host
111 hwaddr_interfaces:
112 ----------
113 eth0:
114 02:42:ac:11:00:0a
115 lo:
116 00:00:00:00:00:00
117 id:
118 docker-salt-host
119 init:
120 unknown
121 ip4_interfaces:
122 ----------

123 eth0:
124 - 172.17.0.10
125 lo:
126 - 127.0.0.1
127 ip6_interfaces:
128 ----------
129 eth0:
130 - fe80::42:acff:fe11:a
131 lo:
132 - ::1
133 ip_interfaces:
134 ----------
135 eth0:
136 - 172.17.0.10
137 - fe80::42:acff:fe11:a
138 lo:
139 - 127.0.0.1
140 - ::1
141 ipv4:
142 - 127.0.0.1
143 - 172.17.0.10
144 ipv6:
145 - ::1
146 - fe80::42:acff:fe11:a
147 kernel:
148 Linux
149 kernelrelease:
150 3.16.0-38-generic
151 locale_info:
152 ----------
153 defaultencoding:
154 None
155 defaultlanguage:
156 None
157 detectedencoding:
158 ANSI_X3.4-1968
159 localhost:
160 docker-salt-host
161 lsb_distrib_codename:
162 trusty
163 lsb_distrib_description:
164 Ubuntu 14.04.3 LTS

165 lsb_distrib_id:
166 Ubuntu
167 lsb_distrib_release:
168 14.04
169 manufacturer:
170 /dev/mem: No such file or directory
171 master:
172 salt
173 mdadm:
174 mem_total:
175 7636
176 nodename:
177 docker-salt-host
178 num_cpus:
179 4
180 num_gpus:
181 0
182 os:
183 Ubuntu
184 os_family:
185 Debian
186 osarch:
187 amd64
188 oscodename:
189 trusty
190 osfinger:
191 Ubuntu-14.04
192 osfullname:
193 Ubuntu
194 osrelease:
195 14.04
196 osrelease_info:
197 - 14
198 - 4
199 path:
200 /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
201 productname:
202 /dev/mem: No such file or directory
203 ps:
204 ps -efHww
205 pythonexecutable:
206 /usr/bin/python

207 pythonpath:
208 - /usr/bin
209 - /usr/lib/python2.7
210 - /usr/lib/python2.7/plat-x86_64-linux-gnu
211 - /usr/lib/python2.7/lib-tk
212 - /usr/lib/python2.7/lib-old
213 - /usr/lib/python2.7/lib-dynload
214 - /usr/local/lib/python2.7/dist-packages
215 - /usr/lib/python2.7/dist-packages
216 pythonversion:
217 - 2
218 - 7
219 - 6
220 - final
221 - 0
222 saltpath:
223 /usr/lib/python2.7/dist-packages/salt
224 saltversion:
225 2015.8.3
226 saltversioninfo:
227 - 2015
228 - 8
229 - 3
230 - 0
231 serialnumber:
232 /dev/mem: No such file or directory
233 server_id:
234 324894828
235 shell:
236 /bin/sh
237 virtual:
238 physical
239 virtual_subtype:
240 Docker
241 zmqversion:
242 4.0.4

Monitoring Docker Containers Using SaltStack


We will not need many changes in our Dockerfile: we just install the psutil library, which Salt uses to
get system metrics.

1 FROM ubuntu:14.04
2
3 MAINTAINER Aymen El Amri [email protected]
4
5 RUN apt-get update
6
7 RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -q wget
8 RUN wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK\
9 -GPG-KEY.pub | sudo apt-key add -
10 RUN echo "deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty mai\
11 n"|tee -a /etc/apt/sources.list
12
13 RUN apt-get update
14 RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -q salt-minion
15
16 RUN apt-get install python-dev -y
17 RUN wget https://bootstrap.pypa.io/get-pip.py
18 RUN python get-pip.py
19 RUN /usr/bin/yes | pip install psutil
20
21 ADD minion /etc/salt/
22
23 RUN service salt-minion start
24
25 EXPOSE 8080
26 CMD /bin/bash

Rebuild the image and run the container (the commands are recalled below), then you can get monitoring information easily:
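For reference, these are the same build and run commands used earlier in this section (remove the previous
container first if it is still running):

docker build --rm=true -t docker-salt .
docker run -t -i -d -h docker-salt-host --name docker-salt-container docker-salt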

• Boot time:

1 docker exec -t -i docker-salt-container salt-call ps.boot_time --output json

Output

1 {
2 "local": 1450992213
3 }

• CPU usage percentage:



1 docker exec -t -i docker-salt-container salt-call ps.cpu_percent --output json

Output

1 {
2 "local": 25.0
3 }

• Disk I/O:

1 docker exec -t -i docker-salt-container salt-call ps.disk_io_counters --output \


2 json

Output

1 {
2 "local": {
3 "read_time": 819864,
4 "write_bytes": 3998978048,
5 "read_bytes": 1336084480,
6 "write_time": 2441316,
7 "read_count": 54610,
8 "write_count": 146160
9 }
10 }

You can use other functions, such as the following (an example is given after the list):

• disk_partition_usage
• cpu_times
• disk_partitions
• disk_usage
• get_pid_list
• get_users
• network_io_counters
• num_cpus
• proc_info <pid>
• swap_memory
• top
• total_physical_memory
• virtual_memory
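For instance, memory usage can be queried in exactly the same way; this sketch uses the virtual_memory
function from the list above:

docker exec -t -i docker-salt-container salt-call ps.virtual_memory --output json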

Provisioning Docker Containers With SaltStack


Follow the previous steps to set up your Docker build and then the container where the masterless salt-
minion is installed.

1 docker build --rm=true -t docker-salt .


2 docker run -t -i -d -h docker-salt-host --name docker-salt-container docker-s\
3 alt

Make sure your container is running by using ps:

1 docker ps -l

You should see a container with the name docker-salt-container:

1 CONTAINER ID IMAGE COMMAND CREATED \


2 STATUS PORTS NAMES
3 43591d8dff92 docker-salt:latest "/bin/sh -c /bin/bas 11 minutes ago \
4 Up 11 minutes 8080/tcp docker-salt-container

Note that if you want to stop your container you should use the following command:

1 docker stop docker-salt-container

Stopping a container is not removing it. If you would like to remove it type:

1 docker rm docker-salt-container

Now we want to install packages inside the running container using SaltStack. Well, it is simple,
let’s install vim:

1 docker exec -t -i docker-salt-container salt-call pkg.install vim

You should see an output similar to the following one:



1 [INFO ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package\


2 } ${Version} ${Architecture}\n', '-W'] in directory '/root'
3 [INFO ] Executing command ['apt-get', '-q', '-y', '-o', 'DPkg::Options::=--fo\
4 rce-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'vim'] in dire\
5 ctory '/root'
6 [INFO ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package\
7 } ${Version} ${Architecture}\n', '-W'] in directory '/root'
8 local:
9 ----------
10 libgpm2:
11 ----------
12 new:
13 1.20.4-6.1
14 old:
15 vim:
16 ----------
17 new:
18 2:7.4.052-1ubuntu3
19 old:
20 vim-runtime:
21 ----------
22 new:
23 2:7.4.052-1ubuntu3
24 old:

Let's check which version of vim was installed:

1 docker exec -t -i docker-salt-container salt-call pkg.version "vim"

Output:

1 [INFO ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package\


2 } ${Version} ${Architecture}\n', '-W'] in directory '/root'
3 local:
4 2:7.4.052-1ubuntu3

If you would like to remove vim, just type:

1 docker exec -t -i docker-salt-container salt-call pkg.remove "vim"

and it will be removed from the system:



1 [INFO ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package\


2 } ${Version} ${Architecture}\n', '-W'] in directory '/root'
3 local:

Please note that if the container is removed you will lose these changes: vim will no longer be
installed. You should commit the changes made in your container. Type the next command:

1 docker commit 5ac064ad5d91 docker-salt

This will commit the changes that happened in the container with the mentioned id to the image called
docker-salt. Of course, replace 5ac064ad5d91 with your container id (get it with the docker ps command).
Note that docker-salt is the image name.
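You can also commit using the container name instead of the id (with the names used above):

docker commit docker-salt-container docker-salt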
Let's verify that everything was committed to the image: we stop the container, remove it, run it again and
query the vim version. If we get a version back, then the package is installed (even after removing the
container); if we do not see a version, vim is not installed.

1 docker stop docker-salt-container


2 docker rm docker-salt-container
3 docker run -t -i -d -h docker-salt-host --name docker-salt-container docker-s\
4 alt
5 docker exec -t -i docker-salt-container salt-call pkg.version "vim"

Using Salt Cloud With Linode Cloud


Linode (a portmanteau of the words linux and node) is a virtual private server/cloud hosting provider
offering SSD Linux servers and simple management tools.
Since we are just learning, we will be using the Linode 1GB server so that everyone reading this
book can work with Salt Cloud on a real remote server without paying too much.

Linode Pricing

Follow those steps to set up a Linode key that will be used with Salt Cloud:

• Go to Linode35
• Create a free account
• After confirming the account from your email, log in
• Click on My Profile
• Re-authenticate to unlock the settings
• Click on API keys
• Generate a key (I gave it the name salt_cloud_demo)

Save the generated key on your local machine since you will not be able to view it again
on the web manager.
Please note that I will use this file structure when I work with Salt Cloud:

35
https://manager.linode.com/session/signup

1 /etc/salt/
2 ├── cloud
3 ├── cloud.conf.d/
4 ├── cloud.deploy.d/
5 ├── cloud.maps.d/
6 ├── cloud.profiles.d/
7 └── cloud.providers.d/

Let’s set up the provider configuration file for Linode:

1 /etc/salt/cloud.providers.d/linode.providers.conf

Linode Provider Configuration


We are using a simple configuration that I called linode-config.

1 linode-config:
2 provider: linode
3 apikey: O61KWcOQ
4 password: *************
5 ssh_pubkey: ssh-rsa xzt4Gg3WjHZgFvRU8EMUyVSxwwilKogg0TK0BkbY0gq+IADKMxhuq9yW4O\
6 nxlWcO+n2VkdaFZlAIh+/Sll7HdfmBclvGbw== root@eonSpider
7 ssh_key_file: /root/.ssh/id_rsa

As you can see in the configuration, we are using the apikey provided by Linode along with the password.
You should also configure the ssh_pubkey that you are using on your machine (in this example, I
am running the Salt Cloud commands from my local machine against a remote Linode server). If you do
not have a public key, generate a key pair (public/private) using this command:

1 ssh-keygen

Linode Profile Configuration


To create a Linode server using Salt Cloud, you need to configure a profile for a VM.
Before trying that, it is a good idea to verify that the provider configuration works. To test it, let's
play with some commands.
Let’s view the different Linode sizes:

1 salt-cloud --list-sizes linode-config

You should see something like this:



1 linode-config:
2 ----------
3 linode:
4 ----------
5 Linode 1024:
6 ----------
7 bandwidth:
8 2000
9 disk:
10 24576
11 driver:
12 extra:
13 ----------
14 get_uuid:
15 id:
16 1
17 name:
18 Linode 1024
19 price:
20 10.0
21 ram:
22 1024
23 uuid:
24 03e18728ce4629e2ac07c9cbb48afffb8cb499c4
25 Linode 16384:
26 ----------
27 bandwidth:
28 16000
29 disk:
30 393216
31 driver:
32 extra:
33 ----------
34 get_uuid:
35 id:
36 7
37 name:
38 Linode 16384
39 price:
40 160.0
41 ram:
42 16384

43 uuid:
44 a93683ef7f41a3a7ccb14777d95f03bb6b046ec6
45 Linode 2048:
46 ----------
47 bandwidth:
48 3000
49 disk:
50 49152
51 driver:
52 extra:
53 ----------
54 get_uuid:
55 id:
56 2
57 name:
58 Linode 2048
59 price:
60 20.0
61 ram:
62 2048
63 uuid:
64 5a1c572b48a26aa78799facaad413d901a73a673
65 Linode 32768:
66 ----------
67 bandwidth:
68 20000
69 disk:
70 786432
71 driver:
72 extra:
73 ----------
74 get_uuid:
75 id:
76 8
77 name:
78 Linode 32768
79 price:
80 320.0
81 ram:
82 32768
83 uuid:
84 b3586dbcfcb67d3aba9093c472b852bc860846a6

85 Linode 4096:
86 ----------
87 bandwidth:
88 4000
89 disk:
90 98304
91 driver:
92 extra:
93 ----------
94 get_uuid:
95 id:
96 4
97 name:
98 Linode 4096
99 price:
100 40.0
101 ram:
102 4096
103 uuid:
104 c08ae025461ecc5f845ede0d95e1cb3e61a8f13b
105 Linode 49152:
106 ----------
107 bandwidth:
108 20000
109 disk:
110 1179648
111 driver:
112 extra:
113 ----------
114 get_uuid:
115 id:
116 9
117 name:
118 Linode 49152
119 price:
120 480.0
121 ram:
122 49152
123 uuid:
124 954b931001923ab473861c7d427863c3b1df8de7
125 Linode 65536:
126 ----------

127 bandwidth:
128 20000
129 disk:
130 1572864
131 driver:
132 extra:
133 ----------
134 get_uuid:
135 id:
136 10
137 name:
138 Linode 65536
139 price:
140 640.0
141 ram:
142 65536
143 uuid:
144 ca78fb5bfed412bd33d434dbd06340de377423c8
145 Linode 8192:
146 ----------
147 bandwidth:
148 8000
149 disk:
150 196608
151 driver:
152 extra:
153 ----------
154 get_uuid:
155 id:
156 6
157 name:
158 Linode 8192
159 price:
160 80.0
161 ram:
162 8192
163 uuid:
164 9501e716d5ee4502bbeaecf0d4daa2fe2862200f
165 Linode 98304:
166 ----------
167 bandwidth:
168 20000

169 disk:
170 1966080
171 driver:
172 extra:
173 ----------
174 get_uuid:
175 id:
176 12
177 name:
178 Linode 98304
179 price:
180 960.0
181 ram:
182 98304
183 uuid:
184 d0dc3fdd2a6cdf54aff0e0213e493dff6d80e5cf

If you got a similar output, congratulations, your configuration works.


Let’s try to see the distributions that we can use:

1 salt-cloud --list-images linode-config

You will see the images that you can use on your Linode server:

1 linode-config:
2 ----------
3 linode:
4 ----------
5 Arch Linux 2015.02:
6 ----------
7 driver:
8 extra:
9 ----------
10 64bit:
11 1
12 pvops:
13 1
14 get_uuid:
15 id:
16 138
17 name:

18 Arch Linux 2015.02


19 uuid:
20 74a8e6d46c4e30312ab39aa499702dbc615db1ea
21 Arch Linux 2015.08:
22 ----------
23 driver:
24 extra:
25 ----------
26 64bit:
27 1
28 pvops:
29 0
30 get_uuid:
31 id:
32 142
33 name:
34 Arch Linux 2015.08
35 uuid:
36 7fb9cf7b1ece5101e4056f9de9fc7f1945cde0ef
37 CentOS 5.6:
38 ----------
39 driver:
40 extra:
41 ----------
42 64bit:
43 1
44 pvops:
45 1
46 get_uuid:
47 id:
48 60
49 name:
50 CentOS 5.6
51 uuid:
52 f17af5e85ef920dba6c54efb1f7e059bb814e17a
53 CentOS 6.5:
54 ----------
55 driver:
56 extra:
57 ----------
58 64bit:
59 1

60 pvops:
61 1
62 get_uuid:
63 id:
64 127
65 name:
66 CentOS 6.5
67 uuid:
68 f12d308795a507cc73a3cf5f7aacdf2d86fbcf4a
69 CentOS 7:
70 ----------
71 driver:
72 extra:
73 ----------
74 64bit:
75 1
76 pvops:
77 1
78 get_uuid:
79 id:
80 129
81 name:
82 CentOS 7
83 uuid:
84 87e25228eddb0c32db87b502b3bf4a9aaf64d15c
85 Debian 6:
86 ----------
87 driver:
88 extra:
89 ----------
90 64bit:
91 1
92 pvops:
93 1
94 get_uuid:
95 id:
96 78
97 name:
98 Debian 6
99 uuid:
100 87d8bc1d0236a365f444138cf4158e404c121e75
101 Debian 7:

102 ----------
103 driver:
104 extra:
105 ----------
106 64bit:
107 1
108 pvops:
109 1
110 get_uuid:
111 id:
112 130
113 name:
114 Debian 7
115 uuid:
116 2619a13cd4e5bc5f10fa04b617231c7085167e6a
117 Debian 8.1:
118 ----------
119 driver:
120 extra:
121 ----------
122 64bit:
123 1
124 pvops:
125 1
126 get_uuid:
127 id:
128 140
129 name:
130 Debian 8.1
131 uuid:
132 57ccf3e88161c13987b6f98976c7bdf791237dbf
133 Fedora 20:
134 ----------
135 driver:
136 extra:
137 ----------
138 64bit:
139 1
140 pvops:
141 1
142 get_uuid:
143 id:

144 122
145 name:
146 Fedora 20
147 uuid:
148 b69c15db35d7b987ecf8d6d206c47b22b97e2e81
149 Fedora 21:
150 ----------
151 driver:
152 extra:
153 ----------
154 64bit:
155 1
156 pvops:
157 1
158 get_uuid:
159 id:
160 134
161 name:
162 Fedora 21
163 uuid:
164 35a3258a19bbaf60f137ffbefcf06475e145a0d8
165 Fedora 22:
166 ----------
167 driver:
168 extra:
169 ----------
170 64bit:
171 1
172 pvops:
173 1
174 get_uuid:
175 id:
176 141
177 name:
178 Fedora 22
179 uuid:
180 fd93443f93e0248202f9f2e2dad0935697537f20
181 Gentoo 2013-11-26:
182 ----------
183 driver:
184 extra:
185 ----------

186 64bit:
187 1
188 pvops:
189 1
190 get_uuid:
191 id:
192 118
193 name:
194 Gentoo 2013-11-26
195 uuid:
196 aec7ded0f24b21dc761d3eacaa2f696c07067a56
197 Gentoo 2014.12:
198 ----------
199 driver:
200 extra:
201 ----------
202 64bit:
203 1
204 pvops:
205 1
206 get_uuid:
207 id:
208 137
209 name:
210 Gentoo 2014.12
211 uuid:
212 a69dd4adb372e2b1599c4fab23994df70ca3253f
213 Slackware 13.37:
214 ----------
215 driver:
216 extra:
217 ----------
218 64bit:
219 1
220 pvops:
221 1
222 get_uuid:
223 id:
224 87
225 name:
226 Slackware 13.37
227 uuid:

228 8ad9944af892c7a355f43062b0056652e4aea626
229 Slackware 13.37 32bit:
230 ----------
231 driver:
232 extra:
233 ----------
234 64bit:
235 0
236 pvops:
237 1
238 get_uuid:
239 id:
240 86
241 name:
242 Slackware 13.37 32bit
243 uuid:
244 83c081a5c8d2355942b6cd1b03efee145559c546
245 Slackware 14.1:
246 ----------
247 driver:
248 extra:
249 ----------
250 64bit:
251 1
252 pvops:
253 1
254 get_uuid:
255 id:
256 117
257 name:
258 Slackware 14.1
259 uuid:
260 076e7f332f204bf9ae08896bf3d10a2e464a054b
261 Ubuntu 12.04 LTS:
262 ----------
263 driver:
264 extra:
265 ----------
266 64bit:
267 1
268 pvops:
269 1

270 get_uuid:
271 id:
272 126
273 name:
274 Ubuntu 12.04 LTS
275 uuid:
276 f5cadba59311f026e739794ae454d04228323aa9
277 Ubuntu 14.04 LTS:
278 ----------
279 driver:
280 extra:
281 ----------
282 64bit:
283 1
284 pvops:
285 1
286 get_uuid:
287 id:
288 124
289 name:
290 Ubuntu 14.04 LTS
291 uuid:
292 18be6ebe9bb4f9a818f95a522ac213cfdf295e84
293 Ubuntu 15.04:
294 ----------
295 driver:
296 extra:
297 ----------
298 64bit:
299 1
300 pvops:
301 1
302 get_uuid:
303 id:
304 139
305 name:
306 Ubuntu 15.04
307 uuid:
308 28c371ac941cf095abd16c29591d271d2417a925
309 Ubuntu 15.10:
310 ----------
311 driver:

312 extra:
313 ----------
314 64bit:
315 1
316 pvops:
317 1
318 get_uuid:
319 id:
320 144
321 name:
322 Ubuntu 15.10
323 uuid:
324 e9884caca236aa493211b58cecfe5513f1f949fc
325 openSUSE 13.1:
326 ----------
327 driver:
328 extra:
329 ----------
330 64bit:
331 1
332 pvops:
333 1
334 get_uuid:
335 id:
336 120
337 name:
338 openSUSE 13.1
339 uuid:
340 e676f73133eb146eafb0df9218ef6520f5b01b56
341 openSUSE 13.2:
342 ----------
343 driver:
344 extra:
345 ----------
346 64bit:
347 1
348 pvops:
349 1
350 get_uuid:
351 id:
352 135
353 name:

354 openSUSE 13.2


355 uuid:
356 29d348ebca4fb639eaa4809939ecef4bbfe5b9c9

You can request other information, like the different locations that you can choose from when deploying a
new Linode machine:

1 salt-cloud --list-locations linode-config

Here is the output:

1 linode-config:
2 ----------
3 linode:
4 ----------
5 Atlanta, GA, USA:
6 ----------
7 country:
8 US
9 driver:
10 id:
11 4
12 name:
13 Atlanta, GA, USA
14 Dallas, TX, USA:
15 ----------
16 country:
17 US
18 driver:
19 id:
20 2
21 name:
22 Dallas, TX, USA
23 Frankfurt, DE:
24 ----------
25 country:
26 ??
27 driver:
28 id:
29 10
30 name:
31 Frankfurt, DE

32 Fremont, CA, USA:


33 ----------
34 country:
35 US
36 driver:
37 id:
38 3
39 name:
40 Fremont, CA, USA
41 London, England, UK:
42 ----------
43 country:
44 GB
45 driver:
46 id:
47 7
48 name:
49 London, England, UK
50 Newark, NJ, USA:
51 ----------
52 country:
53 US
54 driver:
55 id:
56 6
57 name:
58 Newark, NJ, USA
59 Singapore, SG:
60 ----------
61 country:
62 ??
63 driver:
64 id:
65 9
66 name:
67 Singapore, SG
68 Tokyo, JP:
69 ----------
70 country:
71 JP
72 driver:
73 id:

74 8
75 name:
76 Tokyo, JP

Salt Cloud offers many other possibilities to interface with the Linode API; the following
functions can be used:

1 salt.cloud.clouds.linode.avail_images(call=None)
2 salt.cloud.clouds.linode.avail_locations(call=None)
3 salt.cloud.clouds.linode.avail_sizes(call=None)
4 salt.cloud.clouds.linode.boot(name=None, kwargs=None, call=None)
5 salt.cloud.clouds.linode.clone(kwargs=None, call=None)
6 salt.cloud.clouds.linode.create(vm_)
7 salt.cloud.clouds.linode.create_config(kwargs=None, call=None)
8 salt.cloud.clouds.linode.create_disk_from_distro(vm_, linode_id, swap_size=None)
9 salt.cloud.clouds.linode.create_private_ip(vm_, linode_id)
10 salt.cloud.clouds.linode.create_swap_disk(vm_, linode_id, swap_size=None)
11 salt.cloud.clouds.linode.destroy(name, call=None)
12 salt.cloud.clouds.linode.get_config_id(kwargs=None, call=None)
13 salt.cloud.clouds.linode.get_configured_provider()
14 salt.cloud.clouds.linode.get_datacenter_id(location)
15 salt.cloud.clouds.linode.get_disk_size(vm_, swap, linode_id)
16 salt.cloud.clouds.linode.get_distribution_id(vm_)
17 salt.cloud.clouds.linode.get_ips(linode_id=None)
18 salt.cloud.clouds.linode.get_linode(kwargs=None, call=None)
19 salt.cloud.clouds.linode.get_linode_id_from_name(name)
20 salt.cloud.clouds.linode.get_password(vm_)
21 salt.cloud.clouds.linode.get_plan_id(kwargs=None, call=None)
22 salt.cloud.clouds.linode.get_private_ip(vm_)
23 salt.cloud.clouds.linode.get_pub_key(vm_)
24 salt.cloud.clouds.linode.get_swap_size(vm_)
25 salt.cloud.clouds.linode.get_vm_size(vm_)
26 salt.cloud.clouds.linode.list_nodes(call=None)
27 salt.cloud.clouds.linode.list_nodes_full(call=None)
28 salt.cloud.clouds.linode.list_nodes_min(call=None)
29 salt.cloud.clouds.linode.list_nodes_select(call=None)
30 salt.cloud.clouds.linode.reboot(name, call=None)
31 salt.cloud.clouds.linode.show_instance(name, call=None)
32 salt.cloud.clouds.linode.show_pricing(kwargs=None, call=None)
33 salt.cloud.clouds.linode.start(name, call=None)
34 salt.cloud.clouds.linode.stop(name, call=None)
35 salt.cloud.clouds.linode.update_linode(linode_id, update_args=None)

You can find more details about this on the Salt Linode documentation page36 .

Using Salt Cloud To Automate Linode VPS Creation


We have already made progress on the configuration of Linode as a provider, and
now we are going to test the profile.
The profile will allow us to create Cloud VMs, so let's create a profile for a testing machine.
This is the configuration of our testing VM:

• We are going to choose a Linode 1024 server
• We are going to install Ubuntu 14.04 LTS
• The region will be Frankfurt
• Our new machine will be set up as a minion and its master will be your local machine
• This machine is for testing purposes, so we will add a grain called role with the value testing_server

To configure the master of the new minion machine you can simply put your IP. To get your
IP without leaving your terminal, you can use the following command: curl icanhazip.com.
According to the description above, the Salt Cloud profile file will be similar to the next one:

1 ubuntu_linode:
2 provider: linode-config
3 size: Linode 1024
4 image: Ubuntu 14.04 LTS
5 location: Frankfurt, DE
6 minion:
7 master: 20a1:e35:9beb:6a00:2293:5229:2400:a57f # This is an IP v6.
8 grains:
9 role: testing_server

Nice! Let's start our machine testing_server based on the ubuntu_linode profile:

1 sudo salt-cloud -p ubuntu_linode testing_server

You can debug your output:

1 sudo salt-cloud -p ubuntu_linode testing_server -l debug

You will be able to see the detailed logs of the execution:

36
https://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.linode.html

1 DEBUG ] Reading configuration from /etc/salt/cloud


2 [DEBUG ] Reading configuration from /etc/salt/master
3 [DEBUG ] Using cached minion ID from /etc/salt/minion_id: eonSpider
4 [DEBUG ] Missing configuration file: /etc/salt/cloud.providers
5 [DEBUG ] Including configuration from '/etc/salt/cloud.providers.d/linode.prov\
6 iders.conf'
7 [DEBUG ] Reading configuration from /etc/salt/cloud.providers.d/linode.provide\
8 rs.conf
9 [DEBUG ] Missing configuration file: /etc/salt/cloud.profiles
10 [DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/linode.profi\
11 les.conf'
12 [DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/linode.profiles\
13 .conf
14 [DEBUG ] Configuration file path: /etc/salt/cloud
15 [INFO ] salt-cloud starting
16 [DEBUG ] Could not LazyLoad parallels.avail_sizes
17 [DEBUG ] LazyLoaded parallels.avail_locations
18 [DEBUG ] LazyLoaded proxmox.avail_sizes
19 [DEBUG ] Could not LazyLoad saltify.destroy
20 [DEBUG ] Could not LazyLoad saltify.avail_sizes
21 [DEBUG ] Could not LazyLoad saltify.avail_images
22 [DEBUG ] Could not LazyLoad saltify.avail_locations
23 [DEBUG ] LazyLoaded rackspace.reboot
24 [DEBUG ] LazyLoaded openstack.list_locations
25 [DEBUG ] LazyLoaded rackspace.list_locations
26 [DEBUG ] Could not LazyLoad linode.optimize_providers
27 [DEBUG ] The 'linode' cloud driver is unable to be optimized.
28 [DEBUG ] Could not LazyLoad parallels.avail_sizes
29 [DEBUG ] LazyLoaded parallels.avail_locations
30 [DEBUG ] LazyLoaded proxmox.avail_sizes
31 [DEBUG ] Could not LazyLoad saltify.destroy
32 [DEBUG ] Could not LazyLoad saltify.avail_sizes
33 [DEBUG ] Could not LazyLoad saltify.avail_images
34 [DEBUG ] Could not LazyLoad saltify.avail_locations
35 [DEBUG ] LazyLoaded rackspace.reboot
36 [DEBUG ] LazyLoaded openstack.list_locations
37 [DEBUG ] LazyLoaded rackspace.list_locations
38 [DEBUG ] Generating minion keys for 'testing_server'
39 [DEBUG ] MasterEvent PUB socket URI: ipc:///var/run/salt/master/master_event_p\
40 ub.ipc
41 [DEBUG ] MasterEvent PULL socket URI: ipc:///var/run/salt/master/master_event_\
42 pull.ipc

43 [DEBUG ] Sending event - data = {'_stamp': '2016-02-25T18:34:26.465790'}


44 [DEBUG ] Sending event - data = {'profile': 'ubuntu_linode', 'event': 'startin\
45 g create', '_stamp': '2016-02-25T18:34:27.281133', 'name': 'testing_server', 'pr\
46 ovider': 'linode-config:linode'}
47 [INFO ] Creating Cloud VM testing_server
48 [DEBUG ] MasterEvent PUB socket URI: ipc:///var/run/salt/master/master_event_p\
49 ub.ipc
50 [DEBUG ] MasterEvent PULL socket URI: ipc:///var/run/salt/master/master_event_\
51 pull.ipc
52 [DEBUG ] Sending event - data = {'_stamp': '2016-02-25T18:34:32.476974'}
53 [DEBUG ] Sending event - data = {'_stamp': '2016-02-25T18:34:32.490534', 'even\
54 t': 'requesting instance', 'kwargs': {'name': 'testing_server', 'ex_rsize': 2444\
55 8, 'image': 'Ubuntu 14.04 LTS', 'ex_swap': 128, 'ex_private': False, 'size': 'Li\
56 node 1024', 'location': 'Frankfurt, DE'}}
57 [INFO ] Rendering deploy script: /usr/lib/python2.7/dist-packages/salt/cloud/\
58 deploy/bootstrap-salt.sh
59 [DEBUG ] MasterEvent PUB socket URI: ipc:///var/run/salt/master/master_event_p\
60 ub.ipc
61 [DEBUG ] MasterEvent PULL socket URI: ipc:///var/run/salt/master/master_event_\
62 pull.ipc
63 [DEBUG ] Sending event - data = {'_stamp': '2016-02-25T18:34:45.385425'}
64 [DEBUG ] Sending event - data = {'_stamp': '2016-02-25T18:34:45.386326', 'even\
65 t': 'executing deploy script', 'kwar
66 ...
67 ...
68 ...
69 * INFO: System Information:
70 * INFO: CPU: GenuineIntel
71 * INFO: CPU Arch: x86_64
72 * INFO: OS Name: Linux
73 * INFO: OS Version: 4.4.0-x86_64-linode63
74 * INFO: Distribution: Ubuntu 14.04
75 ...
76 ...
77 ...
78 DEBUG ] Connection to 1.1.1.1 closed.
79 [DEBUG ] Removed /tmp/.saltcloud-b0a0a690-2106-44f2-a81a-25d16487f2bb/minion
80 [DEBUG ] MasterEvent PUB socket URI: ipc:///var/run/salt/master/master_event_p\
81 ub.ipc
82 [DEBUG ] MasterEvent PULL socket URI: ipc:///var/run/salt/master/master_event_\
83 pull.ipc
84 [DEBUG ] Sending event - data = {'_stamp': '2016-02-25T18:36:51.304932'}

85 [DEBUG ] Sending event - data = {'host': '1.1.1.1', 'name': 'testing_server', \


86 '_stamp': '2016-02-25T18:36:51.305734', 'event': 'testing_server has been deploy\
87 ed at 1.1.1.1'}
88 [INFO ] Salt installed on testing_server
89 [INFO ] Created Cloud VM 'testing_server'
90 [DEBUG ] 'testing_server' VM creation details:
91 <Node: uuid=737ff2078c4e822e9b992c9de99e94d23f80a331, name=testing_server, state\
92 =3, public_ips=['1.1.1.1'], private_ips=[], provider=Linode ...>
93 [DEBUG ] MasterEvent PUB socket URI: ipc:///var/run/salt/master/master_event_p\
94 ub.ipc
95 [DEBUG ] MasterEvent PULL socket URI: ipc:///var/run/salt/master/master_event_\
96 pull.ipc
97 [DEBUG ] Sending event - data = {'_stamp': '2016-02-25T18:36:51.357829'}
98 [DEBUG ] Sending event - data = {'profile': 'ubuntu_linode', 'event': 'created\
99 instance', '_stamp': '2016-02-25T18:36:51.362601', 'name': 'testing_server', 'p\
100 rovider': 'linode-config:linode'}
101 [DEBUG ] LazyLoaded nested.output

Here is an example without activating the debug:

1 testing_server:
2 ----------
3 _uuid:
4 None
5 deployed:
6 True
7 driver:
8 extra:
9 ----------
10 ALERT_BWIN_ENABLED:
11 1
12 ALERT_BWIN_THRESHOLD:
13 10
14 ALERT_BWOUT_ENABLED:
15 1
16 ALERT_BWOUT_THRESHOLD:
17 10
18 ALERT_BWQUOTA_ENABLED:
19 1
20 ALERT_BWQUOTA_THRESHOLD:
21 80
22 ALERT_CPU_ENABLED:

23 1
24 ALERT_CPU_THRESHOLD:
25 90
26 ALERT_DISKIO_ENABLED:
27 1
28 ALERT_DISKIO_THRESHOLD:
29 10000
30 BACKUPSENABLED:
31 0
32 BACKUPWEEKLYDAY:
33 0
34 BACKUPWINDOW:
35 0
36 CREATE_DT:
37 2016-02-25 13:49:49.0
38 DATACENTERID:
39 10
40 DISTRIBUTIONVENDOR:
41 Ubuntu
42 ISKVM:
43 1
44 ISXEN:
45 0
46 LABEL:
47 testing_server
48 LINODEID:
49 1682323
50 LPM_DISPLAYGROUP:
51 PLANID:
52 1
53 STATUS:
54 0
55 TOTALHD:
56 24576
57 TOTALRAM:
58 1024
59 TOTALXFER:
60 2000
61 WATCHDOG:
62 1
63 id:
64 1682323

65 image:
66 None
67 name:
68 testing_server
69 private_ips:
70 public_ips:
71 - 1.1.1.1
72 size:
73 None
74 state:
75 3

Now you can check whether your salt-minion is set up by using:

1 sudo salt '*' test.ping

The result:

1 testing_server:
2 True

Congratulations, you have a running salt-minion Linode server.


If you want to set another profile, you can use the same profile file. In the next example, we are
creating a testing server and a production server:

1 ubuntu_linode_test:
2 provider: linode-config
3 size: Linode 1024
4 image: Ubuntu 14.04 LTS
5 location: Frankfurt, DE
6 minion:
7 master: 92.169.23.1
8 grains:
9 role: testing_server
10
11 ubuntu_linode_production:
12 provider: linode-config
13 size: Linode 1024
14 image: Ubuntu 14.04 LTS
15 location: Frankfurt, DE
16 minion:

17 master: 92.169.23.1
18 grains:
19 role: production_server

You can also create two separate files ubuntu_linode_test and ubuntu_linode_production.
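For example, following the directory layout used earlier for profile files (these file names are only
illustrative):

/etc/salt/cloud.profiles.d/ubuntu_linode_test.profiles.conf
/etc/salt/cloud.profiles.d/ubuntu_linode_production.profiles.conf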
To create multiple Linode machines with the same profile, you can simply type:

1 sudo salt-cloud -p ubuntu_linode_test test_machine_1 test_machine_2

Note that you can use the -P option to create the two machines in parallel.

1 sudo salt-cloud -P -p ubuntu_linode_test test_machine_1 test_machine_2

In the next step, we are going to use Salt Cloud Maps.


Imagine you have a list of VMs to create across multiple cloud providers. Provisioning your
infrastructure means creating multiple machines with different configurations. When using Salt
Cloud Maps, creating your infrastructure is easier than creating it with only Providers and Profiles.
In the map file, any VMs that already exist will remain the same and will not be modified and any
new VMs will be created.
To keep things organized, we will save all of our Salt Cloud Maps in the following directory:

1 /etc/salt/cloud.maps.d/

Imagine we had already set up several profiles with Linode as a cloud provider:

• linode_1024
• linode_2048
• linode_4096

and we would like to create three environments:

• Development
• Staging
• Production

and in every environment we need 3 servers:

• Nginx
• Mysql
• Redis
This map file will simplify the creation of this infrastructure:

1 linode_1024:
2 - redis_dev
3 - redis_staging
4 - redis_production
5 linode_2048:
6 - nginx_dev
7 - nginx_staging
8 - nginx_production
9 linode_4096:
10 - mysql_dev
11 - mysql_staging
12 - mysql_production

Then we save the file under:



1 /etc/salt/cloud.maps.d/my_infrastructure.maps.conf

To create those servers we type:

1 sudo salt-cloud -m /etc/salt/cloud.maps.d/my_infrastructure.maps.conf

Parallelize the last operation like this:

1 sudo salt-cloud -m /etc/salt/cloud.maps.d/my_infrastructure.maps.conf -P

Debug it like this:

1 sudo salt-cloud -m /etc/salt/cloud.maps.d/my_infrastructure.maps.conf -P -l debug

And send the output to a file with

1 --out-file
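For example, a sketch combining the flags above (the log path is illustrative):

sudo salt-cloud -m /etc/salt/cloud.maps.d/my_infrastructure.maps.conf -P -l debug --out-file /var/log/salt-cloud-run.log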

Manipulating Linode VMs with Salt Cloud


With Salt Cloud you can reboot, destroy and execute other operations on your remote Cloud VMs.
To destroy a VM:

1 salt-cloud -d minion_server

To reboot it:

1 salt-cloud -a reboot minion_server

To stop it:

1 salt-cloud -a stop minion_server

To get details about your created VMs:

1 salt-cloud --query

To clone a VM:

1 salt-cloud -f clone linode-config linode_id=123123 datacenter_id=1 plan_id=1

You can even get the pricing:

1 salt-cloud -f show_pricing linode-config profile=linode-profile

You have probably noticed that sometimes we use -a and sometimes -f. The difference is that:

• -a is for an action, like executing operations


• -f is for a function, like getting information

This is the list of all actions by provider:

1 all providers:
2 - reboot
3 ec2:
4 - start
5 - stop
6 joyent:
7 - stop
8 linode:
9 - start
10 - stop

Functions are more specific to each provider (or driver) and need specific names due to the differences
between cloud providers' APIs, but there are three universal functions:

• list_nodes: To get general information about the instances for a given provider
• list_nodes_full: Same thing as list_nodes but with more information
• list_nodes_select: Returns select information about the instances for the given provider
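For example, to list detailed information about the instances created under the Linode provider configured
earlier (a sketch using the configuration name from this chapter):

salt-cloud -f list_nodes_full linode-config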

The complete list of functions exposed by the Linode driver was already given earlier in this chapter.


More details can be found in the official documentation: https://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.linode.html

Using Salt Cloud With Amazon Web Services (AWS)


Working with ec2 machines is very similar to working with Linode VMs.

AWS Provider Configuration


The first thing we are going to do here is check our key pairs.
If you have multiple key pairs, you are going to use one of them in the Salt Cloud provider
configuration, which is why you should look up your key name.
You can use this link to list all of your key pairs:

1 https://eu-west-1.console.aws.amazon.com/ec2/v2/home?region=eu-west-1#KeyPairs:s\
2 ort=keyName

If you are using a different region, replace eu-west-1 with the region you are considering. In the
next example we are using us-east-1 as the region:

1 https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#KeyPairs:sort=keyName

To check which key name corresponds to the key pair you are using, use this command (after installing
the EC2 command line tools):

1 ec2-fingerprint-key kp.pem

This will help you match your key pair to its fingerprint and lets you check the name of your
key.
In this example our key is called kp and it is located under:

1 /etc/salt/kp.pem

Now go and get your security credentials for accessing your ec2 instances; you can find the access key id here:

1 https://console.aws.amazon.com/iam/home?#security_credential

You cannot retrieve the secret key from there, but normally you would have kept it in a safe place; if you
lost it, you should generate a new access key pair.
You should also know other things like the name of the security group you want to use and your
ssh_username:

• Amazon Linux -> ec2-user


• RHEL -> ec2-user
• CentOS -> ec2-user

• Ubuntu -> ubuntu


• etc ..

Another thing to set up is the ssh_interface that could have two different values:

• private_ips -> The salt-cloud command is run inside the ec2


• public_ips -> The salt-cloud command is run outside of ec2

This is a generic example of a provider configuration using private_ips:

1 ec2-private:
2
3 # Set up the location of the salt master
4 minion:
5 master: saltmaster.myhost.com
6
7 # Set up grains information, which will be common for all nodes using this pro\
8 vider
9 grains:
10 env: test
11
12 # Specify whether to use public or private IP for deploy script.
13 ssh_interface: private_ips
14
15
16 # Set the EC2 access credentials
17 id: 'use-instance-role-credentials'
18 key: 'use-instance-role-credentials'
19
20 # Make sure this key is owned by root with permissions 0400.
21 private_key: /etc/salt/test_key.pem
22 keyname: test_key
23
24 securitygroup: default
25
26 # This is optional but you can set up your default region and availability zon\
27 e (optional)
28 location: eu-west-1
29 availability_zone: eu-west-1c
30
31 # Salt Cloud will use this user name to deploy, it depends on your AMI.

32 # Amazon Linux -> ec2-user


33 # RHEL -> ec2-user
34 # CentOS -> ec2-user
35 # Ubuntu -> ubuntu
36 #
37 ssh_username: ubuntu
38
39 # Optionally add an IAM profile
40 iam_profile: 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfil\
41 e'
42
43 provider: ec2

With public_ips, we will have a configuration similar to this one:

1 ec2-public:
2
3 minion:
4 master: saltmaster.myhost.com
5
6 ssh_interface: public_ips
7
8 id: 'use-instance-role-credentials'
9 key: 'use-instance-role-credentials'
10
11 private_key: /etc/salt/test_key.pem
12 keyname: test_key
13
14 securitygroup: default
15
16 location: eu-west-1
17 availability_zone: eu-west-1c
18
19 ssh_username: ubuntu
20
21 iam_profile: 'my other profile name'
22
23 provider: ec2

Please note two things:

• Previously, the suggested provider for aws ec2 was the aws provider. This has been deprecated
in favor of the ec2 provider.

• The provider parameter in cloud provider definitions was renamed to driver (since version
2015.8.0).
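With 2015.8.0 and later, the provider blocks shown above would therefore use driver instead; a minimal
sketch, keeping everything else unchanged:

ec2-private:
  # ... same options as above ...
  driver: ec2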

AWS Profile Configuration


Let's set up the profile to provide more ec2-specific configuration options.
In the profile configuration we should provide the provider, the image id, the size of the instance
and the ssh_username, which is ubuntu since our image is based on Ubuntu.

1 provider: ec2-private
2 image: ami-a609b6d5
3 size: t2.micro
4 ssh_username: ubuntu

If we want to add a volume (10GB), we can do it like this:

1 volumes:
2 - { size: 10, device: /dev/sdf }

Suppose we want to add two other volumes while choosing the IOPS (I/O operations per second); we
could add a configuration similar to the next one:

1 volumes:
2 - { size: 10, device: /dev/sdf }
3 - { size: 300, device: /dev/sdg, type: io1, iops: 3000 }
4 - { size: 300, device: /dev/sdh, type: io1, iops: 3000 }

Note that to use an EBS optimised ec2 instance we should declare it:

1 ebs_optimized: True

We can also add tags to our new instance; they will be applied to all ec2 instances created using
this profile:

1 tag: {'env': 'test', 'role': 'redis'}

We have the possibility to force grains synchronization by adding:



1 sync_after_install: grains

One thing that I usually automate is setting up my own configuration, like the .vimrc file; you can
automate things like this by adding a script that will be executed:

1 script: /etc/salt/cloud.deploy.d/configure_vim.sh

Network configuration is also accessible using Salt Cloud, here is an example where the primary
IP address is the private address and the ec2 instance will have a public IP (not an elastic IP) with
subnet id and a security group id:

1 network_interfaces:
2 - DeviceIndex: 0
3 PrivateIpAddresses:
4 - Primary: True
5 AssociatePublicIpAddress: True
6 SubnetId: subnet-142f4bdd
7 SecurityGroupId:
8 - sg-750af531

If you prefer the EIP (Elastic IP):

1 allocate_new_eips: True

We want to delete the root volume when we destroy our ec2 instance:

1 del_root_vol_on_destroy: True

When a machine is terminated, we want to delete all non-root EBS volumes of the instance:

1 del_all_vol_on_destroy: True

Now we have a functional ec2 profile:



1 base_ec2_private:
2 provider: ec2-private
3 image: ami-a609b6d5
4 size: t2.micro
5 ssh_username: ubuntu
6
7 volumes:
8 - { size: 10, device: /dev/sdf }
9 - { size: 300, device: /dev/sdg, type: io1, iops: 3000 }
10 - { size: 300, device: /dev/sdh, type: io1, iops: 3000 }
11
12 tag: {'env': 'test', 'role': 'redis'}
13
14 sync_after_install: grains
15
16 script: /etc/salt/cloud.deploy.d/configure_vim.sh
17
18 network_interfaces:
19 - DeviceIndex: 0
20 PrivateIpAddresses:
21 - Primary: True
22 #auto assign public ip (not EIP)
23 AssociatePublicIpAddress: True
24 SubnetId: subnet-813d4bbf
25 SecurityGroupId:
26 - sg-750af531
27
28 del_root_vol_on_destroy: True
29 del_all_vol_on_destroy: True

When we want to create a similar profile to the last one but we would like to change one or two
options, we can use extends like in the following example:

1 base_ec2_private:
2 provider: ec2-private
3 image: ami-a609b6d5
4 size: t2.micro
5 ssh_username: ubuntu
6
7 volumes:
8 - { size: 10, device: /dev/sdf }
9 - { size: 300, device: /dev/sdg, type: io1, iops: 3000 }

10 - { size: 300, device: /dev/sdh, type: io1, iops: 3000 }


11
12 tag: {'env': 'test', 'role': 'redis'}
13
14 sync_after_install: grains
15
16 script: /etc/salt/cloud.deploy.d/configure_vim.sh
17
18 network_interfaces:
19 - DeviceIndex: 0
20 PrivateIpAddresses:
21 - Primary: True
22 AssociatePublicIpAddress: True
23 SubnetId: subnet-813d4bbf
24 SecurityGroupId:
25 - sg-750af531
26
27 del_root_vol_on_destroy: True
28 del_all_vol_on_destroy: True
29
30 base_ec2_public:
31 provider: ec2-private
32 extends: base_ec2_private

The last example was just provided to help you understand the use of extends, it was not
tested.

Using Salt Cloud To Automate AWS EC2 Creation


Starting a private ec2 instance can be done like this:

1 salt-cloud -p base_ec2_private private_minion

and launching a public one can be done using this command:

1 salt-cloud -p base_ec2_public public_minion

Just as we did when we saw how to use Salt Cloud Maps with Linode, nothing is different when using
them with AWS:

1 ec2_private:
2 - redis
3 - mysql
4
5 ec2_public:
6 - web_1
7 - web_2

and then we can start our ec2 instances using:

1 salt-cloud -m /etc/salt/cloud.map.app -P

Salt Cloud allows getting, setting and deleting tags after launching the ec2 instance using the instance
name (or the instance id):

1 salt-cloud -a get_tags ec2_minion


2 salt-cloud -a set_tags ec2_minion tag1=value1 tag2=value2
3 salt-cloud -a del_tags ec2_minion tag1,tag2

It also allows renaming the machine:

1 salt-cloud -a rename ec2_minion newname=ec2_my_minion

To enable termination protection, Salt Cloud can be used like in the following command:

1 salt-cloud -a enable_term_protect ec2_minion

Other options are available:

1 salt-cloud -a show_term_protect ec2_minion


2 salt-cloud -a disable_term_protect ec2_minion

Using Salt Cloud from the command line also allows adding volumes with specific configurations, like
creating a volume from a chosen snapshot.
Creating a simple volume in a specific zone:

1 salt-cloud -f create_volume ec2 zone=eu-west-1c

Adding size:

1 salt-cloud -f create_volume ec2 zone=eu-west-1c size=100

Choosing a snapshot:

1 salt-cloud -f create_volume ec2 zone=eu-west-1c snapshot=snapshot_id

Selecting the type (standard, gp2, io1 ..etc):

1 salt-cloud -f create_volume ec2 size=100 type=standard


2 salt-cloud -f create_volume ec2 size=100 type=gp2
3 salt-cloud -f create_volume ec2 size=200 type=io1 iops=2000

Detaching a volume then deleting it:

1 salt-cloud -a detach_volume ec2_minion volume_id=vol_id


2 salt-cloud -f delete_volume ec2 volume_id=vol_id
SaltStack Cheat Sheet

This is basically a small project that I started on Github37 that contains some basic SaltStack
commands, probably the most used ones, and it is suitable for people who want to start learning
through practical examples quickly. The Github repository is public and everyone can contribute to it.

Installing SaltStack - Ubuntu 14.*


1 wget -O - https://repo.saltstack.com/apt/ubuntu/ubuntu14/latest/SALTSTACK-GPG-KE\
2 Y.pub | sudo apt-key add -
3 echo 'deb http://repo.saltstack.com/apt/ubuntu/ubuntu14/latest trusty main' | su\
4 do tee -a /etc/apt/sources.list
5 sudo apt-get update
6
7 # Master installation
8 apt-get install salt-master
9
10 # Minion installation
11 apt-get install salt-minion
12
13 # Salt ssh installation
14 apt-get install salt-ssh
15
16 # Salt syndic installation
17 apt-get install salt-syndic
18
19 # Salt API installation
20 apt-get install salt-api

Bootstrapping Salt Minion

37
https://github.com/eon01/SaltStackCheatSheet

1 curl -L https://bootstrap.saltstack.com -o install_salt.sh && sudo sh install_sa\


2 lt.sh

Salt Key Management


1 # Listing Salt requests
2 salt-key -L
3
4 # Accepting all requests
5 salt-key -A
6
7 # Accepting a single request (from myNode)
8 salt-key -a myNode
9
10 # Removing the key of the 'myNode' Minion
11 salt-key -d myNode

Debugging
1 # Debugging the master
2 salt-master -l debug
3
4 # Debugging the minion
5 salt-minion -l debug
6
7 # Restarting the master or minion without cache
8 service salt-master stop     # or: service salt-minion stop
9 rm -rf /var/cache/salt
10 service salt-master start    # or: service salt-minion start

SaltStack Documentation

1 # Viewing all the documentation


2 salt '*' sys.doc
3
4 # Viewing a module documentation
5 salt '*' sys.doc module_name
6
7 #Examples:
8 salt '*' sys.doc status
9 salt '*' sys.doc pkg
10 salt '*' sys.doc network
11 salt '*' sys.doc system
12 salt '*' sys.doc cloud
13
14 # Viewing a function documentation
15 salt '*' sys.doc module_name function_name
16
17 # Examples:
18 salt '*' sys.doc auth django
19 salt '*' sys.doc sdb sqlite3

SaltStack Modules And Functions


1 salt '*' sys.list_modules
2 salt '*' sys.list_functions

Compound Matchers

Letter  Match Type          Example                                             Alt Delimiter?
G       Grains glob         G@os:Ubuntu                                         Yes
E       PCRE Minion ID      E@web\d+.(dev|qa|prod).loc                          No
P       Grains PCRE         P@os:(RedHat|Fedora|CentOS)                         Yes
L       List of minions     [email protected],minion3.domain.com or bl*.domain.com   No
I       Pillar glob         I@pdata:foobar                                      Yes
J       Pillar PCRE         J@pdata:^(foo|bar)$                                 Yes
S       Subnet/IP address   [email protected]/24 or [email protected]                   No
R       Range cluster       R@%foo.bar                                          No

Other examples:

1 # Examples taken from: https://docs.saltstack.com/en/latest/topics/targeting/com\


2 pound.html
3
4 # Joining
5 salt -C 'webserv* and G@os:Debian or E@web-dc1-srv.*' test.ping
6 salt -C '( ms-1 or G@id:ms-3 ) and G@id:ms-3' test.ping
7
8 # Excluding
9 salt -C 'not web-dc1-srv' test.ping
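Another example, combining the pillar and subnet matchers from the table above (the values are illustrative):

salt -C 'I@pdata:foobar and [email protected]/24' test.ping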

Upgrades & Versions


1 #
2 # Listing upgrades
3 salt '*' pkg.list_upgrades
4
5 # Upgrading
6 salt '*' pkg.upgrade
7
8 # List the packages currently installed as a dict
9 salt '*' pkg.list_pkgs versions_as_list=True
10
11 # Refresh the pkgutil repo database
12 salt '*' pkgutil.refresh_db
13
14 # Check the version of a package
15 salt '*' pkgutil.version mongodb

Packages Manipulation
1 # Installation
2 salt '*' pkg.install apache2
3
4 # Latest version installation
5 salt '*' pkgutil.latest_version mysql-common
6
7 # Removing package(s)
8 salt '*' pkg.remove vim
9
10 # Purging package(s)
11 salt '*' pkg.purge apache2 mysql-server

Reboot & Uptime


1 # Reboot
2 salt '*' system.reboot
3
4 #Uptime
5 salt '*' status.uptime

Using Grains
1 # Syncing grains
2 salt '*' saltutil.sync_grains
3
4 # Available grains can be listed by using the ‘grains.ls’ module:
5 salt '*' grains.ls
6
7 # Grains data can be listed by using the ‘grains.items’ module:
8 salt '*' grains.items
9
10 # Grains have values that could be called via ‘grains.get <grain_name>’ (path is\
11 the name of a grain)
12 salt '*' grains.get path

Syncing Data
1 # Syncing grains
2 salt '*' saltutil.sync_grains
3
4 # Syncing everything from grains to modules, outputters, renderers, returners, s\
5 tates and utils.
6 salt '*' saltutil.sync_all

Running System Commands


1 salt "*" cmd.run "ls -lrth /data"
2 salt "*" cmd.run "df -kh /data"
3 salt "*" cmd.run "du -sh /data"

Working With Services



1 # Apache example
2
3 # Checking if service is available
4 salt '*' service.available apache2
5
6 # Manipulating Apache2 service
7 salt '*' service.status apache2
8 salt '*' service.start apache2
9 salt '*' service.restart apache2
10 salt '*' service.stop apache2

Network Management
1 # Get IP of your minion
2 salt '*' network.ip_addrs
3
4 # Ping a host from your minion
5 salt '*' network.ping localhost
6
7 # Traceroute a host from your minion
8 salt '*' network.traceroute localhost
9
10 # Get hostname
11 salt '*' network.get_hostname
12
13 # Modify hostname to 'myNode'
14 salt '*' network.mod_hostname myNode
15
16 # Information on all of the running TCP connections
17 salt '*' network.active_tcp
18
19 # Return the arp table from the minion
20 salt '*' network.arp
21
22 # Test connectivity
23 salt '*' network.connect google-public-dns-a.google.com port=53 proto=udp timeou\
24 t=3
25
26 # Get default route
27 salt '*' network.default_route
28
29 # Execute dig

30 salt '*' network.dig eon01.com


31
32 # Get the MAC address of the eth0 interface
33 salt '*' network.hw_addr eth0
34
35 # Get the inet address of the eth1 interface
36 salt '*' network.interface eth1
37
38 # Get the IP address of the tun interface
39 salt '*' network.interface_ip tun

Working With HTTP Requests


1 # Get the html source code of a page
2 salt-run http.query http://eon01.com text=true
3
4 # Get the header of a page
5 salt-run http.query http://eon01.com headers=true
6
7 # Get the response code from a web server
8 salt-run http.query http://eon01.com status=true
9
10 # Sending a post request
11 salt '*' http.query http://domain.com/ method=POST params='key1=val1&key2=val2'
12
13 #

Job Management
1 # List active jobs
2 salt-run jobs.active
3
4 # List all jobs with the id and other information
5 salt-run jobs.list_jobs
6
7 # List multiple information about the job with the id:20151101225221651308 like \
8 the result output
9 salt-run jobs.lookup_jid 20151101225221651308
10
11 # Kill the job with the id:20151101225221651308
12 salt 'server' saltutil.kill_job 20151101225221651308

Scheduling Feature
1 # Schedule a job called "scheduled_job"
2 salt '*' schedule.add scheduled_job function='cmd.run' job_args="['']" seconds=10
3
4 # Enable the job
5 salt '*' schedule.enable_job scheduled_job
6
7 # Disable the job
8 salt '*' schedule.disable_job scheduled_job

Working With SLS


1 salt '*' state.show_sls <sls_name>

Testing States
1 salt '*' state.highstate test=True
2 salt '*' state.sls <sls_name> test=True
3 salt '*' state.single <fun> name=<name> test=True

Load testing
# Starting 20 minions
wget https://raw.githubusercontent.com/saltstack/salt/develop/tests/minionswarm.py
python minionswarm.py -m 20 --master salt-master

State Declaration Structure



# Source: https://docs.saltstack.com/en/latest/ref/states/highstate.html#state-declaration

# Standard declaration
<ID Declaration>:
  <State Module>:
    - <Function>
    - <Function Arg>
    - <Function Arg>
    - <Function Arg>
    - <Name>: <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
      - <Requisite Reference>


# Inline function and names
<ID Declaration>:
  <State Module>.<Function>:
    - <Function Arg>
    - <Function Arg>
    - <Function Arg>
    - <Names>:
      - <name>
      - <name>
      - <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
      - <Requisite Reference>


# Multiple states for single id
<ID Declaration>:
  <State Module>:
    - <Function>
    - <Function Arg>
    - <Name>: <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
  <State Module>:
    - <Function>
    - <Function Arg>
    - <Names>:
      - <name>
      - <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
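To make the abstract structure above more concrete, here is a small, hypothetical state that uses the inline-function form (the package and service names are only examples):

# Hypothetical example: /srv/salt/apache/init.sls
apache2:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: apache2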

SaltStack Github Repositories


• Django with SaltStack https://github.com/wunki/django-salted
• Salt GUI pad https://github.com/tinyclues/saltpad
• Openstack automation with SaltStack https://github.com/CSSCorp/openstack-automation
• A curated collection of working salt states and configurations for use in your saltstack setup.
https://github.com/saltops/saltmine
• All of the configuration files needed to build a WordPress development environment
with Vagrant, VirtualBox and SaltStack https://github.com/paulehr/saltstack-wordpress
• Java bindings for the SaltStack API https://github.com/SUSE/saltstack-netapi-client-java
• Vim snippets for SaltStack states files https://github.com/StephenPCG/vim-snippets-salt
• Metrics for SaltStack https://github.com/pengyao/salt-metrics
• Salt GUI https://github.com/saltstack/halite
Best Practices

Introduction
Recently I saw a question on Reddit38 that caught my attention.
If you are a beginner or an intermediate user, any task can become exhausting if you do not follow some basic guidelines and respect best practices.
I will share and comment on some of those Reddit comments and other best practices, and I will also give some general recommendations for you as a new user of Salt.

Structuring Your Pillar


Structure your pillar carefully, or it will become a mess ∼ acid_sphinx4

Pillars are used to store data, especially sensitive data that you do not want to share with all minions. They are stored on the master and shared only with the targeted minions.
Pillars are called using the top.sls file.

1 tree /srv/pillar/
2 /srv/pillar/
3 ├── sched
4 │ └── init.sls
5 └── top.sls

The latter targets all or some minions:

1 base:
2 'os:Ubuntu':
3 - nginx

Choosing meaningful names for pillar files, like nginx here, and using each file to store only the related data is a good habit to have when working with pillars. Keeping the configuration files as simple as possible will help other users maintain and improve them.
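As an illustration, a dedicated Nginx pillar file could look like the following (the values are hypothetical):

# /srv/pillar/nginx/init.sls
nginx:
  worker_processes: 4
  worker_connections: 1024

A state or a template can then read these values with something like {{ salt['pillar.get']('nginx:worker_processes', 2) }}, which keeps the data itself out of your states.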
38
https://www.reddit.com/r/saltstack/comments/3maxi3/what_did_you_guys_wish_you_knew_before_setting_up/

Organizing Grains
Salt grains are stored by default in the file:

/etc/salt/grains

There is another alternative:

/etc/salt/minion

Hint: Search for #grains in your default minion file


Now imagine that you have hundreds of variables spread across many files. In this case, I usually create a folder that I call config under /etc/salt/:

/etc/salt/config/app1/roles.conf

/etc/salt/config/app2/roles.conf

/etc/salt/config/app3/roles.conf

and then I include each one of them in the minion file:

include:
  - /etc/salt/config/app1/roles.conf
  - /etc/salt/config/app2/roles.conf
  - /etc/salt/config/app3/roles.conf

Then I add my grains to those files; here is how you can do it:

grains:
  variable1: value1
  variable2: value2

Don't forget to type grains: at the beginning of the file; otherwise you will get errors during the execution of state.sls or state.highstate.

Note that the default_include in SaltStack is minion.d/*.conf
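Once the minion configuration is reloaded (restarting the minion is the simplest way), you can check that your custom grains are visible; the variable name below is the hypothetical one used above:

# List all grains and their values on the targeted minions
salt '*' grains.items

# Get a single custom grain
salt '*' grains.get variable1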



Your Own Execution Modules vs Working With Jinja


It is easier, and more maintainable to write your own python execution modules than
it is to do complex logic in jinja ∼ FakingItEveryDay

Salt modules are Python files, and writing your own modules is not as difficult as it seems. Python is an easy-to-learn language. It is true that Jinja files sometimes become complex, but it is up to you to compare and decide what works better for your configuration management.
You could choose to keep the Jinja configuration or write your own execution modules based on several criteria:

• Readability
• Complexity
• Execution time
• Maintainability
• Modularity

I would not recommend one of them over the other unless the choice can be tied to a specific context and made according to criteria like the ones listed above. This could be controversial: on one side, Jinja was supposed to make things easier, while in some cases Python could be faster to code. In both cases, keep things as simple as possible.
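To give an idea of the effort involved, here is a minimal sketch of a custom execution module; the module name, function name and threshold are hypothetical, not part of Salt. It would live under the _modules directory of your file_roots (for example /srv/salt/_modules/) and be pushed to the minions with saltutil.sync_modules:

# /srv/salt/_modules/mytools.py (hypothetical custom module)

def disk_alert(path='/', threshold=90):
    '''
    Return True if the disk usage of the given path exceeds the threshold (in percent).

    CLI Example:
        salt '*' mytools.disk_alert /data threshold=80
    '''
    # __salt__ is injected by Salt and gives access to the other execution modules
    usage = __salt__['status.diskusage'](path)
    info = usage.get(path, {})
    total = info.get('total') or 0
    available = info.get('available') or 0
    if not total:
        return False
    used_percent = 100 * (total - available) / float(total)
    return used_percent >= threshold

After a salt '*' saltutil.sync_modules, the function can be called like any built-in module. Keeping this kind of logic in Python rather than in Jinja usually pays off as soon as the conditions start to nest.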

Using The Documentation


I say all the following and it sounds like I’m just a bitchy jerk face but SaltStack is legit
black magic that makes my life so much better. Once you get it working.

The documentation is wrong all the time. It is also very often incomplete. Be prepared to read the source code. If you don't know Python yet, you need to start learning. ∼ bbbryson

In my humble opinion, SaltStack documentation is not disastrous at this point, but it is not always sufficient, which means that you will sometimes need to search for other content like blog posts and books. This is normal, and I think most of us are used to working like this: looking at the official documentation first, then moving to practical examples posted in community blogs.
You do not even need to go to the official website to search for documentation; like Linux man pages, you can use it directly from your terminal:

1 sudo salt '*' sys.doc

This will output all the documentation. However, if you need pkg module documentation, type the
following command:

1 salt '*' sys.doc pkg

and you will see just the documentation to help you use pkg with Salt.
Try also:

salt '*' sys.doc network
salt '*' sys.doc system
salt '*' sys.doc status

This is a part of the output of status documentation:

1 'status.all_status:'
2
3 Return a composite of all status data and info for this minion.
4 Warning: There is a LOT here!
5
6 CLI Example:
7
8 salt '*' status.all_status
9
10
11 'status.cpuinfo:'
12
13 Return the CPU info for this minion
14
15 CLI Example:
16
17 salt '*' status.cpuinfo
18
19
20 'status.cpustats:'
21
22 Return the CPU stats for this minion
23
24 CLI Example:
25

26 salt '*' status.cpustats


27
28
29 'status.custom:'
30
31 Return a custom composite of status data and info for this minion,
32 based on the minion config file. An example config like might be::
33
34 status.cpustats.custom: [ 'cpu', 'ctxt', 'btime', 'processes' ]
35
36 Where status refers to status.py, cpustats is the function
37 where we get our data, and custom is this function It is followed
38 by a list of keys that we want returned.
39
40 This function is meant to replace all_status(), which returns
41 anything and everything, which we probably don't want.
42
43 By default, nothing is returned. Warning: Depending on what you
44 include, there can be a LOT here!
45
46 CLI Example:
47
48 salt '*' status.custom
49
50
51 'status.diskstats:'
52
53 Return the disk stats for this minion
54
55 CLI Example:
56
57 salt '*' status.diskstats
58
59
60 'status.diskusage:'
61
62 Return the disk usage for this minion
63
64 Usage::
65
66 salt '*' status.diskusage [paths and/or filesystem types]
67

68 CLI Example:
69
70 salt '*' status.diskusage # usage for all filesystems
71 salt '*' status.diskusage / /tmp # usage for / and /tmp
72 salt '*' status.diskusage ext? # usage for ext[234] filesystems
73 salt '*' status.diskusage / ext? # usage for / and all ext filesystems

As you can see, the documentation is quite simple and comes with ready-to-use examples, like the disk usage for / and /tmp:

1 salt '*' status.diskusage / /tmp

or CPU statistics:

1 salt '*' status.cpustats

As a beginner, following and mastering the official examples is, in my opinion, quite enough to practice Salt before tackling more complicated setups. So no worries about the documentation; on the contrary, I recommend following and understanding the official documentation before moving to something else. The official examples follow the best practices, so do not hesitate to copy and modify them for your needs.

Following Guidelines
I would like to see at least a recommended guideline from somebody. The ability to have a standard (and maybe some namespacing on formulas) would be much appreciated.

I totally agree with this; some guidelines and namespacing rules are mentioned in the official documentation. Without standards, everyone could have their own namespace conventions, and sharing code would be less efficient.
Here is the SLS file namespace that the official documentation describes:
The namespace for SLS files when referenced in top.sls or an include declaration follows a few simple
rules:

1. The .sls is discarded (i.e. webserver.sls becomes webserver).
2. Subdirectories can be used for better organization.
   a. Each subdirectory can be represented with a dot (following the python import model) or a slash. webserver/dev.sls can also be referred to as webserver.dev
   b. Because slashes can be represented as dots, SLS files can not contain dots in the name besides the dot for the SLS suffix. The SLS file webserver_1.0.sls can not be matched, and webserver_1.0 would match the directory/file webserver_1/0.sls
3. A file called init.sls in a subdirectory is referred to by the path of the directory. So, webserver/init.sls is referred to as webserver.
4. If both webserver.sls and webserver/init.sls happen to exist, webserver/init.sls will be ignored and webserver.sls will be the file referred to as webserver.

When working on a SaltStack Formula, you need to pay even more attention to namespacing, since formulas are shared and maintained collaboratively (usually on Github). Some efforts are being made in this direction.
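As a quick illustration of these rules, here is a small, hypothetical file_roots layout and the name you would use to reference each file from top.sls or an include declaration:

/srv/salt/
├── top.sls
├── webserver.sls        # referenced as: webserver
├── webserver
│   └── dev.sls          # referenced as: webserver.dev
└── database
    └── init.sls         # referenced as: database

Note that, per rule 4, if both webserver.sls and webserver/init.sls existed, only webserver.sls would be picked up as webserver.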

Contributors on Github

SaltStack has also published an official and complete list of conventions39 to be used when creating a formula. I recommend focusing on these conventions when writing your first formula.
39
https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html

Automate Your Automation


With the growing needs of IT systems, automation is becoming one of the core practices of system administrators. SaltStack is one of the tools that offers a great list of automation possibilities, but you can still automate the automation: you can call Salt commands from Python, Shell or Bash scripts.
Since most system administrators, DevOps engineers and developers already use scripts to automate builds, deployments and other operations, reworking those scripts to add SaltStack could be a brilliant idea. If you are migrating your infrastructure management to SaltStack, do it piece by piece; using the Python client API could help you. Using tools like Jenkins, Fabric, etc. is great; combining them with Salt is just perfect.
A daily SysAdmin routine, like checking the status of a service, could (and really should) be automated.
As an example, checking if the MongoDB daemon (mongod) is up can be done using the command below:

1 salt 'myNode' service.status mongod

The execution of the last command will output the process id of every running Mongo instance.
In this example, I am running a Docker machine that starts a Mongo instance:

$ ps -aux|grep docker

root   4060  0.0  0.3  401224 23756 ?  Ssl  15:06  0:01  /usr/bin/docker daemon
root   4218  0.0  0.2  210368 17920 ?  Sl   15:06  0:00  docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 28017 -container-ip 172.17.0.1 -container-port 28017
root   4225  0.0  0.2  144832 18284 ?  Sl   15:06  0:00  docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 27017 -container-ip 172.17.0.1 -container-port 27017

Let’s take a look the output of service.status command:

1 #Command
2 salt 'myNode' service.status mongod
3 #Output
4 myNode:
5 4239

If I wanted to run this from a Python script using the SaltStack API, I would write something similar to this:

1 #!/usr/bin/env python
2
3 # Import the Salt client library
4 import salt.client
5
6 # Create a local client object
7 client = salt.client.LocalClient()
8
9 # Make calls with the cmd method
10 ret = client.cmd('myNode', 'service.status', ['mongod'])
11
12 # Print the pid of the 'Mongo' database instance
13 print ret

Executing the last script will print the following output:

1 {'myNode': '4239'}

If you are familiar with Python, you will notice that the last output is a dict (a dict, or dictionary, is a mapping object that maps hashable values to arbitrary objects). You could also notice that we can directly access 4239 by changing:

print ret

in the last code to:

print ret['myNode']

Here is a more elegant way to check the same thing:

1 #!/usr/bin/env python
2
3 # Import the Salt client library
4 import salt.client
5
6 # Create a local client object
7 client = salt.client.LocalClient()
8
9 # make calls with the cmd method
10 ret = client.cmd('myNode', 'service.status', ['mongod'])
11

12 if ret:
13 exit(0)
14
15 exit(1)

The last script could be interfaced with Jenkins or another continuous integration server since the
build will automatically fail when the exit value is not 0.
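A Jenkins shell build step based on the script above could then be as simple as the following; the script name is hypothetical:

#!/bin/bash
# Fail the build if mongod is not running on myNode
python check_mongod.py || { echo "mongod is down on myNode"; exit 1; }
echo "mongod is up, continuing with the build"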

Start Your Cheat Sheet


Getting started with SaltStack is not that hard, but working on complex environments and live production sites will make your work relatively more complicated. Start your own SaltStack cheat sheet and take notes every time you learn a new thing.
I started a Github repository40 with some useful commands; it could help you start your own cheat sheet, so just fork and contribute.

The Usual Lecture From The Local System Administrator

The famous sudo warning is the first best practice you should learn as a SysAdmin or a DevOps engineer.

We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things: #1) Respect the privacy of others. #2) Think before you type. #3) With great power comes great responsibility.

When working with automation and remote execution tools, operations become easier than before, but also riskier: say you would like to remove /data from a specific server (myNode), but you forget to target it and put '*' instead of 'myNode'.

1 salt '*' cmd.run 'rm /data'

This is irreversible: you have removed the /data directory from all your servers. When working with states, remember to test before making changes. SaltStack has a test interface that will simulate the execution of a state and report what the result would be:

40
https://github.com/eon01/SaltStackCheatSheet

salt '*' state.highstate test=True
salt '*' state.sls <sls-name> test=True
salt '*' state.single <state-module>.<function> name=<name> test=True

Updates, Upgrades And Backups


On an open Github issue41, a user who had 4 environments ("base", "qa", "dev", "master") defined in file_roots discovered that the documentation, which stated that

If an environment has info in top.sls in base, it will always override all other top.sls.

was not describing the real behavior: when checking the code, he found that there is no distinction between base and any other name, and no explicit decision on ordering, which seems to be alphabetical.

Even though in the top.sls and file_roots, "qa" is second down, because it is last alphabetically, that overrides all other top.sls files in all other environments.

The documentation was clear, but the code did something else: merge and overwrite in alphabetical order.
The issue was opened on May 2, 2014 and it has since been fixed. Even though this particular issue had no effect on my servers, problems like this can happen. If there is any recommendation to give here, it is really the general advice that every SysAdmin should already know:

• Make sure to have the latest stable updates and upgrades
• Backup, backup, backup and test your backups
• Document everything
• Have a plan B for everything
• etc.

Following Official Guidelines


You can find on this page42 a list of recommendations and best practices to follow when working with SaltStack. I strongly recommend following this guide since it is edited collaboratively on Github43 and maintained officially by SaltStack Inc.
The official guide includes general rules and recommendations about:
41
https://github.com/saltstack/salt/issues/12483
42
https://docs.saltstack.com/en/latest/topics/best_practices.html
43
https://github.com/saltstack/salt/blob/develop/doc/topics/best_practices.rst

• Structuring States and Formulas
• Structuring Pillar Files
• Variable Flexibility
• Modularity Within States
• Storing Secure Data
Community And Support

SaltStack Source Code On Github


SaltStack is open source software provided under the Apache 2.0 license.
Like most open source and free software, Salt has the advantages of auditability, cost, freedom and flexibility, as well as an active community of contributors.
One of the other advantages of using open source software in general, and Salt specifically, is the freedom to access and customize its source code. The Salt source code is available on Github44.
Currently, the project has more than 2k forks and 1235 contributors. It has been starred by more than 5k Github users.

SaltStack contributors on Github

SaltStack documentation is becoming increasingly rich, and the number of blogs and tutorials about it is growing. If you started using SaltStack one or two years ago, you have probably noticed the significant increase in online resources.
If you need to ask questions or need community support, you can use Github, the Salt IRC channel, the official Google group and StackExchange (generally StackOverflow).
If you are looking for news, updates and talks, you can subscribe to the official Salt blog and its YouTube channel.
44
https://github.com/saltstack/salt

Official Resources
This is the full list of the official online resources:

• Github45 : The official Github git repository where you can find the source code, follow the issues and of course contribute.
• Documentation46 : The official SaltStack documentation. This may be the most helpful online resource for SaltStack users.
• IRC channel47 : The official IRC channel where you can find volunteers that may have answers to your questions.
• IRC logs48 : Where you can find all of the IRC discussion history.
• Google discussion group49 : The official discussion group and mailing list. More than 5K topics have been opened so far. You may get good quality support here, as many SaltStack users use this resource.
• Youtube channel50 : The official Youtube channel where you can find news, talks and some tutorials.
• SaltStack blog51 : The official blog.
• SaltConf52 : A website dedicated to the Salt conference (talks, trainings, certifications).
• Linkedin group53 : The official Linkedin discussion group where you can share links, ask questions and find good quality tutorials and blog posts.
• SaltStack events54 : A dedicated page for official events and events organized by partners.

Community Resources
You can visit the community page55 where you can find the resources listed above and maybe some other useful links.
The SaltStack Reddit56 is also a very good place where you can find useful links, ask your questions and contribute to the community. This subreddit has more than 1k users and it is getting more and more popular every day.
45
https://github.com/saltstack/salt
46
http://docs.saltstack.com
47
http://webchat.freenode.net/?channels=salt
48
http://irclog.perlgeek.de/salt/
49
https://groups.google.com/forum/#!forum/salt-users
50
http://www.youtube.com/saltstack
51
http://www.saltstack.com/salt-blog
52
http://saltconf.com/
53
https://www.linkedin.com/groups/4877160
54
http://saltstack.com/events/
55
http://saltstack.com/community/
56
https://www.reddit.com/r/SaltStack

Meet-ups Around SaltStack


You can also join one of these community discussion and meet-up groups around SaltStack:

• Amsterdam57
• Atlanta58
• Austin59
• Birmingham, AL60
• Bucharest61
• Charlotte62
• Chicago63
• China64
• Dallas65
• Denver66
• Durham, NC67
• Germany68
• The SaltStack LinkedIn group69
• London70
• Los Angeles71
• Michigan72
• Minneapolis73
• Montreal74
• Nashville75
• New York City76
57
http://www.meetup.com/Amsterdam-Salt-Stack/
58
http://www.meetup.com/Atlanta-SaltStack-Meetup/
59
http://www.meetup.com/Austin-Saltstack-User-Group/
60
http://www.meetup.com/Birmingham-SaltStack-Users-Group/
61
http://www.meetup.com/Bucharest-SaltStack-Meetup/
62
http://www.meetup.com/Charlotte-SaltStack-Meetup/
63
http://www.meetup.com/SaltStack-Chicago/
64
http://saltstack.cn/
65
http://www.meetup.com/SaltStack-Dallas-Meetup/
66
http://www.meetup.com/Denver-SaltStack-Users-Group
67
http://www.meetup.com/TriSalt/
68
mailto:[email protected]
69
https://www.linkedin.com/groups?home=&gid=4877160
70
http://www.meetup.com/SaltStack-user-group-London
71
http://www.meetup.com/SaltStack-Los-Angeles-Meetup/
72
http://www.meetup.com/AWS-Michigan/
73
http://www.meetup.com/SaltStack-Minneapolis-Meetup/
74
http://www.meetup.com/Montreal-SaltStack-Meetup/
75
http://www.meetup.com/Nashville-SaltStack-Users-Group/
76
http://www.meetup.com/SaltStack-NYC/

• Paris77
• Portland78
• Salt Lake City79
• San Francisco80
• Sao Paulo81
• Seattle82
• Silicon Valley83
• Stockholm84
• Sydney85
• Washington DC86
• Zurich87

SaltStack Inc.
SaltStack Inc. is based at 3400 N. Ashton Blvd, Suite 110, Lehi, UT 84043 and can be contacted by phone at +1 801.207.7440 or by email at [email protected].

SaltStack Inc. on Google Maps

SaltStack also offers paid consulting and support services to SaltStack Enterprise customers, as well as training sessions, on-site trainings, quick-start workshops and certifications (SaltStack Certified Engineer).
77
http://www.meetup.com/Paris-Salt-Meetup/events/221368010/
78
http://www.meetup.com/Portland-SaltStack-Users-Group/
79
http://www.meetup.com/SaltStack-user-group-Salt-Lake-City/
80
http://www.meetup.com/Salt-Stack-DevOps/
81
http://www.meetup.com/saltstackbrasil/
82
http://www.meetup.com/Seattle-SaltStack-Meetup/
83
http://www.meetup.com/SaltStack-user-group-Silicon-Valley/
84
http://www.meetup.com/SaltStack-Stockholm/
85
http://www.meetup.com/Sydney-SaltStack-User-Group/
86
http://www.meetup.com/NoVA-Saltstack/
87
http://www.meetup.com/Zurich-SaltStack/
Afterword

DevOps is a culture that aims to improve the process of development, integration and deployment with more collaboration and transparency between the Dev and the Ops teams.
SaltStack is a comprehensive tool that also allows you to simplify your infrastructure manipulation and master the flow between multiple environments (development, integration, staging and production).
In these environments, we use other software such as Vagrant, Docker, Jenkins, Rundeck, etc. SaltStack allows you to interface with many of these tools and to be present throughout your "DevOps toolchain".
I hope that SaltStack For DevOps has brought you what you were looking for.

Also by Aymen El Amri: The Jumpstart Up (https://leanpub.com/the-jumpstart-up)


Credits

• "DevOps as the intersection of development (software engineering), technology operations and quality assurance (QA)" is a Creative Commons derivative work88. This file was derived from Devops.png by Rajiv.Pant and is licensed under CC BY 3.0.
• "Salt Crystals" (cover) is a Creative Commons work89 by Mschel and is licensed under CC BY 3.0.
• "Comparison of open-source configuration management software" is a Creative Commons work90 licensed under CC BY 3.0.

88
https://en.wikipedia.org/wiki/DevOps#/media/File:Devops.svg
89
https://commons.wikimedia.org/wiki/File:Salt_Crystals.JPG
90
https://en.wikipedia.org/wiki/Comparison_of_open-source_configuration_management_software
