Provided by: salt-doc_2018.3.4~git20180207+dfsg1-1_all

NAME

       salt - Salt Documentation

INTRODUCTION TO SALT

       We’re not just talking about NaCl.

   The 30 second summary
       Salt is:

       · a configuration management system, capable of maintaining remote nodes in defined states
         (for example, ensuring that specific packages are installed and  specific  services  are
         running)

       · a  distributed remote execution system used to execute commands and query data on remote
         nodes, either individually or by arbitrary selection criteria

       It was developed in order to bring the best solutions found in the  world  of  remote
       execution  together  and  make  them better, faster, and more malleable. Salt accomplishes
       this through its ability to handle large loads of information, managing not just dozens
       but hundreds and even thousands of individual servers quickly through a simple and
       manageable interface.

   Simplicity
       Providing versatility between massive scale  deployments  and  smaller  systems  may  seem
       daunting,  but  Salt  is very simple to set up and maintain, regardless of the size of the
       project. The architecture of Salt is designed to work with any number of servers,  from  a
       handful  of  local  network  systems  to  international  deployments across different data
       centers. The topology is a simple server/client model with the needed functionality  built
       into  a single set of daemons. While the default configuration will work with little to no
       modification, Salt can be fine tuned to meet specific needs.

   Parallel execution
       The core functions of Salt:

       · enable commands to remote systems to be called in parallel rather than serially

       · use a secure and encrypted protocol

       · use the smallest and fastest network payloads possible

       · provide a simple programming interface

       Salt also introduces more granular controls to the realm  of  remote  execution,  allowing
       systems to be targeted not just by hostname, but also by system properties.
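       For example, targeting can use grains or compound matchers rather than hostnames.  The
       commands below are illustrative sketches; they require a running master with connected
       minions, and the grain values shown are assumptions about your fleet:

```shell
# Target all minions whose 'os' grain is Ubuntu.
salt -G 'os:Ubuntu' test.ping

# Compound match: hostnames beginning with web* that also report a Debian 'os' grain.
salt -C 'web* and G@os:Debian' test.ping
```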

   Builds on proven technology
       Salt  takes  advantage of a number of technologies and techniques. The networking layer is
       built with the excellent ZeroMQ networking library, so the Salt daemon includes  a  viable
       and  transparent  AMQ  broker.  Salt  uses  public keys for authentication with the master
       daemon, then uses faster AES encryption  for  payload  communication;  authentication  and
       encryption  are  integral  to  Salt.   Salt  takes advantage of communication via msgpack,
       enabling fast and light network traffic.

   Python client interface
       In order to allow for simple expansion, Salt execution routines can be  written  as  plain
       Python  modules.  The  data  collected from Salt executions can be sent back to the master
       server, or to any arbitrary program. Salt can be called from a simple Python API, or  from
       the  command line, so that Salt can be used to execute one-off commands as well as operate
       as an integral part of a larger application.

   Fast, flexible, scalable
       The result is a system that can execute commands at high speed  on  target  server  groups
       ranging  from  one  to  very  many  servers.  Salt is very fast, easy to set up, amazingly
       malleable and provides a single remote execution architecture that can manage the  diverse
       requirements  of  any number of servers.  The Salt infrastructure brings together the best
       of the remote execution world, amplifies its capabilities and expands its range, resulting
       in a system that is as versatile as it is practical, suitable for any network.

   Open
       Salt  is  developed under the Apache 2.0 license, and can be used for open and proprietary
       projects. Please submit your expansions back to the  Salt  project  so  that  we  can  all
       benefit together as Salt grows.  Please feel free to sprinkle Salt around your systems and
       let the deliciousness come forth.

   Salt Community
       Join the Salt!

       There are many ways to participate in and communicate with the Salt community.

       Salt has an active IRC channel and a mailing list.

   Mailing List
       Join the salt-users mailing list. It is the best place to ask questions about Salt and see
       what’s going on with Salt development! The Salt mailing list is hosted by Google Groups. It
       is open to new members.

   IRC
       The #salt IRC channel is hosted on the popular Freenode network. You can use the  Freenode
       webchat client right from your browser.

       Logs of the IRC channel activity are being collected courtesy of Moritz Lenz.

       If you wish to discuss the development of Salt itself join us in #salt-devel.

   Follow on Github
       The  Salt  code  is  developed  via  Github.  Follow  Salt for constant updates on what is
       happening in Salt development:

       https://github.com/saltstack/salt

   Blogs
       SaltStack Inc. keeps a blog with recent news and advancements:

       http://www.saltstack.com/blog/

   Example Salt States
       The official salt-states repository is: https://github.com/saltstack/salt-states

       A few examples of salt states from the community:

       · https://github.com/blast-hardcheese/blast-salt-states

       · https://github.com/kevingranade/kevingranade-salt-state

       · https://github.com/uggedal/states

       · https://github.com/mattmcclean/salt-openstack/tree/master/salt

       · https://github.com/rentalita/ubuntu-setup/

       · https://github.com/brutasse/states

       · https://github.com/bclermont/states

       · https://github.com/pcrews/salt-data

   Follow on ohloh
       https://www.ohloh.net/p/salt

   Other community links
       · Salt Stack Inc.

       · Subreddit

       · Google+

       · YouTube

       · Facebook

       · Twitter

       · Wikipedia page

   Hack the Source
       If you want to get involved with the development  of  source  code  or  the  documentation
       efforts, please review the contributing documentation!

INSTALLATION

       This section contains instructions to install Salt. If you are setting up your environment
       for the first time, you should install a Salt master on a dedicated management  server  or
       VM,  and then install a Salt minion on each system that you want to manage using Salt. For
       now you don’t need to worry about your architecture, you can  easily  add  components  and
       modify your configuration later without needing to reinstall anything.

       The general installation process is as follows:

       1. Install  a  Salt master using the instructions for your platform or by running the Salt
          bootstrap script. If you use the bootstrap script, be sure to include the -M option  to
          install the Salt master.

       2. Make sure that your Salt minions can find the Salt master.

       3. Install the Salt minion on each system that you want to manage.

       4. Accept the Salt minion keys after the Salt minion connects.
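       Step 2 usually means pointing each minion at the master in its configuration file.  A
       minimal fragment (the address is a placeholder for your own master):

```
# /etc/salt/minion
master: salt.example.com
```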

       After  this,  you  should  be  able  to  run a simple command and receive returns from all
       connected Salt minions.

          salt '*' test.ping

   Quick Install
       On most distributions, you can set up a Salt Minion with the Salt bootstrap.
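       A typical bootstrap run can be sketched as follows (inspect any downloaded script before
       running it with a shell; as noted above, the -M flag additionally installs the master):

```shell
curl -L https://bootstrap.saltstack.com -o bootstrap-salt.sh
sudo sh bootstrap-salt.sh         # install a minion
sudo sh bootstrap-salt.sh -M      # install a minion and a master
```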

   Platform-specific Installation Instructions
       These guides describe in detail how to install Salt on a given platform.

   Arch Linux
   Installation
       Salt (stable) is currently available via the Arch Linux Official repositories.  There  are
       currently -git packages available in the Arch User Repository (AUR) as well.

   Stable Release
       Install Salt stable releases from the Arch Linux Official repositories as follows:

          pacman -S salt

   Tracking develop
       To install the bleeding-edge version of Salt (which may include bugs!), use the -git
       package.  Install the -git package as follows:

          wget https://aur.archlinux.org/packages/sa/salt-git/salt-git.tar.gz
          tar xf salt-git.tar.gz
          cd salt-git/
          makepkg -is

       NOTE:
          yaourt

          If a tool such as  Yaourt  is  used,  the  dependencies  will  be  gathered  and  built
          automatically.

          The command to install salt using the yaourt tool is:

              yaourt salt-git

   Post-installation tasks
       systemd

       Activate the Salt Master and/or Minion via systemctl as follows:

          systemctl enable salt-master.service
          systemctl enable salt-minion.service

       Start the Master

       Once  you’ve  completed  all  of  these  steps you’re ready to start your Salt Master. You
       should be able to start your Salt Master now using the command seen here:

          systemctl start salt-master

       Now go to the Configuring Salt page.

   Debian GNU/Linux / Raspbian
       The Debian GNU/Linux distribution and some derivatives such as Raspbian include Salt
       packages in their repositories. However, the current stable Debian release contains
       outdated Salt releases. It is recommended to use the SaltStack repository for Debian as
       described below.

       Installation from official Debian and Raspbian repositories is described here.

   Installation from the Official SaltStack Repository
       Packages  for  Debian  9  (Stretch)  and  Debian  8 (Jessie) are available in the Official
       SaltStack repository.

       Instructions are at https://repo.saltstack.com/#debian.

       NOTE:
          Regular security support for Debian 7 ended on April 25th 2016. As a  result,  2016.3.1
          and 2015.8.10 will be the last Salt releases for which Debian 7 packages are created.

   Installation from the Debian / Raspbian Official Repository
       The  Debian  distributions contain mostly old Salt packages built by the Debian Salt Team.
       You can install Salt components directly from Debian but it  is  recommended  to  use  the
       instructions above for the packages from the official Salt repository.

       On Jessie, there is an option to install the Salt minion from Stretch, with the
       python-tornado dependency coming from the jessie-backports repository.

       To install a fresh release of the Salt minion on Jessie:

       1. Add jessie-backports and stretch repositories:

          Debian:

             echo 'deb http://httpredir.debian.org/debian jessie-backports main' >> /etc/apt/sources.list
             echo 'deb http://httpredir.debian.org/debian stretch main' >> /etc/apt/sources.list

          Raspbian:

             echo 'deb http://archive.raspbian.org/raspbian/ stretch main' >> /etc/apt/sources.list

        2. Make Jessie the default release:

             echo 'APT::Default-Release "jessie";' > /etc/apt/apt.conf.d/10apt

       3. Install Salt dependencies:

          Debian:

             apt-get update
             apt-get install python-zmq python-systemd/jessie-backports python-tornado/jessie-backports salt-common/stretch

          Raspbian:

             apt-get update
             apt-get install python-zmq python-tornado/stretch salt-common/stretch

        4. Install the Salt minion package from the latest Debian release (Stretch):

             apt-get install salt-minion/stretch

   Install Packages
       Install the Salt master, minion or other packages from the  repository  with  the  apt-get
       command.  These examples each install one of the Salt components, but more than one package
       name may be given at a time:

       · apt-get install salt-api

       · apt-get install salt-cloud

       · apt-get install salt-master

       · apt-get install salt-minion

       · apt-get install salt-ssh

       · apt-get install salt-syndic

   Post-installation tasks
       Now, go to the Configuring Salt page.

   Arista EOS Salt minion installation guide
       The Salt minion for Arista EOS is distributed as a SWIX extension  and  can  be  installed
       directly  on  the  switch.  The  EOS  network  operating  system  is  based  on old Fedora
       distributions and the installation  of  the  salt-minion  requires  backports.  This  SWIX
       extension contains the necessary backports, together with the Salt basecode.

       NOTE:
          This  SWIX extension has been tested on Arista DCS-7280SE-68-R, running EOS 4.17.5M and
          vEOS 4.18.3F.

   Important Notes
       This package is in beta; make sure to test it carefully before running it in production.

       If it is confirmed to work correctly, please report back and add a note on this page with
       the platform model and EOS version.

       If you want to uninstall this package, please refer to the uninstalling section.

   Installation from the Official SaltStack Repository
       Download the swix package and save it to flash.

          veos#copy https://salt-eos.netops.life/salt-eos-latest.swix flash:
          veos#copy https://salt-eos.netops.life/startup.sh flash:

   Install the Extension
       Copy the Salt package to the extension partition:

          veos#copy flash:salt-eos-latest.swix extension:

       Install the SWIX

          veos#extension salt-eos-latest.swix force

       Verify the installation

          veos#show extensions | include salt-eos
               salt-eos-2017-07-19.swix      1.0.11/1.fc25        A, F                27

       Change the Salt master IP address or FQDN by editing the SALT_MASTER variable:

          veos#bash vi /mnt/flash/startup.sh

       Make sure you enable the eAPI with the unix-socket transport:

          veos(config)#management api http-commands
                   protocol unix-socket
                   no shutdown

   Post-installation tasks
       Generate the keys and host record, and start the Salt minion:

          veos#bash
          #sudo /mnt/flash/startup.sh

       The salt-minion should now be running.

       Copy the installed extensions to boot-extensions

          veos#copy installed-extensions boot-extensions

       Apply an event-handler so that EOS starts the salt-minion during boot-up:

          veos(config)#event-handler boot-up-script
             trigger on-boot
             action bash sudo /mnt/flash/startup.sh

       For  more  specific  installation  details of the salt-minion, please refer to Configuring
       Salt.

   Uninstalling
       If you decide to uninstall this package, the following steps are recommended for safety:

       1. Remove the extension from boot-extensions

          veos#bash rm /mnt/flash/boot-extensions

        2. Remove the extension from the extensions folder

          veos#bash rm /mnt/flash/.extensions/salt-eos-latest.swix

        3. Remove the boot-up script

          veos(config)#no event-handler boot-up-script

   Additional Information
       This SWIX extension contains the following RPM packages:

          libsodium-1.0.11-1.fc25.i686.rpm
          libstdc++-6.2.1-2.fc25.i686.rpm
          openpgm-5.2.122-6.fc24.i686.rpm
          python-Jinja2-2.8-0.i686.rpm
          python-PyYAML-3.12-0.i686.rpm
          python-babel-0.9.6-5.fc18.noarch.rpm
          python-backports-1.0-3.fc18.i686.rpm
          python-backports-ssl_match_hostname-3.4.0.2-1.fc18.noarch.rpm
          python-backports_abc-0.5-0.i686.rpm
          python-certifi-2016.9.26-0.i686.rpm
          python-chardet-2.0.1-5.fc18.noarch.rpm
          python-crypto-1.4.1-1.noarch.rpm
          python-crypto-2.6.1-1.fc18.i686.rpm
          python-futures-3.1.1-1.noarch.rpm
          python-jtextfsm-0.3.1-0.noarch.rpm
          python-kitchen-1.1.1-2.fc18.noarch.rpm
          python-markupsafe-0.18-1.fc18.i686.rpm
          python-msgpack-python-0.4.8-0.i686.rpm
          python-napalm-base-0.24.3-1.noarch.rpm
          python-napalm-eos-0.6.0-1.noarch.rpm
          python-netaddr-0.7.18-0.noarch.rpm
          python-pyeapi-0.7.0-0.noarch.rpm
          python-salt-2017.7.0_1414_g2fb986f-1.noarch.rpm
          python-singledispatch-3.4.0.3-0.i686.rpm
          python-six-1.10.0-0.i686.rpm
          python-tornado-4.4.2-0.i686.rpm
          python-urllib3-1.5-7.fc18.noarch.rpm
          python2-zmq-15.3.0-2.fc25.i686.rpm
          zeromq-4.1.4-5.fc25.i686.rpm

   Fedora
       Beginning with version 0.9.4, Salt has been available in the primary  Fedora  repositories
       and EPEL. It is installable using yum or dnf, depending on your version of Fedora.

       NOTE:
          Released  versions  of  Salt starting with 2015.5.2 through 2016.3.2 do not have Fedora
           packages available through EPEL. To install a version of Salt within this release range,
          please use SaltStack’s Bootstrap Script and use the git method of installing Salt using
          the version’s associated release tag.

          Release 2016.3.3 and onward will have packaged versions available via EPEL.
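       The bootstrap script’s git method mentioned above can be sketched like this, using one
       release tag from that range as an example:

```shell
curl -L https://bootstrap.saltstack.com -o bootstrap-salt.sh
sudo sh bootstrap-salt.sh git v2016.3.1
```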

       WARNING: Fedora 19 ships with systemd 204.  This version of systemd has known bugs, fixed
       in later revisions, that prevent the salt-master from starting reliably or opening the
       network connections that it needs.  It is unlikely that a salt-master will start or run
       reliably on any distribution that uses systemd version 204 or earlier.  Running
       salt-minions should be OK.

   Installation
       Salt can be installed using yum and is available in the standard Fedora repositories.

   Stable Release
       Salt is packaged separately for the minion and the master. It is necessary only to install
       the  appropriate  package for the role the machine will play. Typically, there will be one
       master and multiple minions.

          yum install salt-master
          yum install salt-minion

   Installing from updates-testing
       When a new Salt release is  packaged,  it  is  first  admitted  into  the  updates-testing
       repository, before being moved to the stable repo.

       To install from updates-testing, use the enablerepo argument for yum:

          yum --enablerepo=updates-testing install salt-master
          yum --enablerepo=updates-testing install salt-minion

   Installation Using pip
       Since  Salt is on PyPI, it can be installed using pip, though most users prefer to install
       using a package manager.

       Installing from pip has a few additional requirements:

       · Install the group ‘Development Tools’, dnf groupinstall 'Development Tools'

        · Install the ‘zeromq-devel’ package if the build later fails when linking against
          ZeroMQ.

       A pip install does not make the init scripts or the /etc/salt directory, and you will need
       to provide your own systemd service unit.
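       Since pip installs no unit files, a minimal salt-minion unit can be sketched as follows.
       The paths and unit options are assumptions; the packaged units differ:

```
# /etc/systemd/system/salt-minion.service (sketch)
[Unit]
Description=The Salt Minion
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/salt-minion
Restart=on-failure

[Install]
WantedBy=multi-user.target
```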

       Installation from pip:

          pip install salt

       WARNING:
          If installing from pip (or from source using setup.py install),  be  advised  that  the
          yum-utils  package  is  needed  for  Salt  to  manage  packages.  Also,  if  the Python
          dependencies are not already installed, then you will need  additional  libraries/tools
          installed to build some of them.  More information on this can be found here.

   Post-installation tasks
       Master

       To have the Master start automatically at boot time:

          systemctl enable salt-master.service

       To start the Master:

          systemctl start salt-master.service

       Minion

       To have the Minion start automatically at boot time:

          systemctl enable salt-minion.service

       To start the Minion:

          systemctl start salt-minion.service

       Now go to the Configuring Salt page.

   FreeBSD
   Installation
       Salt is available in binary package form both from the FreeBSD pkgng repository and
       directly from SaltStack. The instructions below outline installation via both methods:

   FreeBSD repo
       The FreeBSD pkgng repository is preconfigured on FreeBSD 10.x and above. No  configuration
       is needed to pull from this repository.

          pkg install py27-salt

       These packages are usually available within a few days of upstream release.

   SaltStack repo
       SaltStack  also  hosts  internal  binary  builds  of  the  Salt  package,  available  from
       https://repo.saltstack.com/freebsd/. To make use of this  repository,  add  the  following
       file to your system:

       /usr/local/etc/pkg/repos/saltstack.conf:

          saltstack: {
            url: "https://repo.saltstack.com/freebsd/${ABI}/",
            enabled: yes
          }

       You should now be able to install Salt from this new repository:

          pkg install py27-salt

       These  packages  are  usually  available earlier than upstream FreeBSD. Also available are
       release candidates and development releases. Use these pre-release packages with caution.
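       With both repositories configured, the install can be pinned to the SaltStack repository
       explicitly via pkg’s -r option (a sketch; the repository name matches the saltstack.conf
       entry above):

```shell
pkg install -r saltstack py27-salt
```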

   Post-installation tasks
       Master

       Copy the sample configuration file:

          cp /usr/local/etc/salt/master.sample /usr/local/etc/salt/master

       rc.conf

       Activate the Salt Master in /etc/rc.conf:

          sysrc salt_master_enable="YES"

       Start the Master

       Start the Salt Master as follows:

          service salt_master start

       Minion

       Copy the sample configuration file:

          cp /usr/local/etc/salt/minion.sample /usr/local/etc/salt/minion

       rc.conf

       Activate the Salt Minion in /etc/rc.conf:

          sysrc salt_minion_enable="YES"

       Start the Minion

       Start the Salt Minion as follows:

          service salt_minion start

       Now go to the Configuring Salt page.

   Gentoo
       Salt can be easily installed on Gentoo via Portage:

          emerge app-admin/salt

   Post-installation tasks
       Now go to the Configuring Salt page.

   OpenBSD
       Salt was added to the OpenBSD ports tree on Aug 10th 2013.  It has been tested on  OpenBSD
       5.5 onwards.

       Salt  is  dependent  on  the  following  additional  ports.  These  will  be  installed as
       dependencies of the sysutils/salt port:

          devel/py-futures
          devel/py-progressbar
          net/py-msgpack
          net/py-zmq
          security/py-crypto
          security/py-M2Crypto
          textproc/py-MarkupSafe
          textproc/py-yaml
          www/py-jinja2
          www/py-requests
          www/py-tornado

   Installation
       To install Salt from the OpenBSD pkg repo, use the command:

          pkg_add salt

   Post-installation tasks
       Master

       To have the Master start automatically at boot time:

          rcctl enable salt_master

       To start the Master:

          rcctl start salt_master

       Minion

       To have the Minion start automatically at boot time:

          rcctl enable salt_minion

       To start the Minion:

          rcctl start salt_minion

       Now go to the Configuring Salt page.

   macOS
   Installation from the Official SaltStack Repository
       Latest stable build from the selected branch:

       The output of md5 <salt pkg> should match the contents of the corresponding md5 file.

       Earlier builds from supported branches

       Archived builds from unsupported branches

   Installation from Homebrew
          brew install saltstack

       It should be noted that Homebrew explicitly discourages the use of sudo:
          Homebrew is designed to work without using sudo. You  can  decide  to  use  it  but  we
          strongly  recommend  not  to do so. If you have used sudo and run into a bug then it is
           likely to be the cause. Please don’t file a bug report  unless  you  can  reproduce  it
           after reinstalling Homebrew from scratch without using sudo.

   Installation from MacPorts
          sudo port install salt

   Installation from Pip
       When only using the macOS system’s pip, install this way:

          sudo pip install salt

   Salt-Master Customizations
       NOTE:
          Salt  master  on  macOS is not tested or supported by SaltStack. See SaltStack Platform
          Support for more information.

        To run salt-master on macOS, add this configuration option to the /etc/salt/master file
        (root privileges are required):

          max_open_files: 8192

       On versions previous to macOS 10.10 (Yosemite), increase the root user maxfiles limit:

          sudo launchctl limit maxfiles 4096 8192

        NOTE:
           On macOS 10.10 (Yosemite) and higher, maxfiles should not  be  adjusted.  The  default
           limits are sufficient in all but the most extreme scenarios.  Overriding  these values
           with the setting above will cause system instability!

       Now the salt-master should run without errors:

          sudo salt-master --log-level=all

   Post-installation tasks
       Now go to the Configuring Salt page.

   RHEL / CentOS / Scientific Linux / Amazon Linux / Oracle Linux
       Salt  should  work  properly  with all mainstream derivatives of Red Hat Enterprise Linux,
       including CentOS, Scientific Linux, Oracle Linux, and Amazon Linux.  Report  any  bugs  or
       issues on the issue tracker.

   Installation from the Official SaltStack Repository
       Packages for Red Hat, CentOS, and Amazon Linux are available in the SaltStack Repository.

       · Red Hat / CentOS

       · Amazon Linux

       NOTE:
          As  of  2015.8.0, EPEL repository is no longer required for installing on RHEL systems.
          SaltStack repository provides all needed dependencies.

       WARNING:
           If installing on Red Hat Enterprise Linux 7 with the ‘RHEL Server Releases’ or ‘RHEL
           Server Optional Channel’ repositories disabled (not subscribed), append the CentOS 7
           GPG key URL to the SaltStack yum repository configuration to install the required base
           packages:

              [saltstack-repo]
              name=SaltStack repo for Red Hat Enterprise Linux $releasever
              baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest
              enabled=1
              gpgcheck=1
              gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG-KEY.pub
                     https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/base/RPM-GPG-KEY-CentOS-7

       NOTE:
          systemd and systemd-python are required by Salt, but are not installed by the Red Hat 7
          @base  installation  or  by  the Salt installation. These dependencies might need to be
          installed before Salt.
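       A sketch of installing those dependencies before Salt on Red Hat 7:

```shell
yum install -y systemd systemd-python
yum install -y salt-minion
```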

   Installation from the Community-Maintained Repository
       Beginning with version 0.9.4, Salt has been available in EPEL.

       NOTE:
           Packages in this repository are built by the community, and it can take a little while
           until the latest stable SaltStack release becomes available.

   RHEL/CentOS 6 and 7, Scientific Linux, etc.
       WARNING:
          Salt  2015.8  is  currently  not  available  in  EPEL  due to unsatisfied dependencies:
          python-crypto 2.6.1 or higher,  and  python-tornado  version  4.2.1  or  higher.  These
          packages are not currently available in EPEL for Red Hat Enterprise Linux 6 and 7.

   Enabling EPEL
       If  the  EPEL  repository  is  not  installed on your system, you can download the RPM for
       RHEL/CentOS 6 or for RHEL/CentOS 7 and install it using the following command:

          rpm -Uvh epel-release-X-Y.rpm

       Replace epel-release-X-Y.rpm with the appropriate filename.

   Installing Stable Release
       Salt is packaged separately for the minion and the master. It is necessary to install only
       the  appropriate package for the role the machine will play.  Typically, there will be one
       master and multiple minions.

          · yum install salt-master

          · yum install salt-minion

          · yum install salt-ssh

          · yum install salt-syndic

          · yum install salt-cloud

   Installing from epel-testing
       When a new  Salt  release  is  packaged,  it  is  first  admitted  into  the  epel-testing
       repository, before being moved to the stable EPEL repository.

       To install from epel-testing, use the enablerepo argument for yum:

          yum --enablerepo=epel-testing install salt-minion

   Installation Using pip
       Since  Salt is on PyPI, it can be installed using pip, though most users prefer to install
       using RPM packages (which can be installed from EPEL).

       Installing from pip has a few additional requirements:

       · Install the group ‘Development Tools’, yum groupinstall 'Development Tools'

        · Install the ‘zeromq-devel’ package if the build later fails when linking against
          ZeroMQ.

       A pip install does not make the init scripts or the /etc/salt directory, and you will need
       to provide your own systemd service unit.

       Installation from pip:

          pip install salt

       WARNING:
          If installing from pip (or from source using setup.py install),  be  advised  that  the
          yum-utils  package  is  needed  for  Salt  to  manage  packages.  Also,  if  the Python
          dependencies are not already installed, then you will need  additional  libraries/tools
          installed to build some of them.  More information on this can be found here.

   ZeroMQ 4
       We  recommend  using  ZeroMQ  4 where available. SaltStack provides ZeroMQ 4.0.5 and pyzmq
       14.5.0 in the SaltStack Repository.

       If this repository is added before Salt is installed, then installing  either  salt-master
       or  salt-minion  will  automatically pull in ZeroMQ 4.0.5, and additional steps to upgrade
       ZeroMQ and pyzmq are unnecessary.

   Package Management
       Salt’s interface to yum makes heavy use of  the  repoquery  utility,  from  the  yum-utils
       package.  This  package  will  be installed as a dependency if salt is installed via EPEL.
       However, if salt has been installed using pip, or a host is being managed using  salt-ssh,
       then  as  of  version  2014.7.0  yum-utils will be installed automatically to satisfy this
       dependency.

   Post-installation tasks
   Master
       To have the Master start automatically at boot time:

       RHEL/CentOS 5 and 6

          chkconfig salt-master on

       RHEL/CentOS 7

          systemctl enable salt-master.service

       To start the Master:

       RHEL/CentOS 5 and 6

          service salt-master start

       RHEL/CentOS 7

          systemctl start salt-master.service

   Minion
       To have the Minion start automatically at boot time:

       RHEL/CentOS 5 and 6

          chkconfig salt-minion on

       RHEL/CentOS 7

          systemctl enable salt-minion.service

       To start the Minion:

       RHEL/CentOS 5 and 6

          service salt-minion start

       RHEL/CentOS 7

          systemctl start salt-minion.service

       Now go to the Configuring Salt page.

   Solaris
        Salt is known to work on Solaris, but community packages are unmaintained.

       It is possible to install Salt on Solaris by using setuptools.

       For example, to install the develop version of salt:

          git clone https://github.com/saltstack/salt
          cd salt
          sudo python setup.py install --force

       NOTE:
          SaltStack does offer commercial support for Solaris which includes packages.

   Ubuntu
   Installation from the Official SaltStack Repository
        Packages for Ubuntu 16.04 (Xenial), Ubuntu 14.04 (Trusty), and Ubuntu 12.04 (Precise) are
        available in the SaltStack repository.

       Instructions are at https://repo.saltstack.com/#ubuntu.

   Install Packages
       Install  the  Salt  master,  minion or other packages from the repository with the apt-get
        command. These examples each install one of the Salt components, but  more  than  one
        package name may be given at a time:
       name may be given at a time:

       · apt-get install salt-api

       · apt-get install salt-cloud

       · apt-get install salt-master

       · apt-get install salt-minion

       · apt-get install salt-ssh

       · apt-get install salt-syndic

   Post-installation tasks
       Now go to the Configuring Salt page.

   Windows
       Salt  has  full  support  for running the Salt minion on Windows. You must connect Windows
       Salt minions to a Salt master on  a  supported  operating  system  to  control  your  Salt
       Minions.

       Many of the standard Salt modules have been ported to work on Windows and many of the Salt
       States currently work on Windows as well.

   Installation from the Official SaltStack Repository
       Latest stable build from the selected branch:

       The output of md5sum <salt minion exe> should match the contents of the corresponding  md5
       file.
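
       The verification flow above can be sketched with a stand-in file (the real installer and
       its .md5 file come from repo.saltstack.com; the filename below is hypothetical):

```shell
# Sketch of the checksum verification flow using a stand-in file in place
# of the real installer (the filename here is hypothetical).
printf 'stand-in installer bytes' > Salt-Minion-Setup.exe
# Stands in for the published .md5 file downloaded alongside the installer:
md5sum Salt-Minion-Setup.exe > Salt-Minion-Setup.exe.md5
# Verify; prints "Salt-Minion-Setup.exe: OK" when the hashes match.
md5sum -c Salt-Minion-Setup.exe.md5
```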

       Earlier builds from supported branches

       Archived builds from unsupported branches

       NOTE:
          The installation executable installs dependencies that the Salt minion requires.

       The 64bit installer has been tested on Windows 7 64bit and Windows Server 2008 R2 64bit.
       The 32bit installer has been tested on Windows Server 2008 32bit. Please file a bug
       report on our GitHub repo if you find issues on other platforms.

       There are installers available for Python 2 and Python 3.

       The  installer  will  detect  previous  installations of Salt and ask if you would like to
       remove them. Clicking OK will remove the Salt binaries and related  files  but  leave  any
       existing config, cache, and PKI information.

   Salt Minion Installation
       If  the  system  is  missing  the  appropriate  version  of the Visual C++ Redistributable
       (vcredist) the user will be prompted to install it. Click  OK  to  install  the  vcredist.
       Click Cancel to abort the installation without making modifications to the system.

       If  Salt  is  already  installed  on  the  system  the user will be prompted to remove the
       previous installation. Click OK to uninstall Salt without removing the configuration,  PKI
       information,  or  cached  files.  Click Cancel to abort the installation before making any
       modifications to the system.

       After the Welcome and the License Agreement, the installer asks for two pieces of
       information to configure the minion: the master hostname and the minion name. The
       installer will update the minion config with these options.

       If the installer finds an existing minion config file, these fields will be populated with
       values  from  the  existing  config,  but  they  will be grayed out.  There will also be a
       checkbox to use the existing config. If you continue, the existing config will be used. If
       the  checkbox  is  unchecked,  default  values  are  displayed  and can be changed. If you
       continue, the existing config  file  in  c:\salt\conf  will  be  removed  along  with  the
       c:\salt\conf\minion.d directory. The values entered will be used with the default config.

       The  final  page  allows you to start the minion service and optionally change its startup
       type. By default, the minion is set to Automatic. You can change the minion start type  to
       Automatic (Delayed Start) by checking the ‘Delayed Start’ checkbox.

       NOTE:
           Highstates that require a reboot may fail after the reboot because Salt continues the
           highstate before Windows has finished booting. This can be fixed by changing the
           startup type to ‘Automatic (Delayed Start)’. The drawback is that it may increase the
           time it takes for the ‘salt-minion’ service to actually start.

       The salt-minion service will appear in the Windows Service  Manager  and  can  be  managed
       there or from the command line like any other Windows service.

          sc start salt-minion
          net start salt-minion

   Installation Prerequisites
       Most Salt functionality should work right out of the box. A few Salt modules rely on
       PowerShell. The minimum version of PowerShell required for Salt is version 3. If you
       intend to work with DSC, then PowerShell version 5 is the minimum.

   Silent Installer Options
       The  installer  can  be  run  silently by providing the /S option at the command line. The
       installer also accepts the following options for configuring the Salt Minion silently:

                       ┌──────────────────────┬──────────────────────────────────┐
                       │Option                │ Description                      │
                       ├──────────────────────┼──────────────────────────────────┤
                       │/master=              │ A string value  to  set  the  IP │
                       │                      │ address   or   hostname  of  the │
                       │                      │ master. Default value is ‘salt’. │
                       │                      │ You  can pass a single master or │
                       │                      │ a   comma-separated   list    of │
                       │                      │ masters.    Setting  the  master │
                       │                      │ will cause the installer to  use │
                       │                      │ the  default  config or a custom │
                       │                      │ config if defined.               │
                       ├──────────────────────┼──────────────────────────────────┤
                       │/minion-name=         │ A string value to set the minion │
                       │                      │ name.     Default    value    is │
                       │                      │ ‘hostname’. Setting  the  minion │
                       │                      │ name causes the installer to use │
                       │                      │ the default config or  a  custom │
                       │                      │ config if defined.               │
                       ├──────────────────────┼──────────────────────────────────┤
                       │/start-minion=        │ Either  a 1 or 0. ‘1’ will start │
                       │                      │ the  salt-minion  service,   ‘0’ │
                       │                      │ will  not.  Default  is to start │
                       │                      │ the service after installation.  │
                       ├──────────────────────┼──────────────────────────────────┤
                       │/start-minion-delayed │ Set the  minion  start  type  to │
                       │                      │ Automatic (Delayed Start).       │
                       ├──────────────────────┼──────────────────────────────────┤
                       │/default-config       │ Overwrite the existing config if │
                       │                      │ present with the default  config │
                       │                      │ for  salt. Default is to use the │
                       │                      │ existing config if  present.  If │
                       │                      │ /master  and/or  /minion-name is │
                       │                      │ passed,  those  values  will  be │
                       │                      │ used  to  update the new default │
                       │                      │ config.                          │
                        ├──────────────────────┼──────────────────────────────────┤
                       │/custom-config=       │ A string  value  specifying  the │
                       │                      │ name  of a custom config file in │
                       │                      │ the same path as  the  installer │
                        │                      │ or the full path to a custom     │
                       │                      │ config file. If  /master  and/or │
                       │                      │ /minion-name  is  passed,  those │
                       │                      │ values will be  used  to  update │
                       │                      │ the new custom config.           │
                       ├──────────────────────┼──────────────────────────────────┤
                       │/S                    │ Runs  the installation silently. │
                       │                      │ Uses the above settings  or  the │
                       │                      │ defaults.                        │
                       ├──────────────────────┼──────────────────────────────────┤
                       │/?                    │ Displays command line help.      │
                       └──────────────────────┴──────────────────────────────────┘

       NOTE:
          /start-service  has  been  deprecated but will continue to function as expected for the
          time being.

       NOTE:
           /default-config and /custom-config= will back up an existing config if found. A
           timestamp and a .bak extension will be added. This includes the minion file and the
           minion.d directory.

       Here are some examples of using the silent installer:

          # Install the Salt Minion
          # Configure the minion and start the service

          Salt-Minion-2017.7.1-Py2-AMD64-Setup.exe /S /master=yoursaltmaster /minion-name=yourminionname

          # Install the Salt Minion
          # Configure the minion but don't start the minion service

          Salt-Minion-2017.7.1-Py3-AMD64-Setup.exe /S /master=yoursaltmaster /minion-name=yourminionname /start-minion=0

          # Install the Salt Minion
          # Configure the minion using a custom config and configuring multimaster

          Salt-Minion-2017.7.1-Py3-AMD64-Setup.exe /S /custom-config=windows_minion /master=prod_master1,prod_master2

   Running the Salt Minion on Windows as an Unprivileged User
       Notes:

       · These instructions were tested with Windows Server 2008 R2

       · They are generalizable to any version of Windows that supports a salt-minion

   Create the Unprivileged User that the Salt Minion will Run As
       1.  Click Start > Control Panel > User Accounts.

       2.  Click Add or remove user accounts.

       3.  Click Create new account.

       4.  Enter salt-user (or a name of your preference) in the New account name field.

       5.  Select the Standard user radio button.

       6.  Click the Create Account button.

       7.  Click on the newly created user account.

       8.  Click the Create a password link.

       9.  In the New  password  and  Confirm  new  password  fields,  provide  a  password  (e.g
           “SuperSecretMinionPassword4Me!”).

       10. In the Type a password hint field, provide appropriate text (e.g. “My Salt Password”).

       11. Click the Create password button.

       12. Close the Change an Account window.

   Add the New User to the Access Control List for the Salt Folder
       1. In a File Explorer window, browse to the path where Salt is installed (the default path
          is C:\Salt).

       2. Right-click on the Salt folder and select Properties.

       3. Click on the Security tab.

       4. Click the Edit button.

       5. Click the Add button.

       6. Type the name of your designated Salt user and click the OK button.

       7. Check the box to Allow the Modify permission.

       8. Click the OK button.

       9. Click the OK button to close the Salt Properties window.

   Update the Windows Service User for the salt-minion Service
       1.  Click Start > Administrative Tools > Services.

       2.  In the Services list, right-click on salt-minion and select Properties.

       3.  Click the Log On tab.

       4.  Click the This account radio button.

       5.  Provide the account credentials created in section A.

       6.  Click the OK button.

       7.  Click the OK button to the prompt confirming that the user has been granted the Log On
           As A Service right.

       8.  Click  the  OK  button  to the prompt confirming that The new logon name will not take
           effect until you stop and restart the service.

       9.  Right-Click on salt-minion and select Stop.

       10. Right-Click on salt-minion and select Start.

   Building and Developing on Windows
       This document will explain how to set up a development environment for  Salt  on  Windows.
       The  development  environment  allows you to work with the source code to customize or fix
       bugs. It will also allow you to build your own installation.

       There are several scripts to automate creating a Windows installer as well as  setting  up
       an environment that facilitates developing and troubleshooting Salt code. They are located
       in the pkg\windows directory of the Salt repo.

   Scripts:
                          ┌────────────────┬──────────────────────────────────┐
                          │Script          │ Description                      │
                          ├────────────────┼──────────────────────────────────┤
                          │build_env_2.ps1 │ A PowerShell script that sets up │
                          │                │ a Python 2 build environment     │
                           ├────────────────┼──────────────────────────────────┤
                          │build_env_3.ps1 │ A PowerShell script that sets up │
                          │                │ a Python 3 build environment     │
                          ├────────────────┼──────────────────────────────────┤
                          │build_pkg.bat   │ A  batch  file  that  builds   a │
                          │                │ Windows  installer  based on the │
                          │                │ contents  of   the   C:\Python27 │
                          │                │ directory                        │
                          ├────────────────┼──────────────────────────────────┤
                          │build.bat       │ A    batch   file   that   fully │
                          │                │ automates the  building  of  the │
                          │                │ Windows   installer   using  the │
                          │                │ above two scripts                │
                          └────────────────┴──────────────────────────────────┘

       NOTE:
          The build.bat and build_pkg.bat scripts both accept a parameter to specify the  version
          of  Salt that will be displayed in the Windows installer.  If no version is passed, the
          version will be determined using git.

          Both scripts also accept an additional parameter to specify the version  of  Python  to
          use. The default is 2.

   Prerequisite Software
       The only prerequisite is Git for Windows.

   Create a Build Environment
   1. Working Directory
       Create  a  Salt-Dev  directory  on  the  root  of  C:. This will be our working directory.
       Navigate to Salt-Dev and clone the Salt repo from GitHub.

       Open a command line and type:

          cd \
          md Salt-Dev
          cd Salt-Dev
          git clone https://github.com/saltstack/salt

       Go into the salt directory and checkout the version  of  salt  to  work  with  (2016.3  or
       higher).

          cd salt
          git checkout 2017.7.2

   2. Setup the Python Environment
       Navigate to the pkg\windows directory and execute the build_env_2.ps1 PowerShell script.

          cd pkg\windows
          powershell -file build_env_2.ps1

       NOTE:
          You  can  also  do this from Explorer by navigating to the pkg\windows directory, right
          clicking the build_env_2.ps1 powershell script and selecting Run with PowerShell

       This will download and install Python 2 with all the dependencies needed  to  develop  and
       build Salt.

       NOTE:
          If  you  get  an  error or the script fails to run you may need to change the execution
          policy. Open a powershell window and type the following command:

          Set-ExecutionPolicy RemoteSigned

   3. Salt in Editable Mode
       Editable mode allows you to more  easily  modify  and  test  the  source  code.  For  more
       information see the Pip documentation.

       Navigate to the root of the salt directory and install Salt in editable mode with pip

          cd \Salt-Dev\salt
          pip install -e .

       NOTE:
          The . is important

       NOTE:
          If pip is not recognized, you may need to restart your shell to get the updated path

       NOTE:
           If pip is still not recognized, make sure that the Python Scripts folder is in the
           System %PATH%. (C:\Python27\Scripts)

   4. Setup Salt Configuration
       Salt requires a minion configuration file and a few other directories. The default config
       file is named minion and is located in C:\salt\conf. The easiest way to set this up is to
       copy the contents of the salt\pkg\windows\buildenv directory to C:\salt.

          cd \
          md salt
          xcopy /s /e \Salt-Dev\salt\pkg\windows\buildenv\* \salt\

       Now go into the C:\salt\conf directory and edit the minion config file  named  minion  (no
       extension).  You  need  to  configure  the master and id parameters in this file. Edit the
       following lines:

          master: <ip or name of your master>
          id: <name of your minion>
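
       For instance, with hypothetical values filled in, the edited lines might read:

```yaml
# Hypothetical example values; substitute your own master address and minion name
master: 192.0.2.10
id: winminion01
```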

   Create a Windows Installer
       To create a Windows installer, follow steps 1 and 2 from Create a Build Environment above.
       Then proceed to 3 below:

   3. Install Salt
       To create the installer for Windows we install Salt using Python instead of pip. Navigate
       to the root salt directory and install Salt.

          cd \Salt-Dev\salt
          python setup.py install

   4. Create the Windows Installer
       Navigate to the pkg\windows directory and run the build_pkg.bat  with  the  build  version
       (2017.7.2) and the Python version as parameters.

          cd pkg\windows
          build_pkg.bat 2017.7.2 2
                        ^^^^^^^^ ^
                            |    |
          # build version --     |
          # python version ------

       NOTE:
          If  no version is passed, the build_pkg.bat will guess the version number using git. If
          the python version is not passed, the default is 2.

   Creating a Windows Installer: Alternate Method (Easier)
       Clone the Salt repo from GitHub into the directory of your  choice.  We’re  going  to  use
       Salt-Dev.

          cd \
          md Salt-Dev
          cd Salt-Dev
          git clone https://github.com/saltstack/salt

       Go into the salt directory and checkout the version of Salt you want to build.

          cd salt
          git checkout 2017.7.2

       Then  navigate  to  pkg\windows  and  run  the  build.bat  script  with the version you’re
       building.

          cd pkg\windows
          build.bat 2017.7.2 3
                    ^^^^^^^^ ^
                        |    |
          # build version    |
          # python version --

       This will install everything needed to build a Windows installer for Salt using Python  3.
       The binary will be in the salt\pkg\windows\installer directory.

   Testing the Salt minion
       1. Create the directory C:\salt (if it doesn’t exist already)

        2. Copy the example conf and var directories from pkg\windows\buildenv into C:\salt

       3. Edit C:\salt\conf\minion

                 master: ipaddress or hostname of your salt-master

       4. Start the salt-minion

                 cd C:\Python27\Scripts
                 python salt-minion -l debug

       5. On the salt-master accept the new minion’s key

                 sudo salt-key -A

              This accepts all unaccepted keys. If you’re concerned about security, accept only
              the key for this specific minion.

       6. Test that your minion is responding
             On the salt-master run:

                 sudo salt '*' test.ping

       You should get the following response: {'your minion hostname': True}

    Package Management Under Windows 2003
       Windows Server 2003 and Windows XP have both reached End of Support. Though  Salt  is  not
       officially supported on operating systems that are EoL, some functionality may continue to
       work.

       On Windows Server 2003, you need to install the optional “WMI Windows Installer
       Provider” component to get a full list of installed packages. Without it, salt-minion
       cannot report some installed software.

   SUSE
   Installation from the Official SaltStack Repository
       Packages for SUSE 12 SP1, SUSE 12, SUSE  11,  openSUSE  13  and  openSUSE  Leap  42.1  are
       available in the SaltStack Repository.

       Instructions are at https://repo.saltstack.com/#suse.

   Installation from the SUSE Repository
       Since openSUSE 13.2, Salt 2014.1.11 is available in the primary repositories. With the
       release of SUSE Manager 3 a new repository setup has been created. The new repo is
       systemsmanagement:saltstack, which is the source for newer stable packages. For backward
       compatibility a link package will be created to the old devel:language:python repo. All
       development of SUSE packages will be done in systemsmanagement:saltstack:testing. This
       ensures that Salt is available in the mainline SUSE repositories, a stable release repo,
       and a testing repo for further enhancements.

   Installation
       Salt  can  be  installed  using  zypper  and  is  available  in the standard openSUSE/SLES
       repositories.

   Stable Release
       Salt is packaged separately for the minion and the master. It is necessary only to install
       the  appropriate  package for the role the machine will play. Typically, there will be one
       master and multiple minions.

          zypper install salt-master
          zypper install salt-minion

   Post-installation tasks openSUSE
       Master

       To have the Master start automatically at boot time:

          systemctl enable salt-master.service

       To start the Master:

          systemctl start salt-master.service

       Minion

       To have the Minion start automatically at boot time:

          systemctl enable salt-minion.service

       To start the Minion:

          systemctl start salt-minion.service

   Post-installation tasks SLES
       Master

       To have the Master start automatically at boot time:

          chkconfig salt-master on

       To start the Master:

          rcsalt-master start

       Minion

       To have the Minion start automatically at boot time:

          chkconfig salt-minion on

       To start the Minion:

          rcsalt-minion start

   Unstable Release
   openSUSE
       For openSUSE Tumbleweed run the following as root:

          zypper addrepo http://download.opensuse.org/repositories/systemsmanagement:/saltstack/openSUSE_Tumbleweed/systemsmanagement:saltstack.repo
          zypper refresh
          zypper install salt salt-minion salt-master

       For openSUSE 42.1 Leap run the following as root:

          zypper addrepo http://download.opensuse.org/repositories/systemsmanagement:/saltstack/openSUSE_Leap_42.1/systemsmanagement:saltstack.repo
          zypper refresh
          zypper install salt salt-minion salt-master

       For openSUSE 13.2 run the following as root:

          zypper addrepo http://download.opensuse.org/repositories/systemsmanagement:/saltstack/openSUSE_13.2/systemsmanagement:saltstack.repo
          zypper refresh
          zypper install salt salt-minion salt-master

   SUSE Linux Enterprise
       For SLE 12 run the following as root:

          zypper addrepo http://download.opensuse.org/repositories/systemsmanagement:/saltstack/SLE_12/systemsmanagement:saltstack.repo
          zypper refresh
          zypper install salt salt-minion salt-master

       For SLE 11 SP4 run the following as root:

          zypper addrepo http://download.opensuse.org/repositories/systemsmanagement:/saltstack/SLE_11_SP4/systemsmanagement:saltstack.repo
          zypper refresh
          zypper install salt salt-minion salt-master

       Now go to the Configuring Salt page.

   Initial Configuration
   Configuring Salt
       Salt configuration is very simple. The default configuration for the master will work  for
       most installations and the only requirement for setting up a minion is to set the location
       of the master in the minion configuration file.

       The configuration files will be installed to /etc/salt and are named after the  respective
       components, /etc/salt/master, and /etc/salt/minion.

   Master Configuration
       By  default the Salt master listens on ports 4505 and 4506 on all interfaces (0.0.0.0). To
       bind Salt to a specific IP, redefine the “interface” directive in the master configuration
       file, typically /etc/salt/master, as follows:

          - #interface: 0.0.0.0
          + interface: 10.0.0.1

       After  updating  the  configuration  file,  restart  the  Salt  master.   See  the  master
       configuration reference for more details about other configurable options.

   Minion Configuration
       Although there are many Salt Minion configuration options, configuring a  Salt  Minion  is
       very  simple.  By default a Salt Minion will try to connect to the DNS name “salt”; if the
       Minion is able to resolve that name correctly, no configuration is needed.

       If the DNS name “salt” does not resolve to point to the correct location  of  the  Master,
       redefine   the   “master”   directive   in   the   minion  configuration  file,  typically
       /etc/salt/minion, as follows:

          - #master: salt
          + master: 10.0.0.1

       After  updating  the  configuration  file,  restart  the  Salt  minion.   See  the  minion
       configuration reference for more details about other configurable options.

   Proxy Minion Configuration
       A proxy minion emulates the behaviour of a regular minion and inherits its options.

       Similarly, the configuration file is /etc/salt/proxy and the proxy tries to connect to the
       DNS name “salt”.

       In addition to the regular minion options, there are several proxy-specific options; see
       the proxy minion configuration reference.
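
       As a minimal sketch, if the DNS name “salt” does not resolve to your master,
       /etc/salt/proxy can simply override the master setting (the address below is a
       placeholder):

```yaml
# /etc/salt/proxy -- minimal sketch; 10.0.0.1 is a placeholder address
master: 10.0.0.1
```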

   Running Salt
       1. Start the master in the foreground (to daemonize the process, pass the -d flag):

             salt-master

       2. Start the minion in the foreground (to daemonize the process, pass the -d flag):

             salt-minion

          Having trouble?

                 The  simplest  way  to  troubleshoot Salt is to run the master and minion in the
                 foreground with log level set to debug:

              salt-master --log-level=debug

          For information on salt’s logging system please see the logging document.

          Run as an unprivileged (non-root) user

                 To run Salt as another user, set the user parameter in the master config file.

                 Additionally, ownership, and permissions need to be set such  that  the  desired
                 user   can  read  from  and  write  to  the  following  directories  (and  their
                 subdirectories, where applicable):

          · /etc/salt

          · /var/cache/salt

          · /var/log/salt

          · /var/run/salt

          More information about running salt as a non-privileged user can be found here.
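
       The ownership step above can be sketched as a small helper (the account name saltuser
       and the use of chown are assumptions, not the only way to set this up; run as root on a
       real system):

```shell
# Minimal sketch: recursively hand each given directory to a user.
# "saltuser" in the usage note below is a hypothetical account name.
set_salt_ownership() {
    user="$1"; shift
    for d in "$@"; do
        chown -R "$user" "$d" || return 1
    done
}
# Usage (as root):
# set_salt_ownership saltuser /etc/salt /var/cache/salt /var/log/salt /var/run/salt
```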

       There is also a full troubleshooting guide available.

   Key Identity
       Salt provides commands to validate the identity of  your  Salt  master  and  Salt  minions
       before  the  initial  key  exchange.  Validating  key  identity  helps avoid inadvertently
       connecting to the wrong Salt master, and  helps  prevent  a  potential  MiTM  attack  when
       establishing the initial connection.

   Master Key Fingerprint
       Print the master key fingerprint by running the following command on the Salt master:

          salt-key -F master

       Copy  the  master.pub  fingerprint from the Local Keys section, and then set this value as
       the master_finger in the minion configuration file. Save the configuration file  and  then
       restart the Salt minion.
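
       For example, the resulting minion configuration entry might look like this (the
       fingerprint shown is a placeholder; use the exact value printed by salt-key -F master):

```yaml
# /etc/salt/minion -- the value below is a placeholder, not a real fingerprint
master_finger: 'ba:0d:...:c4'
```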

   Minion Key Fingerprint
       Run the following command on each Salt minion to view the minion key fingerprint:

          salt-call --local key.finger

       Compare  this  value  to  the  value  that is displayed when you run the salt-key --finger
       <MINION_ID> command on the Salt master.

   Key Management
       Salt uses AES encryption for all communication between the Master  and  the  Minion.  This
       ensures  that  the  commands  sent  to  the  Minions  cannot  be  tampered  with, and that
       communication between Master and Minion is authenticated through trusted, accepted keys.

       Before commands can be sent to a Minion, its key must be accepted on the Master.  Run  the
       salt-key command to list the keys known to the Salt Master:

          [root@master ~]# salt-key -L
          Unaccepted Keys:
          alpha
          bravo
          charlie
          delta
          Accepted Keys:

       This example shows that the Salt Master is aware of four Minions, but none of the keys has
       been accepted. To accept the keys and allow the Minions to be controlled  by  the  Master,
       again use the salt-key command:

          [root@master ~]# salt-key -A
          [root@master ~]# salt-key -L
          Unaccepted Keys:
          Accepted Keys:
          alpha
          bravo
          charlie
          delta

       The salt-key command allows for signing keys individually or in bulk. The example above,
       using -A, bulk-accepts all pending keys. To accept keys individually, use the lowercase
       of the same option, -a keyname.

       SEE ALSO:
          salt-key manpage

   Sending Commands
       Communication  between  the  Master  and a Minion may be verified by running the test.ping
       command:

          [root@master ~]# salt alpha test.ping
          alpha:
              True

       Communication between the Master and all Minions may be tested in a similar way:

          [root@master ~]# salt '*' test.ping
          alpha:
              True
          bravo:
              True
          charlie:
              True
          delta:
              True

       Each of the Minions should send a True response as shown above.

   What’s Next?
       Understanding targeting is important. From there, depending on the way  you  wish  to  use
       Salt,  you  should  also  proceed  to  learn  about  Remote  Execution  and  Configuration
       Management.

   Additional Installation Guides
   Salt Bootstrap
       The Salt Bootstrap Script allows a user to install the Salt Minion or Master on a  variety
       of system distributions and versions.

       The Salt Bootstrap Script is a shell script known as bootstrap-salt.sh. It runs through a
       series of checks to determine the operating system type and version, then installs the
       Salt binaries using the appropriate methods.

       The Salt Bootstrap Script installs the minimum number of packages required to run Salt.
       For example, if you use the bootstrap to install via packages, Git will not be installed.
       Installing the minimum number of packages helps ensure the script stays as lightweight as
       possible, assuming the user will install any other required packages after the Salt
       binaries are present on the system.

       The  Salt  Bootstrap  Script is maintained in a separate repo from Salt, complete with its
       own issues, pull requests, contributing guidelines, release protocol, etc.

       To learn more, please see the Salt Bootstrap repo links:

       · Salt Bootstrap repo

       · README: includes supported operating systems, example usage, and more.

       · Contributing Guidelines

       · Release Process

       NOTE:
          The  Salt  Bootstrap   script   can   be   found   in   the   Salt   repo   under   the
          salt/cloud/deploy/bootstrap-salt.sh path. Any changes to this file will be overwritten!
          Bug fixes and feature additions must be submitted via the Salt Bootstrap  repo.  Please
          see the Salt Bootstrap Script’s Release Process for more information.
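       As a rough sketch of typical usage (the full option list lives in the Bootstrap README):

```shell
# Download the script and run it; by default it installs a minion.
curl -o bootstrap-salt.sh -L https://bootstrap.saltstack.com
sudo sh bootstrap-salt.sh
# sudo sh bootstrap-salt.sh -M    # -M additionally installs a master
```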

   Opening the Firewall up for Salt
       The  Salt  master  communicates with the minions using an AES-encrypted ZeroMQ connection.
       These communications are done over TCP ports 4505 and 4506, which need to be accessible on
       the  master  only.  This  document  outlines  suggested  firewall rules for allowing these
       incoming connections to the master.

       NOTE:
          No firewall configuration needs to be done on Salt minions. These changes refer to  the
          master only.

   Fedora 18 and beyond / RHEL 7 / CentOS 7
       Starting with Fedora 18, FirewallD is the tool used to dynamically manage the firewall
       rules on a host. It has support for IPv4/IPv6 settings and the separation of runtime and
       permanent configurations. To interact with FirewallD, use the command line client
       firewall-cmd.

       firewall-cmd example:

          firewall-cmd --permanent --zone=<zone> --add-port=4505-4506/tcp

       Please choose the desired zone according to your setup. Don’t forget to reload after you
       have made your changes:

          firewall-cmd --reload
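       Putting the steps together with a concrete zone (public is only an example here) and
       verifying the result:

```shell
firewall-cmd --permanent --zone=public --add-port=4505-4506/tcp
firewall-cmd --reload
firewall-cmd --zone=public --list-ports    # should now include 4505-4506/tcp
```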

   RHEL 6 / CentOS 6
       The  lokkit command packaged with some Linux distributions makes opening iptables firewall
       ports very simple via the command line. Just be careful to not  lock  out  access  to  the
       server by neglecting to open the ssh port.

       lokkit example:

          lokkit -p 22:tcp -p 4505:tcp -p 4506:tcp

       The  system-config-firewall-tui  command  provides a text-based interface to modifying the
       firewall.

       system-config-firewall-tui:

          system-config-firewall-tui

   openSUSE
       Salt installs  firewall  rules  in  /etc/sysconfig/SuSEfirewall2.d/services/salt.   Enable
       with:

          SuSEfirewall2 open
          SuSEfirewall2 start

       If  you  have an older package of Salt where the above configuration file is not included,
       the SuSEfirewall2 command makes opening  iptables  firewall  ports  very  simple  via  the
       command line.

       SuSEfirewall example:

          SuSEfirewall2 open EXT TCP 4505
          SuSEfirewall2 open EXT TCP 4506

       The firewall module in YaST2 provides a text-based interface to modifying the firewall.

       YaST2:

          yast2 firewall

   Windows
       Windows  Firewall  is the default component of Microsoft Windows that provides firewalling
       and packet filtering. There are many 3rd party firewalls available for  Windows,  some  of
       which  use  rules  from  the  Windows  Firewall.  If you are experiencing problems see the
       vendor’s specific documentation for opening the required ports.

       The Windows Firewall can be configured using the Windows Interface  or  from  the  command
       line.

       Windows Firewall (interface):

       1. Open  the Windows Firewall Interface by typing wf.msc at the command prompt or in a run
          dialog (Windows Key + R)

       2. Navigate to Inbound Rules in the console tree

       3. Add a new rule by clicking New Rule… in the Actions area

       4. Change the Rule Type to Port. Click Next

       5. Set the Protocol to TCP and specify local ports 4505-4506. Click Next

       6. Set the Action to Allow the connection. Click Next

       7. Apply the rule to Domain, Private, and Public. Click Next

       8. Give the new rule a Name, e.g. Salt. You may also add a description. Click Finish

       Windows Firewall (command line):

       The Windows Firewall rule can be created by issuing a single command.  Run  the  following
       command from the command line or a run prompt:

          netsh advfirewall firewall add rule name="Salt" dir=in action=allow protocol=TCP localport=4505-4506
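       The rule can later be inspected or removed through the same netsh interface, matching on
       the rule name given above:

```shell
netsh advfirewall firewall show rule name="Salt"
netsh advfirewall firewall delete rule name="Salt"
```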

   iptables
       Different  Linux  distributions  store  their  iptables (also known as netfilter) rules in
       different places, which makes it difficult to standardize firewall documentation. Included
       are some of the more common locations, but your mileage may vary.

       Fedora / RHEL / CentOS:

          /etc/sysconfig/iptables

       Arch Linux:

          /etc/iptables/iptables.rules

       Debian

       Follow these instructions: https://wiki.debian.org/iptables

       Once  you’ve  found  your  firewall rules, you’ll need to add the two lines below to allow
       traffic on tcp/4505 and tcp/4506:

          -A INPUT -m state --state new -m tcp -p tcp --dport 4505 -j ACCEPT
          -A INPUT -m state --state new -m tcp -p tcp --dport 4506 -j ACCEPT
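       As an illustrative sketch, the same two rules can be generated from a loop; a scratch
       file stands in for your distribution's rules file (e.g. /etc/sysconfig/iptables):

```shell
# Write an ACCEPT rule for each Salt port into a scratch rules file.
RULES="$(mktemp)"
for port in 4505 4506; do
    printf -- '-A INPUT -m state --state new -m tcp -p tcp --dport %s -j ACCEPT\n' "$port" >> "$RULES"
done
cat "$RULES"
```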

       Ubuntu

       Salt installs firewall rules in /etc/ufw/applications.d/salt.ufw. Enable with:

          ufw allow salt

   pf.conf
       The BSD-family of operating  systems  uses  packet  filter  (pf).  The  following  example
       describes the additions to pf.conf needed to access the Salt master.

          pass in on $int_if proto tcp from any to $int_if port 4505
          pass in on $int_if proto tcp from any to $int_if port 4506

       Once  these  additions  have  been made to the pf.conf the rules will need to be reloaded.
       This can be done using the pfctl command.

          pfctl -vf /etc/pf.conf

   Whitelist communication to Master
       There are situations where you want to selectively  allow  Minion  traffic  from  specific
       hosts  or  networks  into  your  Salt Master. The first scenario which comes to mind is to
       prevent unwanted traffic to your Master out of security concerns, but another scenario  is
       to  handle  Minion  upgrades  when  there  are  backwards incompatible changes between the
       installed Salt versions in your environment.

       Here is an example Linux iptables ruleset to be set on the Master:

          # Allow Minions from these networks
          -I INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
          -I INPUT -s 10.1.3.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
          # Allow Salt to communicate with Master on the loopback interface
          -A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT
          # Reject everything else
          -A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT

       NOTE:
          The important thing to note here is that the salt command needs to communicate with the
          listening  network  socket  of  salt-master on the loopback interface. Without this you
          will see no outgoing Salt  traffic  from  the  master,  even  for  a  simple  salt  '*'
          test.ping,  because  the  salt client never reached the salt-master to tell it to carry
          out the execution.

   Preseed Minion with Accepted Key
       In some situations, it is not convenient to wait for a minion to  start  before  accepting
       its  key  on the master. For instance, you may want the minion to bootstrap itself as soon
       as it comes online. You may also want to let your  developers  provision  new  development
       machines on the fly.

       SEE ALSO:
          Many ways to preseed minion keys

          Salt  has  other  ways to generate and pre-accept minion keys in addition to the manual
          steps outlined below.

          salt-cloud performs these same steps automatically  when  new  cloud  VMs  are  created
          (unless instructed not to).

          salt-api  exposes  an  HTTP  call  to  Salt’s REST API to generate and download the new
          minion keys as a tarball.

       There is a general four-step process to do this:

       1. Generate the keys on the master:

          root@saltmaster# salt-key --gen-keys=[key_name]

       Pick a name for the key, such as the minion’s id.

       2. Add the public key to the accepted minion folder:

          root@saltmaster# cp key_name.pub /etc/salt/pki/master/minions/[minion_id]

       It is necessary that the public key file has the same name as your minion id.  This is how
       Salt  matches  minions  with  their  keys.  Also  note  that  the pki folder could be in a
       different location, depending on your OS or if specified in the master config file.

       3. Distribute the minion keys.

       There is no single method to get the keypair to your minion.  The difficulty is finding  a
       distribution  method  which is secure. For Amazon EC2 only, an AWS best practice is to use
       IAM       Roles       to       pass       credentials.       (See        blog        post,
       http://blogs.aws.amazon.com/security/post/Tx610S2MLVZWEA/Using-IAM-roles-to-distribute-non-AWS-credentials-to-your-EC2-instances
       )

          Security Warning

                 Since the minion key is already accepted on the master, distributing the private
                 key  poses a potential security risk. A malicious party will have access to your
                 entire state tree and other sensitive data if they gain access  to  a  preseeded
                 minion key.

       4. Preseed the Minion with the keys

       You will want to place the minion keys before starting the salt-minion daemon:

          /etc/salt/pki/minion/minion.pem
          /etc/salt/pki/minion/minion.pub

       Once  in  place,  you should be able to start salt-minion and run salt-call state.apply or
       any other salt commands that require master authentication.
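       The four steps can be sketched end to end for a hypothetical minion id of minion1. The
       scp/ssh transfer is only one possible distribution channel (mind the security warning
       above), and all paths assume the defaults:

```shell
# 1. Generate the keypair on the master
salt-key --gen-keys=minion1
# 2. Pre-accept the public key (the file name must match the minion id)
cp minion1.pub /etc/salt/pki/master/minions/minion1
# 3. Distribute both keys to the minion over a secure channel
scp minion1.pem minion1.pub root@minion1:/etc/salt/pki/minion/
# 4. Rename them into place before starting salt-minion
ssh root@minion1 'cd /etc/salt/pki/minion && mv minion1.pem minion.pem && mv minion1.pub minion.pub'
```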

   The macOS (Maverick) Developer Step By Step Guide To Salt Installation
       This document provides a step-by-step guide to installing a  Salt  cluster  consisting  of
       one master, and one minion running on a local VM hosted on macOS.

       NOTE:
          This  guide  is  aimed  at  developers  who wish to run Salt in a virtual machine.  The
          official (Linux) walkthrough can be found here.

   The 5 Cent Salt Intro
       Since you’re here you’ve probably already heard about Salt, so you already know Salt  lets
       you  configure and run commands on hordes of servers easily.  Here’s a brief overview of a
       Salt cluster:

       · Salt works by having a “master” server sending commands  to  one  or  multiple  “minion”
         servers.  The  master  server is the “command center”. It is going to be the place where
         you store your configuration files, aka: “which server is  the  db,  which  is  the  web
         server, and what libraries and software they should have installed”. The minions receive
         orders from the master.  Minions are the  servers  actually  performing  work  for  your
         business.

       · Salt has two types of configuration files:

          1. the “salt communication channels” or “meta” or “config” configuration files (not
          official names): one for the master (usually /etc/salt/master, on the master server),
          and one for minions (default is /etc/salt/minion or /etc/salt/minion.conf, on the
          minion servers). Those files are used to determine things like the Salt Master IP,
          port, Salt folder locations, etc. If these are configured incorrectly, your minions
          will probably be unable to receive orders from the master, or the master will not know
          which software a given minion should install.

          2. the “business” or “service” configuration files (once again, not an official name):
          these are configuration files, ending with the “.sls” extension, that describe which
          software should run on which server, along with particular configuration properties
          for the software that is being installed. These files should be created in the
          /srv/salt folder by default, but their location can be changed using the
          /etc/salt/master configuration file!

       NOTE:
          This tutorial contains a third important configuration file, not to  be  confused  with
          the  previous  two: the virtual machine provisioning configuration file. This in itself
          is not specifically tied to Salt, but it also contains some Salt configuration. More on
          that  in  step 3. Also note that all configuration files are YAML files. So indentation
          matters.

       NOTE:
          Salt also works with “masterless” configuration where a minion is autonomous (in  which
          case  salt  can  be  seen  as  a  local  configuration  tool),  or in “multiple master”
          configuration. See the documentation for more on that.

   Before Digging In, The Architecture Of The Salt Cluster
   Salt Master
       The “Salt master” server is going to be the macOS machine itself. Commands will be run
       from a terminal app, so Salt will need to be installed on the Mac. This is going to be
       more convenient for toying around with configuration files.

   Salt Minion
       We’ll only have one “Salt minion” server. It is going to be running on a  Virtual  Machine
       running on the Mac, using VirtualBox. It will run an Ubuntu distribution.

   Step 1 - Configuring The Salt Master On Your Mac
       Official Documentation

       Because Salt has a lot of dependencies that are not built into macOS, we will use
       Homebrew to install Salt. Homebrew is a package manager for the Mac; it’s great, use it
       (for this tutorial at least!). Some people spend a lot of time installing libs by hand to better
       understand dependencies, and then realize how useful a package  manager  is  once  they’re
       configuring  a  brand  new  machine  and  have  to  do it all over again. It also lets you
       uninstall things easily.

       NOTE:
          Brew is a Ruby program (Ruby is installed by default with your  Mac).  Brew  downloads,
          compiles,  and  links software. The linking phase is when compiled software is deployed
          on your machine. It may conflict with manually installed software,  especially  in  the
          /usr/local  directory.  It’s ok, remove the manually installed version then refresh the
          link by typing brew link 'packageName'. Brew has a brew doctor command  that  can  help
          you  troubleshoot. It’s a great command, use it often. Brew requires xcode command line
          tools. When you run brew the first time it asks you to  install  them  if  they’re  not
          already  on  your  system. Brew installs software in /usr/local/bin (system bins are in
          /usr/bin). In order to use those bins you need your $PATH to search there  first.  Brew
          tells you if your $PATH needs to be fixed.

       TIP:
          Use  the  keyboard  shortcut  cmd  +  shift  + period in the “open” macOS dialog box to
          display hidden files and folders, such as .profile.

   Install Homebrew
       Install Homebrew here http://brew.sh/

       Or just type

          ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

       Now type the following commands in your terminal (you may want to type brew  doctor  after
       each to make sure everything’s fine):

          brew install python
          brew install swig
          brew install zmq

       NOTE:
          zmq is ZeroMQ. It’s a fantastic library used for server to server network communication
          and is at the core of Salt efficiency.

   Install Salt
       You should now have everything ready to launch this command:

          pip install salt

       NOTE:
          There should be no need for sudo pip install salt. Brew installed Python for your user,
          so  you  should have all the access. In case you would like to check, type which python
          to  ensure  that  it’s  /usr/local/bin/python,  and   which   pip   which   should   be
          /usr/local/bin/pip.

       Now type python in a terminal, then import salt. There should be no errors. Now exit the
       Python terminal using exit().

   Create The Master Configuration
       If the default /etc/salt/master configuration file was not  created,  copy-paste  it  from
       here:
       http://docs.saltstack.com/ref/configuration/examples.html#configuration-examples-master

       NOTE:
          /etc/salt/master is a file, not a folder.

       Salt Master configuration changes. The Salt master needs a few customizations to be able
       to run on macOS:

          sudo launchctl limit maxfiles 4096 8192

       In the /etc/salt/master file, change max_open_files to 8192 (or just add the line
       max_open_files: 8192, with no quotes, if it doesn’t already exist).

       You should now be able to launch the Salt master:

          sudo salt-master --log-level=all

       There should be no errors when running the above command.

       NOTE:
          This command is supposed to be a daemon, but for toying around, we’ll keep  it  running
          on a terminal to monitor the activity.

       Now that the master is set, let’s configure a minion on a VM.

   Step 2 - Configuring The Minion VM
       The Salt minion is going to run on a Virtual Machine. There are a lot of software options
       that let you run virtual machines on a Mac, but for this tutorial we’re going to use
       VirtualBox. In addition to VirtualBox, we will use Vagrant, which allows you to create
       the base VM configuration.

       Vagrant lets you build ready to use VM images, starting from an OS image  and  customizing
       it using “provisioners”. In our case, we’ll use it to:

       · Download the base Ubuntu image

       · Install salt on that Ubuntu image (Salt is going to be the “provisioner” for the VM).

       · Launch the VM

       · SSH into the VM to debug

       · Stop the VM once you’re done.

   Install VirtualBox
       Go get it here: https://www.virtualbox.org/wiki/Downloads (click on VirtualBox for macOS
       hosts => x86/amd64)

   Install Vagrant
       Go get it here: http://downloads.vagrantup.com/ and choose the latest  version  (1.3.5  at
       time  of  writing), then the .dmg file. Double-click to install it.  Make sure the vagrant
       command is found when run in the terminal. Type vagrant.  It  should  display  a  list  of
       commands.

   Create The Minion VM Folder
       Create a folder in which you will store your minion’s VM. In this tutorial, it’s going
       to be a minion folder in the $HOME directory.

          cd $HOME
          mkdir minion

   Initialize Vagrant
       From the minion folder, type

          vagrant init

       This command creates a default Vagrantfile configuration  file.  This  configuration  file
       will be used to pass configuration parameters to the Salt provisioner in Step 3.

   Import Precise64 Ubuntu Box
          vagrant box add precise64 http://files.vagrantup.com/precise64.box

       NOTE:
          This  box  is added at the global Vagrant level. You only need to do it once as each VM
          will use this same file.

   Modify the Vagrantfile
       Modify ./minion/Vagrantfile to use the precise64 box. Change the config.vm.box line to:

          config.vm.box = "precise64"

       Uncomment the line creating a host-only IP. This is the IP of your minion (you can
       change it to something else if that IP is already in use):

          config.vm.network :private_network, ip: "192.168.33.10"

       At this point you should have a VM that can run, although there won’t be much in it. Let’s
       check that.

   Checking The VM
       From the $HOME/minion folder type:

          vagrant up

       A log showing the VM booting should be present. Once it’s  done  you’ll  be  back  to  the
       terminal:

          ping 192.168.33.10

       The VM should respond to your ping request.

       Now log into the VM in ssh using Vagrant again:

          vagrant ssh

       You  should  see  the  shell  prompt  change  to something similar to vagrant@precise64:~$
       meaning you’re inside the VM. From there, enter the following:

          ping 10.0.2.2

       NOTE:
          That IP is the IP of your VM host (the macOS host). The number is a VirtualBox default
          and is displayed in the log after the vagrant ssh command. We’ll use that IP to tell
          the minion where the Salt master is. Once you’re done, end the ssh session by typing
          exit.

       It’s now time to connect the VM to the Salt master.

   Step 3 - Connecting Master and Minion
   Creating The Minion Configuration File
       Create the /etc/salt/minion file. In that file, put the following lines, giving the ID for
       this minion, and the IP of the master:

          master: 10.0.2.2
          id: 'minion1'
          file_client: remote

       Minions authenticate with the master using keys. Keys are generated automatically if you
       don’t provide them, and you can accept them on the master later. However, this requires
       accepting the minion key every time the minion is destroyed or created (which could be
       quite often). A better way is to create those keys in advance, feed them to the minion,
       and authorize them once.

   Preseed minion keys
       From the minion folder on your Mac run:

          sudo salt-key --gen-keys=minion1

       This should create two files: minion1.pem and minion1.pub. Those files were created
       using sudo but will be used by vagrant, so you need to change their ownership:

          sudo chown youruser:yourgroup minion1.pem
          sudo chown youruser:yourgroup minion1.pub

       Then copy the .pub file into the list of accepted minions:

          sudo cp minion1.pub /etc/salt/pki/master/minions/minion1
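       At this point the key is already pre-accepted on the master; salt-key can confirm this by
       listing keys grouped by state:

```shell
sudo salt-key -L    # minion1 should appear among the accepted keys
```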

   Modify Vagrantfile to Use Salt Provisioner
       Let’s  now modify the Vagrantfile used to provision the Salt VM. Add the following section
       in the Vagrantfile (note: it should  be  at  the  same  indentation  level  as  the  other
       properties):

          # salt-vagrant config
          config.vm.provision :salt do |salt|
              salt.run_highstate = true
              salt.minion_config = "/etc/salt/minion"
              salt.minion_key = "./minion1.pem"
              salt.minion_pub = "./minion1.pub"
          end

       Now destroy the VM and recreate it from the /minion folder:

          vagrant destroy
          vagrant up

       If everything is fine you should see the following message:

          Bootstrapping Salt... (this may take a while)
          Salt successfully configured and installed!

   Checking Master-Minion Communication
       To make sure the master and minion are talking to each other, enter the following:

          sudo salt '*' test.ping

       You should see your minion answering the ping. It’s now time to do some configuration.

   Step 4 - Configure Services to Install On the Minion
       In this step we’ll use the Salt master to instruct our minion to install Nginx.

   Checking the system’s original state
       First, make sure that an HTTP server is not installed on our minion. When opening a
       browser directed at http://192.168.33.10/ you should get an error saying the site cannot
       be reached.

   Initialize the top.sls file
       System configuration is done in /srv/salt/top.sls (and subfiles/folders), and then applied
       by running the state.apply function to have the Salt master order its  minions  to  update
       their instructions and run the associated commands.

       First, create an empty file on your Salt master (macOS machine):

          touch /srv/salt/top.sls

       When the file is empty, or if no configuration is found for our minion, an error is
       reported:

          sudo salt 'minion1' state.apply

       This should return an error stating: No Top file or external nodes data matches found.

   Create The Nginx Configuration
       Now is finally the time to enter the real meat of our server’s  configuration.   For  this
       tutorial our minion will be treated as a web server that needs to have Nginx installed.

       Insert the following lines into /srv/salt/top.sls (which should currently be empty):

          base:
            'minion1':
              - bin.nginx

       Now create /srv/salt/bin/nginx.sls containing the following:

          nginx:
            pkg.installed:
              - name: nginx
            service.running:
              - enable: True
              - reload: True

   Check Minion State
       Finally, run the state.apply function again:

          sudo salt 'minion1' state.apply

       You  should  see  a  log showing that the Nginx package has been installed and the service
       configured. To prove it, open your browser  and  navigate  to  http://192.168.33.10/,  you
       should see the standard Nginx welcome page.

       Congratulations!

   Where To Go From Here
       A  full description of configuration management within Salt (sls files among other things)
       is available here: http://docs.saltstack.com/en/latest/index.html#configuration-management

   Running Salt as a Normal User Tutorial
       Before continuing make sure  you  have  a  working  Salt  installation  by  following  the
       installation and the configuration instructions.

          Stuck?

                 There  are  many  ways to get help from the Salt community including our mailing
                 list and our IRC channel #salt.

   Running Salt Functions as a Non-root User
       If you don’t want to run salt-cloud as root, or even install it, you can configure it to
       have a virtual root in your working directory.

       The Salt system uses the salt.syspaths module to find these variables.

       If you build Salt, the module will be generated in:

          ./build/lib.linux-x86_64-2.7/salt/_syspaths.py

       To generate it, run the command:

          python setup.py build

       Copy the generated module into your salt directory:

          cp ./build/lib.linux-x86_64-2.7/salt/_syspaths.py salt/_syspaths.py

       Edit it to include the needed variables and your new paths:

          # you need to edit this
          ROOT_DIR = *your current dir* + '/salt/root'

          # you need to edit this
          INSTALL_DIR = *location of source code*

          CONFIG_DIR =  ROOT_DIR + '/etc/salt'
          CACHE_DIR = ROOT_DIR + '/var/cache/salt'
          SOCK_DIR = ROOT_DIR + '/var/run/salt'
          SRV_ROOT_DIR= ROOT_DIR + '/srv'
          BASE_FILE_ROOTS_DIR = ROOT_DIR + '/srv/salt'
          BASE_PILLAR_ROOTS_DIR = ROOT_DIR + '/srv/pillar'
          BASE_MASTER_ROOTS_DIR = ROOT_DIR + '/srv/salt-master'
          LOGS_DIR = ROOT_DIR + '/var/log/salt'
          PIDFILE_DIR = ROOT_DIR + '/var/run'
          CLOUD_DIR = INSTALL_DIR + '/cloud'
          BOOTSTRAP = CLOUD_DIR + '/deploy/bootstrap-salt.sh'

       Create the directory structure

          mkdir -p root/etc/salt root/var/cache/salt root/var/run/salt root/srv \
              root/srv/salt root/srv/pillar root/srv/salt-master root/var/log/salt root/var/run

       Populate the configuration files:

          cp -r conf/* root/etc/salt/

       Edit your root/etc/salt/master configuration that is used by salt-cloud:

          user: *your user name*

       Run like this:

          PYTHONPATH=`pwd` scripts/salt-cloud

   Standalone Minion
       Since  the  Salt  minion  contains such extensive functionality it can be useful to run it
       standalone. A standalone minion can be used to do a number of things:

       · Use salt-call commands on a system without connectivity to a master

       · Masterless States, run states entirely from files local to the minion

       NOTE:
          When running Salt in masterless mode, do not run the salt-minion daemon.  Otherwise, it
          will  attempt  to connect to a master and fail. The salt-call command stands on its own
          and does not need the salt-minion daemon.

   Minion Configuration
       Throughout this document there are several references  to  setting  different  options  to
       configure a masterless Minion. Salt Minions are easy to configure via a configuration file
       that is located, by default, in /etc/salt/minion.  Note, however, that on FreeBSD systems,
       the minion configuration file is located in /usr/local/etc/salt/minion.

       You  can  learn more about minion configuration options in the Configuring the Salt Minion
       docs.

   Telling Salt Call to Run Masterless
       The salt-call command is used to run module functions  locally  on  a  minion  instead  of
       executing  them  from the master. Normally the salt-call command checks into the master to
       retrieve file server and pillar data, but when running standalone salt-call  needs  to  be
       instructed to not check the master for this data. To instruct the minion to not look for a
       master when running salt-call the file_client configuration option needs to  be  set.   By
       default  the  file_client  is  set to remote so that the minion knows that file server and
       pillar data are to be gathered from the master. When setting  the  file_client  option  to
       local the minion is configured to not gather this data from the master.

          file_client: local

       Now the salt-call command will not look for a master and will assume that the local system
       has all of the file and pillar resources.
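       A quick way to confirm local mode is to run a simple salt-call, for example querying a
       grain with the standard grains.item function; no master is contacted:

```shell
salt-call grains.item os
```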

   Running States Masterless
       The state system can be easily run without a Salt master, with all needed files  local  to
       the  minion.  To  do  this the minion configuration file needs to be set up to know how to
       return file_roots  information  like  the  master.  The  file_roots  setting  defaults  to
       /srv/salt for the base environment just like on the master:

          file_roots:
            base:
              - /srv/salt

       Now set up the Salt State Tree, top file, and SLS modules in the same way that they would
       be set up on a master. With the file_client option set to local and an available state
       tree, calls to functions in the state module will use the information in the file_roots
       on the minion instead of checking in with the master.

       Remember that when creating a state tree on a minion there are no syntax or path changes
       needed; SLS modules written to be used from a master do not need to be modified in any
       way to work with a minion.

       This makes it easy to “script” deployments with Salt states without having  to  set  up  a
       master,  and  allows  for  these  SLS modules to be easily moved into a Salt master as the
       deployment grows.

       The declared state can now be executed with:

          salt-call state.apply

       Or the salt-call command can be executed with the --local flag, which makes it
       unnecessary to change the configuration file:

          salt-call state.apply --local

   External Pillars
       External pillars are supported when running in masterless mode.

   Salt Masterless Quickstart
       Running a masterless salt-minion lets you use Salt’s configuration management for a single
       machine without calling out to a Salt master on another machine.

       Since the Salt minion contains such extensive functionality it can be  useful  to  run  it
       standalone. A standalone minion can be used to do a number of things:

       · Stand up a master server via States (Salting a Salt Master)

       · Use salt-call commands on a system without connectivity to a master

       · Masterless States, run states entirely from files local to the minion

       It is also useful for testing out state trees before deploying to a production setup.

   Bootstrap Salt Minion
       The  salt-bootstrap script makes bootstrapping a server with Salt simple for any OS with a
       Bourne shell:

          curl -L https://bootstrap.saltstack.com -o bootstrap_salt.sh
          sudo sh bootstrap_salt.sh

       See the salt-bootstrap documentation for other one-liners. When using Vagrant to test out
       Salt, the Vagrant salt provisioner will provision the VM for you.

   Telling Salt to Run Masterless
       To  instruct  the  minion  to  not look for a master, the file_client configuration option
       needs to be set in the minion configuration file.  By default the file_client  is  set  to
       remote  so that the minion gathers file server and pillar data from the salt master.  When
       setting the file_client option to local the minion is configured to not gather  this  data
       from the master.

          file_client: local

       Now  the  salt minion will not look for a master and will assume that the local system has
       all of the file and pillar resources.

       Configuration which resided in the master configuration (e.g. /etc/salt/master) should  be
       moved to the minion configuration since the minion does not read the master configuration.
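       For example, settings such as file_roots or pillar_roots that would otherwise live in
       /etc/salt/master move into /etc/salt/minion (the values below are the usual defaults,
       shown for illustration):

```yaml
# /etc/salt/minion -- master-side settings relocated for masterless use
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar
```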

       NOTE:
          When running Salt in masterless mode, do not run the salt-minion daemon.  Otherwise, it
          will attempt to connect to a master and fail. The salt-call command stands on  its  own
          and does not need the salt-minion daemon.

   Create State Tree
       Following the successful installation of a salt-minion, the next step is to create a state
       tree, which is where the SLS files that comprise the possible states  of  the  minion  are
       stored.

       The  following  example  walks  through  the  steps  necessary to create a state tree that
       ensures that the server has the Apache webserver installed.

       NOTE:
          For a complete explanation on Salt States, see the tutorial.

       1. Create the top.sls file:

       /srv/salt/top.sls:

          base:
            '*':
              - webserver

       2. Create the webserver state tree:

       /srv/salt/webserver.sls:

          apache:               # ID declaration
            pkg:                # state declaration
              - installed       # function declaration

       NOTE:
          The apache package  has  different  names  on  different  platforms,  for  instance  on
          Debian/Ubuntu it is apache2, on Fedora/RHEL it is httpd and on Arch it is apache
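       One way to handle the differing names is a small Jinja conditional in the SLS file; the
       following is a sketch (the os_family grain test is an illustration, not part of this
       walkthrough):

```yaml
# hypothetical cross-platform variant of /srv/salt/webserver.sls
apache:
  pkg.installed:
    - name: {{ 'apache2' if grains['os_family'] == 'Debian'
               else 'httpd' if grains['os_family'] == 'RedHat'
               else 'apache' }}
```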

       The only thing left is to provision our minion using salt-call.

   Salt-call
       The  salt-call  command  is  used  to  run  remote execution functions locally on a minion
       instead of executing them from the master. Normally the salt-call command checks into  the
       master  to  retrieve  file  server  and pillar data, but when running standalone salt-call
       needs to be instructed to not check the master for this data:

          salt-call --local state.apply

       The --local flag tells salt-call to look for the state tree in the local file system and
       not to contact a Salt master for instructions.

       To provide verbose output, use -l debug:

          salt-call --local state.apply -l debug

       The  minion  first examines the top.sls file and determines that it is a part of the group
       matched by * glob and that the webserver SLS should be applied.

       It then examines the webserver.sls file and finds the apache  state,  which  installs  the
       Apache package.

       The minion should now have Apache installed, and the next step is to begin learning how to
       write more complex states.

   Dependencies
       Salt should run on any Unix-like platform so long as the dependencies are met.

       · Python - >= 2.7, < 3.0

       · msgpack-python - High-performance message interchange format

       · YAML - Python YAML bindings

       · Jinja2 - parsing Salt States (configurable in the master settings)

       · MarkupSafe - Implements a XML/HTML/XHTML Markup safe string for Python

       · apache-libcloud - Python lib for interacting with many  of  the  popular  cloud  service
         providers using a unified API

       · Requests - HTTP library

       · Tornado - Web framework and asynchronous networking library

       · futures - Backport of the concurrent.futures package from Python 3.2

       Depending on the chosen Salt transport, ZeroMQ or RAET, dependencies vary:

       · ZeroMQ:

         · ZeroMQ >= 3.2.0

         · pyzmq >= 2.2.0 - ZeroMQ Python bindings

         · PyCrypto - The Python cryptography toolkit

       · RAET:

         · libnacl - Python bindings to libsodium

         · ioflo - The flo programming interface raet and salt-raet is built on

          · RAET - The world's most awesome UDP protocol

       Salt  defaults  to  the  ZeroMQ transport, and the choice can be made at install time, for
       example:

          python setup.py --salt-transport=raet install

       This way, the setup script pulls in only the dependencies required for the chosen
       transport.

       If installing using pip, the --salt-transport install option can be provided like:

          pip install --install-option="--salt-transport=raet" salt

       NOTE:
          Salt does not bundle dependencies that are typically distributed as part  of  the  base
          OS.  If you have unmet dependencies and are using a custom or minimal installation, you
          might need to install some additional packages from your OS vendor.

   Optional Dependencies
       · mako - an optional parser for Salt States (configurable in the master settings)

       · gcc - dynamic Cython module compiling

   Upgrading Salt
       When  upgrading  Salt,  the  master(s)  should  always  be   upgraded   first.    Backward
       compatibility  for  minions  running  newer  versions  of  salt  than their masters is not
       guaranteed.

       Whenever possible, backward compatibility between new masters  and  old  minions  will  be
       preserved.   Generally,  the  only  exception  to  this  policy  is  in case of a security
       vulnerability.

       SEE ALSO:
          Installing Salt for development and contributing to the project.

   Building Packages using Salt Pack
       Salt-pack is an open-source package builder for the most commonly used Linux platforms,
       for example the RedHat/CentOS and Debian/Ubuntu families. It utilizes SaltStack states
       and execution modules to build Salt and a specified set of dependencies, from which a
       platform-specific repository can be built.

       https://github.com/saltstack/salt-pack

CONFIGURING SALT

       This section explains how to configure user access, view and store job results, secure and
       troubleshoot, and how to perform many other administrative tasks.

   Configuring the Salt Master
       The Salt system is amazingly simple and easy to configure: the two components of the
       Salt system each have a respective configuration file. The salt-master is configured via
       the master configuration file, and the salt-minion is configured via the minion
       configuration file.

       SEE ALSO:
          Example master configuration file.

       The  configuration  file  for the salt-master is located at /etc/salt/master by default. A
       notable  exception  is   FreeBSD,   where   the   configuration   file   is   located   at
       /usr/local/etc/salt. The available options are as follows:

   Primary Master Configuration
   interface
       Default: 0.0.0.0 (all interfaces)

       The local interface to bind to; must be an IP address.

          interface: 192.168.0.1

   ipv6
       Default: False

       Whether  the  master  should  listen  for  IPv6  connections.  If this is set to True, the
       interface option must be adjusted too (for example: interface: '::')

          ipv6: True

   publish_port
       Default: 4505

       The network port to set up the publication interface.

          publish_port: 4505

   master_id
       Default: None

       The id to be passed in the publish job to minions. This is used for MultiSyndics to return
       the job to the requesting master.

       NOTE:
          This must be the same string as the syndic is configured with.

          master_id: MasterOfMaster

   user
       Default: root

       The user to run the Salt processes.

          user: root

   enable_ssh_minions
       Default: False

       Tell the master to also use salt-ssh when running commands against minions.

          enable_ssh_minions: True

       NOTE:
          Cross-minion communication is still not possible.  The Salt mine and publish.publish do
          not work between minion types.

   ret_port
       Default: 4506

       The port used by the return server. This is the server used by Salt to receive execution
       returns and command executions.

          ret_port: 4506

   pidfile
       Default: /var/run/salt-master.pid

       Specify the location of the master pidfile.

          pidfile: /var/run/salt-master.pid

   root_dir
       Default: /

       The system root directory to operate from; change this to make Salt run from an
       alternative root.

          root_dir: /

       NOTE:
          This directory is prepended to the  following  options:  pki_dir,  cachedir,  sock_dir,
          log_file, autosign_file, autoreject_file, pidfile, autosign_grains_dir.

   conf_file
       Default: /etc/salt/master

       The path to the master’s configuration file.

          conf_file: /etc/salt/master

   pki_dir
       Default: /etc/salt/pki/master

       The directory to store the pki authentication keys.

          pki_dir: /etc/salt/pki/master

   extension_modules
       Changed in version 2016.3.0: The default location for this directory has been moved. Prior
       to this version, the location was a directory named extmods in the Salt cachedir (on  most
       platforms,  /var/cache/salt/extmods).  It has been moved into the master cachedir (on most
       platforms, /var/cache/salt/master/extmods).

       Directory for custom modules. This directory can contain subdirectories for each of Salt’s
       module  types  such as runners, output, wheel, modules, states, returners, engines, utils,
       etc.  This path is appended to root_dir.

          extension_modules: /root/salt_extmods

   extmod_whitelist/extmod_blacklist
       New in version 2017.7.0.

       By using this dictionary, the modules that are synced to the master’s extmod cache using
       saltutil.sync_* can be limited. If no list is set for a specific type, then all modules
       of that type are accepted. To block all modules of a specific type, whitelist an empty
       list.

          extmod_whitelist:
            modules:
              - custom_module
            engines:
              - custom_engine
            pillars: []

          extmod_blacklist:
            modules:
              - specific_module

       Valid options:

              · modules

              · states

              · grains

              · renderers

              · returners

              · output

              · proxy

              · runners

              · wheel

              · engines

              · queues

              · pillar

              · utils

              · sdb

              · cache

              · clouds

              · tops

              · roster

              · tokens

   module_dirs
       Default: []

       Like extension_modules, but a list of extra directories to search for Salt modules.

          module_dirs:
            - /var/cache/salt/minion/extmods

   cachedir
       Default: /var/cache/salt/master

       The location used to  store  cache  information,  particularly  the  job  information  for
       executed salt commands.

       This directory may contain sensitive data and should be protected accordingly.

          cachedir: /var/cache/salt/master

   verify_env
       Default: True

       Verify and set permissions on configuration directories at startup.

          verify_env: True

   keep_jobs
       Default: 24

       Set  the  number  of hours to keep old job information. Note that setting this option to 0
       disables the cache cleaner.

          keep_jobs: 24

   gather_job_timeout
       New in version 2014.7.0.

       Default: 10

       The number of seconds to wait when the client  is  requesting  information  about  running
       jobs.

          gather_job_timeout: 10

   timeout
       Default: 5

       Set the default timeout for the salt command and api.
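       Following the convention of the other options, an explicit declaration using the default
       value would be:

```yaml
timeout: 5
```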

   loop_interval
       Default: 60

       The  loop_interval  option controls the seconds for the master’s maintenance process check
       cycle. This process updates file server backends, cleans the job cache  and  executes  the
       scheduler.
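       For example, to keep the default check cycle of one minute:

```yaml
loop_interval: 60
```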

   output
       Default: nested

       Set the default outputter used by the salt command.
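       For example, to state the default outputter explicitly:

```yaml
output: nested
```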

   outputter_dirs
       Default: []

       A list of additional directories to search for salt outputters in.

          outputter_dirs: []

   output_file
       Default: None

       Set the default output file used by the salt command. The default is to output to the
       CLI and not to a file. This functions the same way as the --out-file CLI option, but
       sets a single file for all salt commands.

          output_file: /path/output/file

   show_timeout
       Default: True

       Tell the client to show minions that have timed out.

          show_timeout: True

   show_jid
       Default: False

       Tell the client to display the jid when a job is published.

          show_jid: False

   color
       Default: True

       By default output is colored, to disable colored output set the color value to False.

          color: False

   color_theme
       Default: ""

       Specifies a path to the color theme to use for colored command line output.

          color_theme: /etc/salt/color_theme

   cli_summary
       Default: False

       When  set  to  True,  displays  a summary of the number of minions targeted, the number of
       minions returned, and the number of minions that did not return.

          cli_summary: False

   sock_dir
       Default: /var/run/salt/master

       Set the location to use for creating Unix sockets for master process communication.

          sock_dir: /var/run/salt/master

   enable_gpu_grains
       Default: False

       Enable GPU hardware data for your master. Be aware that the master can  take  a  while  to
       start up when lspci and/or dmidecode is used to populate the grains for the master.

          enable_gpu_grains: True

   job_cache
       Default: True

       The master maintains a temporary job cache. While this is a great addition, it can be a
       burden on the master for larger deployments (over 5000 minions). Disabling the job cache
       will make previously executed jobs unavailable to the jobs system and is not generally
       recommended. Normally it is wise to make sure the master has access to a faster IO
       system, or that a tmpfs is mounted to the jobs dir.

          job_cache: True

       NOTE:
          Setting the job_cache to False will not cache minion returns, but the JID directory for
          each job is still created. The creation of the JID  directories  is  necessary  because
          Salt  uses  those  directories  to  check for JID collisions. By setting this option to
          False, the job cache directory, which is /var/cache/salt/master/jobs/ by default,  will
          be smaller, but the JID directories will still be present.

          Note  that  the  keep_jobs  option can be set to a lower value, such as 1, to limit the
          number of hours jobs are stored in the job cache. (The default is 24 hours.)

          Please see the Managing the Job Cache documentation for more information.

   minion_data_cache
       Default: True

       The minion data cache is a cache of information about the minions stored on the master;
       this information is primarily the pillar, grains, and mine data. The data is cached via
       the cache subsystem in the master cachedir under the name of the minion, or in a
       supported database. The data is used to predetermine which minions are expected to reply
       to executions.

          minion_data_cache: True

   cache
       Default: localfs

       Cache subsystem module to use for minion data cache.

          cache: consul

   memcache_expire_seconds
       Default: 0

       Memcache is an additional cache layer that keeps a limited amount of data fetched from
       the minion data cache in memory for a limited period of time, making cache operations
       faster. It doesn’t make much sense for the localfs cache driver, but it helps for more
       complex drivers such as consul.

       This option sets the expiration time for memcache items. By default it is set to 0,
       which disables the memcache.

          memcache_expire_seconds: 30

   memcache_max_items
       Default: 1024

       Set the memcache limit in items, where items are bank-key pairs; e.g. the list
       minion_0/data, minion_0/mine, minion_1/data contains 3 items. This value depends on the
       number of minions usually targeted in your environment. The best value can be found by
       analyzing the cache log with memcache_debug enabled.

          memcache_max_items: 1024

   memcache_full_cleanup
       Default: False

       If the cache storage becomes full, i.e. the item count exceeds the memcache_max_items
       value, memcache cleans up its storage. If this option is set to False, memcache removes
       only the single oldest value from its storage. If it is set to True, memcache removes
       all expired items, and also removes the oldest one if there are no expired items.

          memcache_full_cleanup: True

   memcache_debug
       Default: False

       Enable collecting memcache stats and logging them at the debug log level. If enabled,
       memcache collects information about how many fetch calls have been made and how many of
       them were memcache hits, and also outputs the hit rate, i.e. the first value divided by
       the second. This should help in choosing the right values for the expiration time and
       the cache size.

          memcache_debug: True

   ext_job_cache
       Default: ''

       Used to specify a default returner for all minions. When this option is set, the specified
       returner  needs  to  be properly configured and the minions will always default to sending
       returns to this returner. This will also disable the local job cache on the master.

          ext_job_cache: redis

   event_return
       New in version 2015.5.0.

       Default: ''

       Specify the returner(s) to use to log events. Each  returner  may  have  installation  and
       configuration requirements. Read the returner’s documentation.

       NOTE:
          Not  all  returners support event returns. Verify that a returner has an event_return()
          function before configuring this option with a returner.

          event_return:
            - syslog
            - splunk

   event_return_queue
       New in version 2015.5.0.

       Default: 0

       On busy systems, enabling event_returns can cause  a  considerable  load  on  the  storage
       system  for  returners. Events can be queued on the master and stored in a batched fashion
       using a single transaction for multiple events.  By default, events are not queued.

          event_return_queue: 0

   event_return_whitelist
       New in version 2015.5.0.

       Default: []

       Only return events matching tags in a whitelist.

       Changed in version 2016.11.0: Supports glob matching patterns.

          event_return_whitelist:
            - salt/master/a_tag
            - salt/run/*/ret

   event_return_blacklist
       New in version 2015.5.0.

       Default: []

       Store all event returns _except_ the tags in a blacklist.

       Changed in version 2016.11.0: Supports glob matching patterns.

          event_return_blacklist:
            - salt/master/not_this_tag
            - salt/wheel/*/ret

   max_event_size
       New in version 2014.7.0.

       Default: 1048576

       Passing very large events can cause the minion to consume large amounts  of  memory.  This
       value  tunes the maximum size of a message allowed onto the master event bus. The value is
       expressed in bytes.

          max_event_size: 1048576

   master_job_cache
       New in version 2014.7.0.

       Default: local_cache

       Specify the returner to use for the job cache. The job cache will only be interacted  with
       from the salt master and therefore does not need to be accessible from the minions.

          master_job_cache: redis

   job_cache_store_endtime
       New in version 2015.8.0.

       Default: False

       Specify whether the Salt Master should store end times for jobs as returns come in.

          job_cache_store_endtime: False

   enforce_mine_cache
       Default: False

       By default, the mine stops working when the minion_data_cache is disabled, since the
       mine is based on cached data. Enabling this option explicitly enables the cache for the
       mine system only.

          enforce_mine_cache: False

   max_minions
       Default: 0

       The  maximum  number  of minion connections allowed by the master. Use this to accommodate
       the number of minions per master if you have different  types  of  hardware  serving  your
       minions.  The  default  of  0 means unlimited connections.  Please note that this can slow
       down the authentication process a bit in large setups.

          max_minions: 100

   con_cache
       Default: False

       If max_minions is used in large  installations,  the  master  might  experience  high-load
       situations  because  of  having  to  check  the  number  of  connected  minions  for every
       authentication. This cache provides  the  minion-ids  of  all  connected  minions  to  all
       MWorker-processes and greatly improves the performance of max_minions.

          con_cache: True

   presence_events
       Default: False

       Causes  the  master  to periodically look for actively connected minions.  Presence events
       are fired on the event bus on a regular interval with a list of connected minions, as well
       as  events  with  lists  of newly connected or disconnected minions. This is a master-only
       operation that does not send executions to minions. Note, this  does  not  detect  minions
       that connect to a master via localhost.

          presence_events: False

   ping_on_rotate
       New in version 2014.7.0.

       Default: False

       By default, the master AES key rotates every 24 hours. The next command following a key
       rotation will trigger a key refresh from the minion, which may result in minions that do
       not respond to the first command after a key refresh.

       To  tell  the  master  to  ping  all  minions  immediately  after  an AES key refresh, set
       ping_on_rotate to True. This should mitigate the issue where a minion does not  appear  to
       initially respond after a key is rotated.

       Note  that  enabling  this  may  cause  high  load on the master immediately after the key
       rotation event as minions reconnect. Consider  this  carefully  if  this  salt  master  is
       managing a large number of minions.

       If  disabled,  it  is recommended to handle this event by listening for the aes_key_rotate
       event with the key tag and acting appropriately.

          ping_on_rotate: False

   transport
       Default: zeromq

       Changes the  underlying  transport  layer.  ZeroMQ  is  the  recommended  transport  while
       additional  transport  layers  are  under  development.  Supported values are zeromq, raet
       (experimental),  and  tcp  (experimental).  This  setting  has  a  significant  impact  on
       performance and should not be changed unless you know what you are doing!

          transport: zeromq

   transport_opts
       Default: {}

       (experimental) Starts multiple transports and overrides options for each transport with
       the provided dictionary. This setting has a significant impact on performance and should
       not be changed unless you know what you are doing!  The following example shows how to
       start a TCP transport alongside a ZMQ transport.

          transport_opts:
            tcp:
              publish_port: 4605
              ret_port: 4606
            zeromq: []

   master_stats
       Default: False

       Turning on the master stats enables runtime throughput and statistics events to  be  fired
       from the master event bus. These events will report on what functions have been run on the
       master and how long these runs have, on average, taken over a given period of time.
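       For example, to enable these statistics events:

```yaml
master_stats: True
```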

   master_stats_event_iter
       Default: 60

       The time in seconds between master_stats events. These events only fire when the master
       receives a request; idle masters will not fire them.
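       For example, to keep the default one-minute interval:

```yaml
master_stats_event_iter: 60
```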

   sock_pool_size
       Default: 1

       To avoid blocking while waiting to write data to a socket, Salt supports a socket pool
       for its applications. For example, a job with a large target host list can otherwise
       cause a long blocking wait. This option is used by the ZMQ and TCP transports; the other
       transport methods do not need a socket pool by definition. For most Salt tools,
       including the CLI, a single socket in the pool is sufficient. On the other hand, it is
       highly recommended to set the socket pool size larger than 1 for other Salt
       applications, especially the Salt API, which must write data to sockets concurrently.

          sock_pool_size: 15

   ipc_mode
       Default: ipc

       The IPC strategy (i.e., sockets versus TCP, etc.). Windows platforms lack POSIX IPC and
       must rely on TCP-based inter-process communication; ipc_mode is set to tcp by default on
       Windows.

          ipc_mode: ipc

   tcp_master_pub_port
       Default: 4512

       The TCP port on which events for the master should be published if ipc_mode is TCP.

          tcp_master_pub_port: 4512

   tcp_master_pull_port
       Default: 4513

       The TCP port on which events for the master should be pulled if ipc_mode is TCP.

          tcp_master_pull_port: 4513

   tcp_master_publish_pull
       Default: 4514

       The TCP port from which events for the master should be pulled and then republished
       onto the event bus on the master.

          tcp_master_publish_pull: 4514

   tcp_master_workers
       Default: 4515

       The TCP port for mworkers to connect to on the master.

          tcp_master_workers: 4515

   auth_events
       New in version 2017.7.3.

       Default: True

       Determines whether the master will fire authentication events.  Authentication events  are
       fired when a minion performs an authentication check with the master.

          auth_events: True

   minion_data_cache_events
       New in version 2017.7.3.

       Default: True

       Determines  whether  the  master  will  fire  minion data cache events.  Minion data cache
       events are fired when a minion requests a minion data cache refresh.

          minion_data_cache_events: True

   Salt-SSH Configuration
   roster
       Default: flat

       Define the default salt-ssh roster module to use

          roster: cache

   roster_defaults
       New in version 2017.7.0.

       Default settings which will be inherited by all rosters.

          roster_defaults:
            user: daniel
            sudo: True
            priv: /root/.ssh/id_rsa
            tty: True

   roster_file
       Default: /etc/salt/roster

       Pass in an alternative location for the salt-ssh flat roster file.

          roster_file: /root/roster

   rosters
       Default: None

       Define locations for flat roster files so they can be  chosen  when  using  Salt  API.  An
       administrator  can  place  roster files into these locations. Then, when calling Salt API,
       the roster_file parameter should contain a relative path  to  these  locations.  That  is,
       roster_file=/foo/roster  will  be  resolved  as  /etc/salt/roster.d/foo/roster  etc.  This
       feature prevents passing insecure custom rosters through the Salt API.

          rosters:
           - /etc/salt/roster.d
           - /opt/salt/some/more/rosters

   ssh_passwd
       Default: ''

       The ssh password to log in with.

          ssh_passwd: ''

   ssh_port
       Default: 22

       The target system’s ssh port number.

          ssh_port: 22

   ssh_scan_ports
       Default: 22

       Comma-separated list of ports to scan.

          ssh_scan_ports: 22

   ssh_scan_timeout
       Default: 0.01

       Scanning socket timeout for salt-ssh.

          ssh_scan_timeout: 0.01

   ssh_sudo
       Default: False

       Boolean to run command via sudo.

          ssh_sudo: False

   ssh_timeout
       Default: 60

       Number of seconds to wait for a response when establishing an SSH connection.

          ssh_timeout: 60

   ssh_user
       Default: root

       The user to log in as.

          ssh_user: root

   ssh_log_file
       New in version 2016.3.5.

       Default: /var/log/salt/ssh

       Specify the log file of the salt-ssh command.

          ssh_log_file: /var/log/salt/ssh

   ssh_minion_opts
       Default: None

       Pass in minion option overrides that will be inserted into the SHIM for salt-ssh calls.
       The local minion config is not used for salt-ssh. This can be overridden on a
       per-minion basis in the roster (minion_opts).

          ssh_minion_opts:
            gpg_keydir: /root/gpg

   ssh_use_home_key
       Default: False

       Set this to True to default  to  using  ~/.ssh/id_rsa  for  salt-ssh  authentication  with
       minions

          ssh_use_home_key: False

   ssh_identities_only
       Default: False

       Set this to True to have salt-ssh run with -o IdentitiesOnly=yes by default. This option
       is intended for situations where the ssh-agent offers many different identities; it
       allows ssh to ignore those identities and use only the one specified in options.

          ssh_identities_only: False

   ssh_list_nodegroups
       Default: {}

       List-only  nodegroups  for salt-ssh. Each group must be formed as either a comma-separated
       list, or a YAML list. This option is useful to group minions  into  easy-to-target  groups
       when  using  salt-ssh.  These  groups  can then be targeted with the normal -N argument to
       salt-ssh.

          ssh_list_nodegroups:
            groupA: minion1,minion2
            groupB: minion1,minion3

   thin_extra_mods
       Default: None

       List of additional modules to be included in the Salt Thin. Pass a list of importable
       Python modules that are typically located in the site-packages Python directory, so
       that they will always be included in the Salt Thin once it is generated.
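       For example (the module names here are illustrative assumptions, not requirements):

```yaml
thin_extra_mods: foo,bar
```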

   min_extra_mods
       Default: None

       Identical to thin_extra_mods, but applied to the Salt Minimal.
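       For example (again with illustrative module names):

```yaml
min_extra_mods: foo,bar
```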

   Master Security Settings
   open_mode
       Default: False

       Open mode is a dangerous security feature. One problem encountered with pki authentication
       systems is that keys can become “mixed up” and authentication begins to fail. Open mode
       turns off authentication and tells the master to accept all authentication attempts. This
       will clean up the pki keys received from the minions. Open mode should not be turned on for
       general use; it should only be used for a short period of time to clean up pki keys. To
       turn on open mode set this value to True.

          open_mode: False

   auto_accept
       Default: False

       Enable  auto_accept.  This setting will automatically accept all incoming public keys from
       minions.

          auto_accept: False

   keysize
       Default: 2048

       The size of key that should be generated when creating new keys.

          keysize: 2048

   autosign_timeout
       New in version 2014.7.0.

       Default: 120

       Time in minutes that an incoming public key with a matching name found in
       pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys are removed
       when the master checks the minion_autosign directory. This method of auto-accepting minions
       can be safer than an autosign_file because the keyid record can expire and is limited to
       being an exact name match. This should still be considered a less than secure option, due
       to the fact that trust is based on just the requesting minion id.
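       Following the convention of the surrounding options, the timeout is given in minutes:

```yaml
autosign_timeout: 120
```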

   autosign_file
       Default: not defined

       If the autosign_file is specified, incoming keys specified in the autosign_file will be
       automatically accepted. Matches will be searched for first by string comparison, then by
       globbing, then by full-string regex matching. This should still be considered a less than
       secure option, due to the fact that trust is based on just the requesting minion id.

       Changed in version 2018.3.0: For security reasons the file must be read-only except for
       its owner. If permissive_pki_access is True the owning group can also have write access,
       but if Salt is running as root it must be a member of that group. A less strict
       requirement also existed in previous versions.
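       A sketch with a hypothetical path; the file itself lists one match pattern per line:

```yaml
autosign_file: /etc/salt/autosign.conf
```

       Patterns in that file may be exact minion IDs, globs, or full-string regexes, matched in
       that order as described above.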

   autoreject_file
       New in version 2014.1.0.

       Default: not defined

       Works like autosign_file, but instead allows you to specify minion IDs for which keys will
       automatically be rejected. Will override both membership  in  the  autosign_file  and  the
       auto_accept setting.
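       A sketch with a hypothetical path:

```yaml
autoreject_file: /etc/salt/autoreject.conf
```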

   autosign_grains_dir
       New in version 2018.3.0.

       Default: not defined

       If the autosign_grains_dir is specified, incoming keys from minions with grain values that
       match those defined in files in the autosign_grains_dir will  be  accepted  automatically.
       Grain values that should be accepted automatically can be defined by creating a file named
       like the corresponding grain in the autosign_grains_dir and writing the values  into  that
       file,  one  value  per  line.   Lines  starting  with a # will be ignored.  Minion must be
       configured to send the corresponding grains  on  authentication.   This  should  still  be
       considered  a  less  than  secure  option, due to the fact that trust is based on just the
       requesting minion.

       Please see the Autoaccept Minions from Grains documentation for more information.

          autosign_grains_dir: /etc/salt/autosign_grains
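       To illustrate the file format described above (the grain name and values are
       hypothetical), a file named after the uuid grain could contain one accepted value per
       line:

```yaml
# /etc/salt/autosign_grains/uuid
# Lines starting with # are ignored; one accepted grain value per line.
8f7d68cb-30f7-4185-8d28-af210532d13a
1d3c5473-1fbc-479e-b0c7-877705a0730f
```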

   permissive_pki_access
       Default: False

       Enable permissive access to the salt keys. This allows you to run the master or minion  as
       root,  but  have  a  non-root  group  be  given access to your pki_dir. To make the access
       explicit, root must belong to the group you’ve given access to. This is potentially  quite
       insecure.  If  an  autosign_file  is  specified, enabling permissive_pki_access will allow
       group access to that specific file.

          permissive_pki_access: False

   publisher_acl
       Default: {}

       Enable user accounts on the master to execute  specific  modules.  These  modules  can  be
       expressed as regular expressions.

          publisher_acl:
            fred:
              - test.ping
              - pkg.*

   publisher_acl_blacklist
       Default: {}

       Blacklist users or modules

       This example would blacklist all non-sudo users, including root, from running any commands.
       It would also blacklist any use of the “cmd” module.

       This is completely disabled by default.

          publisher_acl_blacklist:
            users:
              - root
              - '^(?!sudo_).*$'   #  all non sudo users
            modules:
              - cmd.*
              - test.echo

   sudo_acl
       Default: False

       Enforce publisher_acl and publisher_acl_blacklist when users have sudo access to the  salt
       command.

          sudo_acl: False

   external_auth
       Default: {}

       The  external auth system uses the Salt auth modules to authenticate and validate users to
       access areas of the Salt system.

          external_auth:
            pam:
              fred:
                - test.*

   token_expire
       Default: 43200

       Time (in seconds) for a newly generated token to live.

       Default: 12 hours

          token_expire: 43200

   token_expire_user_override
       Default: False

       Allow eauth users to specify the expiry time of the tokens they generate.

       A boolean value applies to all users, or a dictionary of whitelisted eauth backends and
       usernames may be given:

          token_expire_user_override:
            pam:
              - fred
              - tom
            ldap:
              - gary

   keep_acl_in_token
       Default: False

       Set  to  True to enable keeping the calculated user’s auth list in the token file. This is
       disabled by default and the auth list is calculated or requested  from  the  eauth  driver
       each time.

          keep_acl_in_token: False

   eauth_acl_module
       Default: ''

       Auth subsystem module to use to get authorized access list for a user. By default it’s the
       same module used for external authentication.

          eauth_acl_module: django

   file_recv
       Default: False

       Allow minions to push files to the master. This  is  disabled  by  default,  for  security
       purposes.

          file_recv: False

   file_recv_max_size
       New in version 2014.7.0.

       Default: 100

       Set  a  hard-limit  on the size of the files that can be pushed to the master.  It will be
       interpreted as megabytes.

          file_recv_max_size: 100

   master_sign_pubkey
       Default: False

       Sign the master auth-replies with a cryptographic signature of the master’s public key.
       Please see the Multimaster-PKI with Failover Tutorial for how to use these settings.

          master_sign_pubkey: True

   master_sign_key_name
       Default: master_sign

       The customizable name of the signing-key-pair without suffix.

          master_sign_key_name: <filename_without_suffix>

   master_pubkey_signature
       Default: master_pubkey_signature

       The name of the file in the master’s pki-directory that holds the pre-calculated signature
       of the master’s public-key.

          master_pubkey_signature: <filename>

   master_use_pubkey_signature
       Default: False

       Instead  of  computing  the signature for each auth-reply, use a pre-calculated signature.
       The master_pubkey_signature must also be set for this.

          master_use_pubkey_signature: True

   rotate_aes_key
       Default: True

       Rotate the salt master’s AES key when a minion’s public key is deleted with salt-key.
       This is a very important security setting. Disabling it will enable deleted minions to
       still listen in on the messages published by the salt-master. Do not disable this unless
       it is absolutely clear what this does.

          rotate_aes_key: True

   publish_session
       Default: 86400

       The number of seconds between AES key rotations on the master.

          publish_session: 86400

   ssl
       New in version 2016.11.0.

       Default: None

       TLS/SSL  connection  options.  This  could  be  set  to  a dictionary containing arguments
       corresponding to python  ssl.wrap_socket  method.  For  details  see  Tornado  and  Python
       documentation.

       Note:  to  set  enum  arguments  values  like cert_reqs and ssl_version use constant names
       without ssl module prefix: CERT_REQUIRED or PROTOCOL_SSLv23.

          ssl:
              keyfile: <path_to_keyfile>
              certfile: <path_to_certfile>
              ssl_version: PROTOCOL_TLSv1_2

   preserve_minion_cache
       Default: False

       By default, the master deletes its cache of minion data when the key for  that  minion  is
       removed. To preserve the cache after key deletion, set preserve_minion_cache to True.

       WARNING: This may have security implications if compromised minions auth with a previously
       deleted minion ID.

          preserve_minion_cache: False

   allow_minion_key_revoke
       Default: True

       Controls whether a minion can request its own key revocation.  When True the  master  will
       honor  the  minion’s  request  and  revoke  its key.  When False, the master will drop the
       request and the minion’s key will remain accepted.

          allow_minion_key_revoke: False

   optimization_order
       Default: [0, 1, 2]

       In cases where Salt is distributed without .py files, this option determines the  priority
       of optimization level(s) Salt’s module loader should prefer.

       NOTE:
          This option is only supported on Python 3.5+.

          optimization_order:
            - 2
            - 0
            - 1

   Master Large Scale Tuning Settings
   max_open_files
       Default: 100000

       Each minion connecting to the master uses AT LEAST one file descriptor, the master
       subscription connection. If enough minions connect you might start seeing errors on the
       console (and then salt-master crashes):

          Too many open files (tcp_listener.cpp:335)
          Aborted (core dumped)

          max_open_files: 100000

       By default this value will be that of ulimit -Hn, i.e., the hard limit for max open
       files.

       To set a different value than the default one,  uncomment,  and  configure  this  setting.
       Remember  that  this  value  CANNOT  be higher than the hard limit. Raising the hard limit
       depends on the OS and/or distribution, a good way to find  the  limit  is  to  search  the
       internet for something like this:

          raise max open files hard limit debian

   worker_threads
       Default: 5

       The  number  of  threads  to  start  for  receiving commands and replies from minions.  If
       minions are stalling on replies because you have many minions,  raise  the  worker_threads
       value.

       Worker threads should not be set below 3 when using the peer system, but can drop down to
       1 worker otherwise.

       NOTE:
          When the master daemon starts, it is expected behaviour to see multiple salt-master
          processes, even if ‘worker_threads’ is set to ‘1’. At a minimum, a controlling process
          will start along with a Publisher, an EventPublisher, and a number of MWorker
          processes. The number of MWorker processes is tuneable by the ‘worker_threads’
          configuration value while the others are not.

          worker_threads: 5

   pub_hwm
       Default: 1000

       The zeromq high water mark on the publisher interface.

          pub_hwm: 1000

   zmq_backlog
       Default: 1000

       The listen queue size of the ZeroMQ backlog.

          zmq_backlog: 1000

   Master Module Management
   runner_dirs
       Default: []

       Set additional directories to search for runner modules.

          runner_dirs:
            - /var/lib/salt/runners

   utils_dirs
       New in version 2018.3.0.

       Default: []

       Set additional directories to search for util modules.

          utils_dirs:
            - /var/lib/salt/utils

   cython_enable
       Default: False

       Set to true to enable Cython modules (.pyx files) to be compiled on the fly  on  the  Salt
       master.

          cython_enable: False

   Master State System Settings
   state_top
       Default: top.sls

       The state system uses a “top” file to tell the minions what environment to use and what
       modules to use. The state_top file is defined relative to the root of the base
       environment. The value of “state_top” is also used for the pillar top file.

          state_top: top.sls

   state_top_saltenv
       This  option  has  no default value. Set it to an environment name to ensure that only the
       top file from that environment is considered during a highstate.

       NOTE:
          Using  this  value  does  not  change  the   merging   strategy.   For   instance,   if
          top_file_merging_strategy  is  set  to merge, and state_top_saltenv is set to foo, then
          any sections for environments other than foo in the top file for  the  foo  environment
          will  be  ignored. With state_top_saltenv set to base, all states from all environments
          in the base top file will be applied, while all other top files are ignored.  The  only
          way  to  set  state_top_saltenv  to  something  other  than base and not have the other
          environments   in   the   targeted   top   file    ignored,    would    be    to    set
          top_file_merging_strategy to merge_all.

          state_top_saltenv: dev

   top_file_merging_strategy
       Changed in version 2016.11.0: A merge_all strategy has been added.

       Default: merge

       When  no  specific  fileserver  environment  (a.k.a.  saltenv)  has  been  specified for a
       highstate, all environments’ top files are inspected. This config  option  determines  how
       the SLS targets in those top files are handled.

       When  set  to  merge,  the base environment’s top file is evaluated first, followed by the
       other environments’ top files.  The  first  target  expression  (e.g.  '*')  for  a  given
       environment  is  kept, and when the same target expression is used in a different top file
       evaluated later, it is ignored.  Because base is evaluated first, it is authoritative. For
       example,  if  there  is  a target for '*' for the foo environment in both the base and foo
       environment’s top files, the one in the foo environment would be ignored. The environments
       will be evaluated in no specific order (aside from base coming first). For greater control
       over the order in which the environments are evaluated, use env_order.  Note  that,  aside
       from the base environment’s top file, any sections in top files that do not match that top
       file’s environment will be ignored. So, for example, a  section  for  the  qa  environment
       would  be  ignored if it appears in the dev environment’s top file. To keep use cases like
       this from being ignored, use the merge_all strategy.
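       As a sketch of the merge behaviour just described (the environment and SLS names are
       hypothetical), suppose both the base and foo top files define a '*' target for the foo
       environment:

```yaml
# base environment's top file: evaluated first, so this section wins
foo:
  '*':
    - from_base
---
# foo environment's top file: same target expression, evaluated later,
# so this section is ignored under the merge strategy
foo:
  '*':
    - from_foo
```

       Under the merge_all strategy, both from_base and from_foo would be applied instead.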

       When set to same,  then  for  each  environment,  only  that  environment’s  top  file  is
       processed, with the others being ignored. For example, only the dev environment’s top file
       will be processed for the dev environment, and any SLS targets defined for dev in the base
       environment’s  (or  any  other  environment’s) top file will be ignored. If an environment
       does not have a top file, then the top file from the default_top config parameter will  be
       used as a fallback.

       When  set  to  merge_all,  then  all  states  in all environments in all top files will be
       applied. The order in which individual SLS files will be executed will depend on the order
       in  which  the  top  files  were  evaluated,  and the environments will be evaluated in no
       specific order. For  greater  control  over  the  order  in  which  the  environments  are
       evaluated, use env_order.

          top_file_merging_strategy: same

   env_order
       Default: []

       When  top_file_merging_strategy  is  set  to  merge, and no environment is specified for a
       highstate, this config option allows for the order in which top files are evaluated to  be
       explicitly defined.

          env_order:
            - base
            - dev
            - qa

   master_tops
       Default: {}

       The master_tops option replaces the deprecated external_nodes option by creating a
       pluggable system for the generation of external top data. To gain the capabilities of the
       classic external_nodes system, use the following configuration:

          master_tops:
            ext_nodes: <Shell command which returns yaml>

   renderer
       Default: yaml_jinja

       The renderer to use on the minions to render the state data.

          renderer: yaml_jinja

   userdata_template
       New in version 2016.11.4.

       Default: None

       The renderer to use for templating userdata files in salt-cloud, if the  userdata_template
       is  not set in the cloud profile. If no value is set in the cloud profile or master config
       file, no templating will be performed.

          userdata_template: jinja

   jinja_env
       New in version 2018.3.0.

       Default: {}

       jinja_env overrides the default Jinja environment options for  all  templates  except  sls
       templates.  To set the options for sls templates use jinja_sls_env.

       NOTE:
          The  Jinja2  Environment  documentation  is the official source for the default values.
          Not all the options listed in the jinja documentation can be overridden using jinja_env
          or jinja_sls_env.

       The default options are:

          jinja_env:
            block_start_string: '{%'
            block_end_string: '%}'
            variable_start_string: '{{'
            variable_end_string: '}}'
            comment_start_string: '{#'
            comment_end_string: '#}'
            line_statement_prefix:
            line_comment_prefix:
            trim_blocks: False
            lstrip_blocks: False
            newline_sequence: '\n'
            keep_trailing_newline: False

   jinja_sls_env
       New in version 2018.3.0.

       Default: {}

       jinja_sls_env  sets  the  Jinja  environment  options for sls templates.  The defaults and
       accepted options are exactly the same as they are for jinja_env.

       The default options are:

          jinja_sls_env:
            block_start_string: '{%'
            block_end_string: '%}'
            variable_start_string: '{{'
            variable_end_string: '}}'
            comment_start_string: '{#'
            comment_end_string: '#}'
            line_statement_prefix:
            line_comment_prefix:
            trim_blocks: False
            lstrip_blocks: False
            newline_sequence: '\n'
            keep_trailing_newline: False

       Example using line statements and line comments to increase ease of use:

       If your configuration options are

          jinja_sls_env:
            line_statement_prefix: '%'
            line_comment_prefix: '##'

       With these options jinja will interpret anything after a % at the start of a line
       (ignoring whitespace) as a jinja statement and will interpret anything after a ## as a
       comment.

       This allows the following more convenient syntax to be used:

          ## (this comment will not stay once rendered)
          # (this comment remains in the rendered template)
          ## ensure all the formula services are running
          % for service in formula_services:
          enable_service_{{ service }}:
            service.running:
              name: {{ service }}
          % endfor

       The following less convenient but equivalent syntax would have to be used if you  had  not
       set the line_statement and line_comment options:

          {# (this comment will not stay once rendered) #}
          # (this comment remains in the rendered template)
          {# ensure all the formula services are running #}
          {% for service in formula_services %}
          enable_service_{{ service }}:
            service.running:
              name: {{ service }}
          {% endfor %}

   jinja_trim_blocks
       Deprecated since version 2018.3.0: Replaced by jinja_env and jinja_sls_env

       New in version 2014.1.0.

       Default: False

       If  this  is  set  to  True,  the first newline after a Jinja block is removed (block, not
       variable tag!). Defaults to False and corresponds to the Jinja environment  init  variable
       trim_blocks.

          jinja_trim_blocks: False

   jinja_lstrip_blocks
       Deprecated since version 2018.3.0: Replaced by jinja_env and jinja_sls_env

       New in version 2014.1.0.

       Default: False

       If this is set to True, leading spaces and tabs are stripped from the start of a line to a
       block.  Defaults  to  False  and  corresponds  to  the  Jinja  environment  init  variable
       lstrip_blocks.

          jinja_lstrip_blocks: False

   failhard
       Default: False

       Set the global failhard flag. This informs all states to stop running states at the moment
       a single state fails.

          failhard: False

   state_verbose
       Default: True

       Controls the verbosity of state runs. By default, the results of all states are  returned,
       but  setting  this  value  to False will cause salt to only display output for states that
       failed or states that have changes.

          state_verbose: False

   state_output
       Default: full

       The state_output setting controls which results will be output as full multi-line output:

       · full, terse - each state will be full/terse

       · mixed - only states with errors will be full

       · changes - states with changes and errors will be full

       full_id, mixed_id, changes_id and terse_id are also allowed; when set, the state  ID  will
       be used as name in the output.

          state_output: full

   state_output_diff
       Default: False

       The  state_output_diff setting changes whether or not the output from successful states is
       returned. Useful when even the terse output of these states is cluttering the logs. Set it
       to True to ignore them.

          state_output_diff: False

   state_aggregate
       Default: False

       Automatically aggregate all states that have support for mod_aggregate by setting to True.
       Or pass a list of state module names to automatically aggregate just those types.

          state_aggregate:
            - pkg

          state_aggregate: True

   state_events
       Default: False

       Send progress events as each function in a state run completes  execution  by  setting  to
       True. Progress events are in the format salt/job/<JID>/prog/<MID>/<RUN NUM>.

          state_events: True

   yaml_utf8
       Default: False

       Enable extra routines for the YAML renderer used in states containing UTF characters.

          yaml_utf8: False

   runner_returns
       Default: False

       If set to True, runner jobs will be saved to job cache (defined by master_job_cache).

          runner_returns: True

   Master File Server Settings
   fileserver_backend
       Default: ['roots']

       Salt supports a modular fileserver backend system. This system allows the salt master to
       link directly to third party systems to gather and manage the files available to minions.
       Multiple backends can be configured and will be searched for the requested file in the
       order in which they are defined here. The default setting only enables the standard
       backend roots, which is configured using the file_roots option.

       Example:

          fileserver_backend:
            - roots
            - gitfs

       NOTE:
          For masterless Salt, this parameter must be specified in the minion config file.

   fileserver_followsymlinks
       New in version 2014.1.0.

       Default: True

       By  default, the file_server follows symlinks when walking the filesystem tree.  Currently
       this only applies to the default roots fileserver_backend.

          fileserver_followsymlinks: True

   fileserver_ignoresymlinks
       New in version 2014.1.0.

       Default: False

       If you do not want symlinks to  be  treated  as  the  files  they  are  pointing  to,  set
       fileserver_ignoresymlinks  to True. By default this is set to False. When set to True, any
       detected symlink while listing files on the Master will not be returned to the Minion.

          fileserver_ignoresymlinks: False

   fileserver_limit_traversal
       New in version 2014.1.0.

       Deprecated since version 2018.3.4: This option is now ignored. Firstly, it only  traversed
       file_roots, which means it did not work for the other fileserver backends. Secondly, since
       this option was added we have added caching to the code that traverses the file_roots (and
       gitfs, etc.), which greatly reduces the amount of traversal that is done.

       Default: False

       By default, the Salt fileserver recurses fully into all defined environments to attempt to
       find files. To limit this behavior so that the fileserver only traverses directories  with
       SLS  files  and  special Salt directories like _modules, set fileserver_limit_traversal to
       True. This might be useful for installations where a file root has a very large number  of
       files and performance is impacted.

          fileserver_limit_traversal: False

   fileserver_list_cache_time
       New in version 2014.1.0.

       Changed in version 2016.11.0: The default was changed from 30 seconds to 20.

       Default: 20

       Salt  caches  the  list  of  files/symlinks/directories  for  each  fileserver backend and
       environment as they are requested, to guard against a performance bottleneck at scale when
       many  minions  all  ask  the  fileserver  which  files  are available simultaneously. This
       configuration parameter allows for the max age of that cache to be altered.

       Set this value to 0 to disable use of this cache altogether, but keep in  mind  that  this
       may  increase  the  CPU  load  on the master when running a highstate on a large number of
       minions.

       NOTE:
          Rather than altering this configuration parameter, it  may  be  advisable  to  use  the
          fileserver.clear_list_cache runner to clear these caches.

          fileserver_list_cache_time: 5

   fileserver_verify_config
       New in version 2017.7.0.

       Default: True

       By  default,  as  the  master  starts  it  performs  some  sanity checks on the configured
       fileserver backends. If any  of  these  sanity  checks  fail  (such  as  when  an  invalid
       configuration is used), the master daemon will abort.

       To skip these sanity checks, set this option to False.

          fileserver_verify_config: False

   hash_type
       Default: sha256

       The hash_type is the hash to use when discovering the hash of a file on the master server.
       The default is sha256, but md5, sha1, sha224, sha384, and sha512 are also supported.

          hash_type: sha256

   file_buffer_size
       Default: 1048576

       The buffer size in the file server in bytes.

          file_buffer_size: 1048576

   file_ignore_regex
       Default: ''

       A regular expression (or a list of expressions) that will be matched against the file path
       before syncing the modules and states to the minions.  This includes files affected by the
       file.recurse state.  For example,  if  you  manage  your  custom  modules  and  states  in
       subversion  and  don’t want all the ‘.svn’ folders and content synced to your minions, you
       could set this to ‘/\.svn($|/)’. By default nothing is ignored.

          file_ignore_regex:
            - '/\.svn($|/)'
            - '/\.git($|/)'

   file_ignore_glob
       Default: ''

       A file glob (or list of file globs) that will be matched  against  the  file  path  before
       syncing the modules and states to the minions. This is similar to file_ignore_regex above,
       but works on globs instead of regex. By default nothing is ignored.

          file_ignore_glob:
            - '*.pyc'
            - '*/somefolder/*.bak'
            - '*.swp'

       NOTE:
          Vim’s .swp files are a common cause of Unicode errors in file.recurse states which use
          templating. Unless there is a good reason to distribute them via the fileserver, it is
          good practice to include '*.swp' in the file_ignore_glob.

   master_roots
       Default: /srv/salt-master

       A master-only copy of the file_roots dictionary, used by the state compiler.

          master_roots: /srv/salt-master

   roots: Master’s Local File Server
   file_roots
       Default:

          base:
            - /srv/salt

       Salt runs a lightweight file server written in ZeroMQ to deliver files  to  minions.  This
       file server is built into the master daemon and does not require a dedicated port.

       The file server works on environments passed to the master. Each environment can have
       multiple root directories, but the subdirectories in the multiple file roots must not
       match, otherwise the integrity of the downloaded files cannot be reliably ensured. A base
       environment is required to house the top file.

       Example:

          file_roots:
            base:
              - /srv/salt
            dev:
              - /srv/salt/dev/services
              - /srv/salt/dev/states
            prod:
              - /srv/salt/prod/services
              - /srv/salt/prod/states

       NOTE:
          For masterless Salt, this parameter must be specified in the minion config file.

   roots_update_interval
       New in version 2018.3.0.

       Default: 60

       This option defines the update interval (in seconds) for file_roots.

       NOTE:
          Since file_roots consists of files local to the minion, the  update  process  for  this
          fileserver backend just reaps the cache for this backend.

          roots_update_interval: 120

   gitfs: Git Remote File Server Backend
   gitfs_remotes
       Default: []

       When  using  the  git  fileserver backend at least one git remote needs to be defined. The
       user running the salt master will need read access to the repo.

       The repos will be searched in order to find the file requested by a client and  the  first
       repo  to  have  the  file  will  return  it.  Branches  and  tags are translated into salt
       environments.

          gitfs_remotes:
            - git://github.com/saltstack/salt-states.git
            - file:///var/git/saltmaster

       NOTE:
          file:// repos will be treated as a remote and copied into the master’s gitfs cache,  so
          only the local refs for those repos will be exposed as fileserver environments.

       As  of  2014.7.0,  it  is  possible  to  have  per-repo  versions  of several of the gitfs
       configuration parameters. For more information, see the GitFS Walkthrough.

   gitfs_provider
       New in version 2014.7.0.

       Optional parameter used to specify the provider to be used for gitfs. More information can
       be found in the GitFS Walkthrough.

       Must be either pygit2 or gitpython. If unset, then both will be tried in that order, and
       the first one with a compatible version installed will be the provider that is used.

          gitfs_provider: gitpython

   gitfs_ssl_verify
       Default: True

       Specifies whether or not to verify SSL certificates when fetching from the repositories
       configured in gitfs_remotes. Setting this to False is useful if you’re using a git repo
       that has a self-signed certificate. However, keep in mind that setting this to anything
       other than True is considered insecure, and using an SSH-based transport (if available)
       may be a better option.

          gitfs_ssl_verify: False

       NOTE:
          pygit2 only supports disabling SSL verification in versions 0.23.2 and newer.

       Changed in version 2015.8.0: This option can now be configured on individual  repositories
       as well. See here for more info.

       Changed in version 2016.11.0: The default config value changed from False to True.

   gitfs_mountpoint
       New in version 2014.7.0.

       Default: ''

       Specifies  a  path  on  the salt fileserver which will be prepended to all files served by
       gitfs. This option can be used in conjunction with gitfs_root. It can also  be  configured
       for an individual repository, see here for more info.

          gitfs_mountpoint: salt://foo/bar

       NOTE:
          The  salt://  protocol  designation  can  be  left  off  (in  other  words, foo/bar and
          salt://foo/bar are equivalent). Assuming a file baz.sh in the root of a  gitfs  remote,
          and   the   above   example   mountpoint,   this   file   would   be   served   up  via
          salt://foo/bar/baz.sh.

   gitfs_root
       Default: ''

       Relative path to a subdirectory within the repository from  which  Salt  should  begin  to
       serve  files.  This  is  useful  when there are files in the repository that should not be
       available to the Salt fileserver. Can be used in  conjunction  with  gitfs_mountpoint.  If
       used, then from Salt’s perspective the directories above the one specified will be ignored
       and the relative path will (for the purposes of gitfs) be considered as the  root  of  the
       repo.

          gitfs_root: somefolder/otherfolder

       Changed  in version 2014.7.0: This option can now be configured on individual repositories
       as well. See here for more info.

   gitfs_base
       Default: master

       Defines which branch/tag should be used as the base environment.

          gitfs_base: salt

       Changed in version 2014.7.0: This option can now be configured on individual  repositories
       as well. See here for more info.

   gitfs_saltenv
       New in version 2016.11.0.

       Default: []

       Global settings for per-saltenv configuration parameters. Though per-saltenv configuration
       parameters are typically one-off changes specific to a single gitfs remote, and thus  more
       often  configured on a per-remote basis, this parameter can be used to specify per-saltenv
       changes which should apply to all remotes. For example, the below configuration  will  map
       the develop branch to the dev saltenv for all gitfs remotes.

          gitfs_saltenv:
            - dev:
              - ref: develop

   gitfs_disable_saltenv_mapping
       New in version 2018.3.0.

       Default: False

       When set to True, all saltenv mapping logic is disregarded (aside from which branch/tag is
       mapped to the base saltenv). To use any other environments,  they  must  then  be  defined
       using per-saltenv configuration parameters.

          gitfs_disable_saltenv_mapping: True

       NOTE:
          This is a global configuration option; see here for examples of configuring it for
          individual repositories.

   gitfs_ref_types
       New in version 2018.3.0.

       Default: ['branch', 'tag', 'sha']

       This option defines what types  of  refs  are  mapped  to  fileserver  environments  (i.e.
       saltenvs).  It  also  sets  the  order of preference when there are ambiguously-named refs
       (i.e. when a branch and tag both have the same name).  The below example disables  mapping
       of both tags and SHAs, so that only branches are mapped as saltenvs:

          gitfs_ref_types:
            - branch

       NOTE:
          This is a global configuration option; see here for examples of configuring it for
          individual repositories.

       NOTE:
          sha is special in that it will not show  up  when  listing  saltenvs  (e.g.   with  the
          fileserver.envs  runner),  but works within states and with cp.cache_file to retrieve a
          file from a specific git SHA.

   gitfs_saltenv_whitelist
       New in version 2014.7.0.

       Changed in version 2018.3.0: Renamed from gitfs_env_whitelist to gitfs_saltenv_whitelist

       Default: []

       Used to restrict which environments are made available. Can speed up  state  runs  if  the
       repos  in  gitfs_remotes contain many branches/tags.  More information can be found in the
       GitFS Walkthrough.

          gitfs_saltenv_whitelist:
            - base
            - v1.*
            - 'mybranch\d+'

   gitfs_saltenv_blacklist
       New in version 2014.7.0.

       Changed in version 2018.3.0: Renamed from gitfs_env_blacklist to gitfs_saltenv_blacklist

       Default: []

       Used to restrict which environments are made available. Can speed up  state  runs  if  the
       repos  in  gitfs_remotes  contain many branches/tags. More information can be found in the
       GitFS Walkthrough.

          gitfs_saltenv_blacklist:
            - base
            - v1.*
            - 'mybranch\d+'

   gitfs_global_lock
       New in version 2015.8.9.

       Default: True

       When set to False, if there is an update lock for a gitfs remote and the pid written to it
       is  not  running on the master, the lock file will be automatically cleared and a new lock
       will be obtained. When set to True, Salt will simply log a warning when there is an update
       lock present.

       On  single-master  deployments,  disabling  this  option  can help automatically deal with
       instances where the master was shutdown/restarted during the middle  of  a  gitfs  update,
       leaving an update lock in place.

       However, on multi-master deployments with the gitfs cachedir shared via GlusterFS, nfs, or
       another network filesystem, it is strongly recommended not to disable this option as doing
       so will cause lock files to be removed if they were created by a different master.

          # Disable global lock
          gitfs_global_lock: False

   gitfs_update_interval
       New in version 2018.3.0.

       Default: 60

       This option defines the default update interval (in seconds) for gitfs remotes. The
       update interval can also be set for a single repository via a per-remote config option.

          gitfs_update_interval: 120
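
       A per-remote override might be configured as follows (hypothetical remote URL; the
       per-remote parameter syntax is covered in the GitFS Walkthrough):

```yaml
# Global default: poll every gitfs remote once per 2 minutes
gitfs_update_interval: 120

gitfs_remotes:
  # This remote overrides the global default and updates every 10 minutes
  - https://foo.com/bar.git:
    - update_interval: 600
```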

   GitFS Authentication Options
       These parameters only currently apply to the pygit2 gitfs provider. Examples of how to use
       these can be found in the GitFS Walkthrough.

   gitfs_user
       New in version 2014.7.0.

       Default: ''

       Along with gitfs_password, is used to authenticate to HTTPS remotes.

          gitfs_user: git

       NOTE:
          This is a global configuration option; see here for examples of configuring it for
          individual repositories.

   gitfs_password
       New in version 2014.7.0.

       Default: ''

       Along with gitfs_user, is used to authenticate to HTTPS remotes.  This  parameter  is  not
       required if the repository does not use authentication.

          gitfs_password: mypassword

       NOTE:
          This is a global configuration option; see here for examples of configuring it for
          individual repositories.

   gitfs_insecure_auth
       New in version 2014.7.0.

       Default: False

       By default, Salt will not authenticate to  an  HTTP  (non-HTTPS)  remote.  This  parameter
       enables authentication over HTTP. Enable this at your own risk.

          gitfs_insecure_auth: True

       NOTE:
          This is a global configuration option; see here for examples of configuring it for
          individual repositories.

   gitfs_pubkey
       New in version 2014.7.0.

       Default: ''

       Along with gitfs_privkey (and optionally gitfs_passphrase), is used to authenticate to SSH
       remotes.  Required for SSH remotes.

          gitfs_pubkey: /path/to/key.pub

       NOTE:
          This is a global configuration option; see here for examples of configuring it for
          individual repositories.

   gitfs_privkey
       New in version 2014.7.0.

       Default: ''

       Along with gitfs_pubkey (and optionally gitfs_passphrase), is used to authenticate to  SSH
       remotes.  Required for SSH remotes.

          gitfs_privkey: /path/to/key

       NOTE:
          This is a global configuration option; see here for examples of configuring it for
          individual repositories.

   gitfs_passphrase
       New in version 2014.7.0.

       Default: ''

       This parameter is optional, required only when the SSH key being used to  authenticate  is
       protected by a passphrase.

          gitfs_passphrase: mypassphrase

       NOTE:
          This is a global configuration option; see here for examples of configuring it for
          individual repositories.

   gitfs_refspecs
       New in version 2017.7.0.

       Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*']

       When fetching from remote repositories, by default Salt will fetch branches and tags. This
       parameter  can  be  used  to  override  the  default  and specify alternate refspecs to be
       fetched. More information on how this feature works can be found in the GitFS Walkthrough.

          gitfs_refspecs:
            - '+refs/heads/*:refs/remotes/origin/*'
            - '+refs/tags/*:refs/tags/*'
            - '+refs/pull/*/head:refs/remotes/origin/pr/*'
            - '+refs/pull/*/merge:refs/remotes/origin/merge/*'

   hgfs: Mercurial Remote File Server Backend
   hgfs_remotes
       New in version 0.17.0.

       Default: []

       When using the hg fileserver backend, at least one mercurial remote needs to be defined.
       The user running the salt master will need read access to the repo.

       The  repos  will be searched in order to find the file requested by a client and the first
       repo to have the file will return it. Branches and/or bookmarks are translated  into  salt
       environments, as defined by the hgfs_branch_method parameter.

          hgfs_remotes:
            - https://username@bitbucket.org/username/reponame

       NOTE:
          As   of  2014.7.0,  it  is  possible  to  have  per-repo  versions  of  the  hgfs_root,
          hgfs_mountpoint, hgfs_base, and hgfs_branch_method parameters.  For example:

              hgfs_remotes:
                 - https://username@bitbucket.org/username/repo1:
                   - base: saltstates
                - https://username@bitbucket.org/username/repo2:
                  - root: salt
                  - mountpoint: salt://foo/bar/baz
                - https://username@bitbucket.org/username/repo3:
                  - root: salt/states
                  - branch_method: mixed

   hgfs_branch_method
       New in version 0.17.0.

       Default: branches

       Defines the objects that will be used as fileserver environments.

       · branches - Only branches and tags will be used

       · bookmarks - Only bookmarks and tags will be used

       · mixed - Branches, bookmarks, and tags will be used

          hgfs_branch_method: mixed

       NOTE:
          Starting in version 2014.1.0, the value of the hgfs_base parameter defines which branch
          is  used  as  the  base environment, allowing for a base environment to be used with an
          hgfs_branch_method of bookmarks.

           Prior to that release, the default branch was used as the base environment.

   hgfs_mountpoint
       New in version 2014.7.0.

       Default: ''

       Specifies a path on the salt fileserver which will be prepended to  all  files  served  by
       hgfs.  This option can be used in conjunction with hgfs_root. It can also be configured on
       a per-remote basis, see here for more info.

          hgfs_mountpoint: salt://foo/bar

       NOTE:
          The salt:// protocol  designation  can  be  left  off  (in  other  words,  foo/bar  and
          salt://foo/bar  are  equivalent). Assuming a file baz.sh in the root of an hgfs remote,
          this file would be served up via salt://foo/bar/baz.sh.

   hgfs_root
       New in version 0.17.0.

       Default: ''

       Relative path to a subdirectory within the repository from  which  Salt  should  begin  to
       serve  files.  This  is  useful  when there are files in the repository that should not be
       available to the Salt fileserver. Can be used  in  conjunction  with  hgfs_mountpoint.  If
       used, then from Salt’s perspective the directories above the one specified will be ignored
       and the relative path will (for the purposes of hgfs) be considered as  the  root  of  the
       repo.

          hgfs_root: somefolder/otherfolder

       Changed  in  version  2014.7.0:  Ability  to  specify hgfs roots on a per-remote basis was
       added. See here for more info.

   hgfs_base
       New in version 2014.1.0.

       Default: default

       Defines  which  branch  should  be  used  as  the  base  environment.   Change   this   if
       hgfs_branch_method  is  set  to  bookmarks to specify which bookmark should be used as the
       base environment.

          hgfs_base: salt

   hgfs_saltenv_whitelist
       New in version 2014.7.0.

       Changed in version 2018.3.0: Renamed from hgfs_env_whitelist to hgfs_saltenv_whitelist

       Default: []

       Used to restrict which environments are made available. Can speed up state runs if your
       hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular
       expressions are supported. If using a regular expression, the expression must match the
       entire branch/bookmark/tag name.

       If used, only branches/bookmarks/tags which match one of the specified expressions will be
       exposed as fileserver environments.

       If   used   in   conjunction   with   hgfs_saltenv_blacklist,   then   the    subset    of
       branches/bookmarks/tags  which  match the whitelist but do not match the blacklist will be
       exposed as fileserver environments.

          hgfs_saltenv_whitelist:
            - base
            - v1.*
            - 'mybranch\d+'

   hgfs_saltenv_blacklist
       New in version 2014.7.0.

       Changed in version 2018.3.0: Renamed from hgfs_env_blacklist to hgfs_saltenv_blacklist

       Default: []

       Used to restrict which environments are made available. Can speed up state runs if your
       hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular
       expressions are supported. If using a regular expression, the expression must match the
       entire branch/bookmark/tag name.

       If  used, branches/bookmarks/tags which match one of the specified expressions will not be
       exposed as fileserver environments.

       If   used   in   conjunction   with   hgfs_saltenv_whitelist,   then   the    subset    of
       branches/bookmarks/tags  which  match the whitelist but do not match the blacklist will be
       exposed as fileserver environments.

          hgfs_saltenv_blacklist:
            - base
            - v1.*
            - 'mybranch\d+'

   hgfs_update_interval
       New in version 2018.3.0.

       Default: 60

       This option defines the update interval (in seconds) for hgfs_remotes.

          hgfs_update_interval: 120

   svnfs: Subversion Remote File Server Backend
   svnfs_remotes
       New in version 0.17.0.

       Default: []

       When using the svn fileserver backend, at least one subversion remote needs to be
       defined. The user running the salt master will need read access to the repo.

       The  repos  will be searched in order to find the file requested by a client and the first
       repo to have the file will return it. The trunk, branches, and tags  become  environments,
       with the trunk being the base environment.

          svnfs_remotes:
            - svn://foo.com/svn/myproject

       NOTE:
          As of 2014.7.0, it is possible to have per-repo versions of the following configuration
          parameters:

          · svnfs_root

          · svnfs_mountpoint

          · svnfs_trunk

          · svnfs_branches

          · svnfs_tags

          For example:

              svnfs_remotes:
                - svn://foo.com/svn/project1
                - svn://foo.com/svn/project2:
                  - root: salt
                  - mountpoint: salt://foo/bar/baz
                - svn://foo.com/svn/project3:
                  - root: salt/states
                  - branches: branch
                  - tags: tag

   svnfs_mountpoint
       New in version 2014.7.0.

       Default: ''

       Specifies a path on the salt fileserver which will be prepended to all files served by
       svnfs. This option can be used in conjunction with svnfs_root. It can also be configured
       on a per-remote basis, see here for more info.

          svnfs_mountpoint: salt://foo/bar

       NOTE:
          The salt:// protocol  designation  can  be  left  off  (in  other  words,  foo/bar  and
          salt://foo/bar  are equivalent). Assuming a file baz.sh in the root of an svnfs remote,
          this file would be served up via salt://foo/bar/baz.sh.

   svnfs_root
       New in version 0.17.0.

       Default: ''

       Relative path to a subdirectory within the repository from  which  Salt  should  begin  to
       serve  files.  This  is  useful  when there are files in the repository that should not be
       available to the Salt fileserver. Can be used in  conjunction  with  svnfs_mountpoint.  If
       used, then from Salt’s perspective the directories above the one specified will be ignored
       and the relative path will (for the purposes of svnfs) be considered as the  root  of  the
       repo.

          svnfs_root: somefolder/otherfolder

       Changed  in  version  2014.7.0:  Ability  to specify svnfs roots on a per-remote basis was
       added. See here for more info.

   svnfs_trunk
       New in version 2014.7.0.

       Default: trunk

       Path relative to the root of the repository where  the  trunk  is  located.  Can  also  be
       configured on a per-remote basis, see here for more info.

          svnfs_trunk: trunk

   svnfs_branches
       New in version 2014.7.0.

       Default: branches

       Path  relative  to  the root of the repository where the branches are located. Can also be
       configured on a per-remote basis, see here for more info.

          svnfs_branches: branches

   svnfs_tags
       New in version 2014.7.0.

       Default: tags

       Path relative to the root of the repository where  the  tags  are  located.  Can  also  be
       configured on a per-remote basis, see here for more info.

          svnfs_tags: tags

   svnfs_saltenv_whitelist
       New in version 2014.7.0.

       Changed in version 2018.3.0: Renamed from svnfs_env_whitelist to svnfs_saltenv_whitelist

       Default: []

       Used to restrict which environments are made available. Can speed up state runs if your
       svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are
       supported. If using a regular expression, the expression must match the entire branch/tag
       name.

       If  used,  only branches/tags which match one of the specified expressions will be exposed
       as fileserver environments.

       If used in conjunction with svnfs_saltenv_blacklist,  then  the  subset  of  branches/tags
       which  match  the  whitelist  but do not match the blacklist will be exposed as fileserver
       environments.

          svnfs_saltenv_whitelist:
            - base
            - v1.*
            - 'mybranch\d+'

   svnfs_saltenv_blacklist
       New in version 2014.7.0.

       Changed in version 2018.3.0: Renamed from svnfs_env_blacklist to svnfs_saltenv_blacklist

       Default: []

       Used to restrict which environments are made available. Can speed up state runs if your
       svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are
       supported. If using a regular expression, the expression must match the entire branch/tag
       name.

       If used, branches/tags which match one of the specified expressions will not be exposed as
       fileserver environments.

       If  used  in  conjunction  with  svnfs_saltenv_whitelist, then the subset of branches/tags
       which match the whitelist but do not match the blacklist will  be  exposed  as  fileserver
       environments.

          svnfs_saltenv_blacklist:
            - base
            - v1.*
            - 'mybranch\d+'

   svnfs_update_interval
       New in version 2018.3.0.

       Default: 60

       This option defines the update interval (in seconds) for svnfs_remotes.

          svnfs_update_interval: 120

   minionfs: MinionFS Remote File Server Backend
   minionfs_env
       New in version 2014.7.0.

       Default: base

       Environment from which MinionFS files are made available.

          minionfs_env: minionfs

   minionfs_mountpoint
       New in version 2014.7.0.

       Default: ''

       Specifies a path on the salt fileserver from which minionfs files are served.

          minionfs_mountpoint: salt://foo/bar

       NOTE:
          The  salt://  protocol  designation  can  be  left  off  (in  other  words, foo/bar and
          salt://foo/bar are equivalent).

   minionfs_whitelist
       New in version 2014.7.0.

       Default: []

       Used to restrict which minions’ pushed files are exposed via minionfs. If using a  regular
       expression, the expression must match the entire minion ID.

       If  used,  only the pushed files from minions which match one of the specified expressions
       will be exposed.

       If used in conjunction with minionfs_blacklist, then the subset of hosts which  match  the
       whitelist but do not match the blacklist will be exposed.

          minionfs_whitelist:
            - server01
            - dev*
            - 'mail\d+.mydomain.tld'

   minionfs_blacklist
       New in version 2014.7.0.

       Default: []

       Used  to restrict which minions’ pushed files are exposed via minionfs. If using a regular
       expression, the expression must match the entire minion ID.

       If used, the pushed files from minions which match one of the specified expressions will
       not be exposed.

       If  used  in conjunction with minionfs_whitelist, then the subset of hosts which match the
       whitelist but do not match the blacklist will be exposed.

          minionfs_blacklist:
            - server01
            - dev*
            - 'mail\d+.mydomain.tld'

   minionfs_update_interval
       New in version 2018.3.0.

       Default: 60

       This option defines the update interval (in seconds) for MinionFS.

       NOTE:
          Since MinionFS consists of files local to the  master,  the  update  process  for  this
          fileserver backend just reaps the cache for this backend.

          minionfs_update_interval: 120

   azurefs: Azure File Server Backend
       New in version 2015.8.0.

       See the azurefs documentation for usage examples.

   azurefs_update_interval
       New in version 2018.3.0.

       Default: 60

       This option defines the update interval (in seconds) for azurefs.

          azurefs_update_interval: 120

   s3fs: S3 File Server Backend
       New in version 0.16.0.

       See the s3fs documentation for usage examples.

   s3fs_update_interval
       New in version 2018.3.0.

       Default: 60

       This option defines the update interval (in seconds) for s3fs.

          s3fs_update_interval: 120

   Pillar Configuration
   pillar_roots
       Default:

          base:
            - /srv/pillar

       Set  the  environments and directories used to hold pillar sls data. This configuration is
       the same as file_roots:

          pillar_roots:
            base:
              - /srv/pillar
            dev:
              - /srv/pillar/dev
            prod:
              - /srv/pillar/prod

   on_demand_ext_pillar
       New in version 2016.3.6, 2016.11.3, and 2017.7.0.

       Default: ['libvirt', 'virtkey']

       The external pillars permitted to be used on-demand using pillar.ext.

          on_demand_ext_pillar:
            - libvirt
            - virtkey
            - git

       WARNING:
          This will allow minions to request specific pillar data  via  pillar.ext,  and  may  be
          considered  a security risk. However, pillar data generated in this way will not affect
          the  in-memory  pillar  data,  so  this  risk  is  limited  to   instances   in   which
          states/modules/etc. (built-in or custom) rely upon pillar data generated by pillar.ext.

   decrypt_pillar
       New in version 2017.7.0.

       Default: []

       A list of paths to be recursively decrypted during pillar compilation.

          decrypt_pillar:
            - 'foo:bar': gpg
            - 'lorem:ipsum:dolor'

       Entries  in  this list can be formatted either as a simple string, or as a key/value pair,
       with the key being the pillar location, and the value being the renderer to use for pillar
       decryption.  If  the former is used, the renderer specified by decrypt_pillar_default will
       be used.

   decrypt_pillar_delimiter
       New in version 2017.7.0.

       Default: :

       The delimiter used to distinguish nested data structures in the decrypt_pillar option.

          decrypt_pillar_delimiter: '|'
          decrypt_pillar:
            - 'foo|bar': gpg
            - 'lorem|ipsum|dolor'

   decrypt_pillar_default
       New in version 2017.7.0.

       Default: gpg

       The default renderer used for decryption, if one is not specified for a given  pillar  key
       in decrypt_pillar.

          decrypt_pillar_default: my_custom_renderer

   decrypt_pillar_renderers
       New in version 2017.7.0.

       Default: ['gpg']

       List of renderers which are permitted to be used for pillar decryption.

          decrypt_pillar_renderers:
            - gpg
            - my_custom_renderer

   pillar_opts
       Default: False

       The  pillar_opts  option  adds  the master configuration file data to a dict in the pillar
       called master. This can be used to set simple configurations in  the  master  config  file
       that can then be used on minions.

       Note that setting this option to True means the master config file will be included in
       all minions’ pillars. While this makes global configuration of services and systems easy,
       it may not be desired if sensitive data is stored in the master configuration.

          pillar_opts: False
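
       With pillar_opts enabled, values from the master config become reachable under the
       master pillar key. A minimal sketch (hypothetical state ID and config value; assumes
       pillar_opts: True and that the master config sets interface):

```yaml
# Hypothetical SLS snippet: print the master's configured interface
show_master_interface:
  cmd.run:
    - name: echo {{ pillar['master']['interface'] }}
```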

   pillar_safe_render_error
       Default: True

       The pillar_safe_render_error option prevents the master from passing pillar render
       errors to the minion. This is enabled by default because the error could contain
       templating data which would give that minion information it shouldn’t have, such as a
       password. When set to True, the error message will only show:

          Rendering SLS 'my.sls' failed. Please see master log for details.

          pillar_safe_render_error: True

   ext_pillar
       The ext_pillar option allows for any number of external pillar  interfaces  to  be  called
       when  populating  pillar  data.  The  configuration  is based on ext_pillar functions. The
       available ext_pillar functions can be found here:

       https://github.com/saltstack/salt/blob/develop/salt/pillar

       By default, the ext_pillar interface is not configured to run.

       Default: []

          ext_pillar:
            - hiera: /etc/hiera.yaml
            - cmd_yaml: cat /etc/salt/yaml
            - reclass:
                inventory_base_uri: /etc/reclass

       There are additional details at salt-pillars.

   ext_pillar_first
       New in version 2015.5.0.

       Default: False

       This option allows for external pillar sources to be evaluated before pillar_roots.
       External pillar data is evaluated separately from pillar_roots pillar data, and then both
       sets of pillar data are merged into a single pillar dictionary, so the value of this
       config option will have an impact on which key “wins” when the same key exists in both
       the external pillar data and the pillar_roots pillar data. By setting this option to
       True, ext_pillar keys will be overridden by pillar_roots, while leaving it as False will
       allow ext_pillar keys to override those from pillar_roots.

       NOTE:
           For a while, this config option did not work as specified above, because of a bug in
           Pillar compilation. This bug was resolved in version 2016.3.4.

          ext_pillar_first: False

   pillarenv_from_saltenv
       Default: False

       When  set to True, the pillarenv value will assume the value of the effective saltenv when
       running states. This essentially makes salt-run pillar.show_pillar saltenv=dev  equivalent
       to  salt-run pillar.show_pillar saltenv=dev pillarenv=dev. If pillarenv is set on the CLI,
       it will override this option.

          pillarenv_from_saltenv: True

       NOTE:
          For  salt  remote  execution  commands  this  option  should  be  set  in  the   Minion
          configuration instead.

   pillar_raise_on_missing
       New in version 2015.5.0.

       Default: False

       Set this option to True to force a KeyError to be raised whenever an attempt to retrieve a
       named value from pillar fails. When this option  is  set  to  False,  the  failed  attempt
       returns an empty string.
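
       For example:

```yaml
pillar_raise_on_missing: True
```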

   Git External Pillar (git_pillar) Configuration Options
   git_pillar_provider
       New in version 2015.8.0.

       Specify  the  provider  to  be used for git_pillar. Must be either pygit2 or gitpython. If
       unset, then both will be tried in that same order, and the first  one  with  a  compatible
       version installed will be the provider that is used.

          git_pillar_provider: gitpython

   git_pillar_base
       New in version 2015.8.0.

       Default: master

       If  the  desired  branch  matches  this  value,  and  the  environment is omitted from the
       git_pillar configuration, then the environment for that git_pillar remote  will  be  base.
       For  example, in the configuration below, the foo branch/tag would be assigned to the base
       environment, while bar would be mapped to the bar environment.

          git_pillar_base: foo

          ext_pillar:
            - git:
              - foo https://mygitserver/git-pillar.git
              - bar https://mygitserver/git-pillar.git

   git_pillar_branch
       New in version 2015.8.0.

       Default: master

       If the branch is omitted from a git_pillar remote, then this branch will be used  instead.
       For  example,  in  the configuration below, the first two remotes would use the pillardata
       branch/tag, while the third would use the foo branch/tag.

          git_pillar_branch: pillardata

          ext_pillar:
            - git:
              - https://mygitserver/pillar1.git
              - https://mygitserver/pillar2.git:
                - root: pillar
              - foo https://mygitserver/pillar3.git

   git_pillar_env
       New in version 2015.8.0.

       Default: '' (unset)

       Environment to use for git_pillar remotes. This is normally derived  from  the  branch/tag
       (or  from  a  per-remote  env  parameter),  but  if  set this will override the process of
        deriving the env from the branch/tag name. For example, in the configuration below, the
        foo branch would be assigned to the base environment, while the bar branch would need to
        explicitly have bar configured as its environment to keep it from also being mapped to
        the base environment.

          git_pillar_env: base

          ext_pillar:
            - git:
              - foo https://mygitserver/git-pillar.git
              - bar https://mygitserver/git-pillar.git:
                - env: bar

        For this reason, it is recommended to leave this option unset, unless the use case calls
        for all (or almost all) of the git_pillar remotes to use the same environment irrespective
        of the branch/tag being used.

   git_pillar_root
       New in version 2015.8.0.

       Default: ''

       Path  relative  to  the root of the repository where the git_pillar top file and SLS files
       are located. In the below configuration, the pillar top file and SLS files would be looked
       for in a subdirectory called pillar.

          git_pillar_root: pillar

          ext_pillar:
            - git:
              - master https://mygitserver/pillar1.git
              - master https://mygitserver/pillar2.git

       NOTE:
          This is a global option. If only one or two repos need to have their files sourced from
          a subdirectory, then git_pillar_root can be omitted and the root can be specified on  a
          per-remote basis, like so:

              ext_pillar:
                - git:
                  - master https://mygitserver/pillar1.git
                  - master https://mygitserver/pillar2.git:
                    - root: pillar

          In this example, for the first remote the top file and SLS files would be looked for in
          the root of the repository, while in  the  second  remote  the  pillar  data  would  be
          retrieved from the pillar subdirectory.

   git_pillar_ssl_verify
       New in version 2015.8.0.

       Changed in version 2016.11.0.

        Default: True

        Specifies whether or not to ignore SSL certificate errors when contacting the remote
        repository. The False setting is useful if you’re using a git repo that uses a self-signed
        certificate. However, keep in mind that setting this to anything other than True is
        considered insecure, and using an SSH-based transport (if available) may be a better
        option.

       In the 2016.11.0 release, the default config value changed from False to True.

          git_pillar_ssl_verify: True

       NOTE:
          pygit2 only supports disabling SSL verification in versions 0.23.2 and newer.

   git_pillar_global_lock
       New in version 2015.8.9.

       Default: True

       When set to False, if there is an update/checkout lock for a git_pillar remote and the pid
       written to it is not running on the master, the lock file will  be  automatically  cleared
        and a new lock will be obtained. When set to True, Salt will simply log a warning when a
        lock is present.

       On single-master deployments, disabling this  option  can  help  automatically  deal  with
       instances  where  the  master  was  shutdown/restarted  during  the middle of a git_pillar
       update/checkout, leaving a lock in place.

       However, on multi-master deployments with the git_pillar cachedir  shared  via  GlusterFS,
       nfs,  or another network filesystem, it is strongly recommended not to disable this option
       as doing so will cause lock files to be removed  if  they  were  created  by  a  different
       master.

          # Disable global lock
          git_pillar_global_lock: False

   git_pillar_includes
       New in version 2017.7.0.

       Default: True

       Normally,  when  processing  git_pillar  remotes, if more than one repo under the same git
       section in the ext_pillar configuration refers to the same pillar environment,  then  each
       repo in a given environment will have access to the other repos’ files to be referenced in
       their top files. However, it may be desirable to disable this behavior. If  so,  set  this
       value to False.

       For  a  more  detailed  examination  of  how  includes work, see this explanation from the
       git_pillar documentation.

          git_pillar_includes: False

   Git External Pillar Authentication Options
       These parameters only currently apply to the  pygit2  git_pillar_provider.  Authentication
       works  the  same  as  it  does  in gitfs, as outlined in the GitFS Walkthrough, though the
       global configuration options are named differently to reflect that they are for git_pillar
       instead of gitfs.

   git_pillar_user
       New in version 2015.8.0.

       Default: ''

       Along with git_pillar_password, is used to authenticate to HTTPS remotes.

          git_pillar_user: git

   git_pillar_password
       New in version 2015.8.0.

       Default: ''

       Along  with  git_pillar_user,  is used to authenticate to HTTPS remotes. This parameter is
       not required if the repository does not use authentication.

          git_pillar_password: mypassword

   git_pillar_insecure_auth
       New in version 2015.8.0.

       Default: False

       By default, Salt will not authenticate to  an  HTTP  (non-HTTPS)  remote.  This  parameter
       enables authentication over HTTP. Enable this at your own risk.

          git_pillar_insecure_auth: True

   git_pillar_pubkey
       New in version 2015.8.0.

       Default: ''

       Along   with   git_pillar_privkey  (and  optionally  git_pillar_passphrase),  is  used  to
       authenticate to SSH remotes.

          git_pillar_pubkey: /path/to/key.pub

   git_pillar_privkey
       New in version 2015.8.0.

       Default: ''

       Along  with  git_pillar_pubkey  (and  optionally  git_pillar_passphrase),   is   used   to
       authenticate to SSH remotes.

          git_pillar_privkey: /path/to/key

   git_pillar_passphrase
       New in version 2015.8.0.

       Default: ''

       This  parameter  is optional, required only when the SSH key being used to authenticate is
       protected by a passphrase.

          git_pillar_passphrase: mypassphrase

   git_pillar_refspecs
       New in version 2017.7.0.

       Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*']

       When fetching from remote repositories, by default Salt will fetch branches and tags. This
       parameter  can  be  used  to  override  the  default  and specify alternate refspecs to be
       fetched. This parameter works similarly to its  GitFS  counterpart,  in  that  it  can  be
       configured both globally and for individual remotes.

          git_pillar_refspecs:
            - '+refs/heads/*:refs/remotes/origin/*'
            - '+refs/tags/*:refs/tags/*'
            - '+refs/pull/*/head:refs/remotes/origin/pr/*'
            - '+refs/pull/*/merge:refs/remotes/origin/merge/*'

   git_pillar_verify_config
       New in version 2017.7.0.

       Default: True

       By  default,  as  the  master  starts  it  performs  some  sanity checks on the configured
       git_pillar repositories. If any of these sanity checks  fail  (such  as  when  an  invalid
       configuration is used), the master daemon will abort.

       To skip these sanity checks, set this option to False.

          git_pillar_verify_config: False

   Pillar Merging Options
   pillar_source_merging_strategy
       New in version 2014.7.0.

       Default: smart

       The pillar_source_merging_strategy option allows you to configure merging strategy between
       different sources. It accepts 5 values:

       · none:

          It will not do any merging at all, and will only parse the pillar data from the passed
          environment (or ‘base’ if no environment was specified).

         New in version 2016.3.4.

       · recurse:

          It will recursively merge data. For example, these two sources:

            foo: 42
            bar:
                element1: True

            bar:
                element2: True
            baz: quux

         will be merged as:

            foo: 42
            bar:
                element1: True
                element2: True
            baz: quux

       · aggregate:

         instructs aggregation of elements between sources that use the #!yamlex renderer.

         For example, these two documents:

            #!yamlex
            foo: 42
            bar: !aggregate {
              element1: True
            }
            baz: !aggregate quux

            #!yamlex
            bar: !aggregate {
              element2: True
            }
            baz: !aggregate quux2

         will be merged as:

            foo: 42
            bar:
              element1: True
              element2: True
            baz:
              - quux
              - quux2

       · overwrite:

         Will use the behaviour of the 2014.1 branch and earlier.

          Overwrites elements according to the order in which they are processed.

         First pillar processed:

            A:
              first_key: blah
              second_key: blah

         Second pillar processed:

            A:
              third_key: blah
              fourth_key: blah

         will be merged as:

            A:
              third_key: blah
              fourth_key: blah

       · smart (default):

         Guesses the best strategy based on the “renderer” setting.
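
        For example, to always merge pillar sources recursively rather than relying on the smart
        default:

           pillar_source_merging_strategy: recurse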

   pillar_merge_lists
       New in version 2015.8.0.

       Default: False

       Recursively merge lists by aggregating them instead of replacing them.

          pillar_merge_lists: False
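
        As an illustration (the users key here is hypothetical), with pillar_merge_lists: True
        these two sources:

           users:
             - alice

           users:
             - bob

        would be merged as:

           users:
             - alice
             - bob

        With the default of False, the list from the second source would simply replace the
        first.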

   pillar_includes_override_sls
        New in version 2017.7.6, 2018.3.1.

       Default: False

       Prior  to version 2017.7.3, keys from pillar includes would be merged on top of the pillar
       SLS. Since 2017.7.3, the includes are merged together and then the pillar SLS is merged on
       top of that.

       Set this option to True to return to the old behavior.

          pillar_includes_override_sls: True

   Pillar Cache Options
   pillar_cache
       New in version 2015.8.8.

       Default: False

       A master can cache pillars locally to bypass the expense of having to render them for each
       minion on every request. This feature  should  only  be  enabled  in  cases  where  pillar
       rendering  time  is  known  to be unsatisfactory and any attendant security concerns about
       storing pillars in a master cache have been addressed.

       When enabling this feature, be certain  to  read  through  the  additional  pillar_cache_*
       configuration options to fully understand the tunable parameters and their implications.

          pillar_cache: False

       NOTE:
          Setting pillar_cache: True has no effect on targeting minions with pillar.

   pillar_cache_ttl
       New in version 2015.8.8.

       Default: 3600

       If  and  only if a master has set pillar_cache: True, the cache TTL controls the amount of
       time, in seconds, before the cache is considered invalid by a master and a fresh pillar is
       recompiled and stored.
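
        For example, to cache pillars for five minutes (300 seconds) instead of the default hour:

           pillar_cache_ttl: 300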

   pillar_cache_backend
       New in version 2015.8.8.

       Default: disk

        If and only if a master has set pillar_cache: True, one of several storage providers can be
       utilized:

       · disk (default):

         The default storage backend. This caches rendered pillars to the master cache.  Rendered
         pillars  are  serialized  and  deserialized  as msgpack structures for speed.  Note that
         pillars are stored UNENCRYPTED.  Ensure  that  the  master  cache  has  permissions  set
         appropriately (sane defaults are provided).

       · memory [EXPERIMENTAL]:

         An  optional backend for pillar caches which uses a pure-Python in-memory data structure
         for maximal performance. There are several caveats, however. First, because each  master
         worker  contains  its  own  in-memory  cache, there is no guarantee of cache consistency
         between minion requests. This works best in situations where the pillar rarely  if  ever
         changes.  Secondly,  and  perhaps  more importantly, this means that unencrypted pillars
         will be accessible to any process which can examine the memory of the salt-master!  This
         may represent a substantial security risk.

          pillar_cache_backend: disk

   Master Reactor Settings
   reactor
       Default: []

       Defines a salt reactor. See the Reactor documentation for more information.

          reactor:
            - 'salt/minion/*/start':
              - salt://reactor/startup_tasks.sls

   reactor_refresh_interval
       Default: 60

       The TTL for the cache of the reactor configuration.

          reactor_refresh_interval: 60

   reactor_worker_threads
       Default: 10

       The number of workers for the runner/wheel in the reactor.

          reactor_worker_threads: 10

   reactor_worker_hwm
       Default: 10000

       The queue size for workers in the reactor.

          reactor_worker_hwm: 10000

   Salt-API Master Settings
       There are some settings for salt-api that can be configured on the Salt Master.

   api_logfile
       Default: /var/log/salt/api

       The logfile location for salt-api.

          api_logfile: /var/log/salt/api

   api_pidfile
       Default: /var/run/salt-api.pid

       If this master will be running salt-api, specify the pidfile of the salt-api daemon.

          api_pidfile: /var/run/salt-api.pid

   rest_timeout
       Default: 300

       Used by salt-api for the master requests timeout.

          rest_timeout: 300

   Syndic Server Settings
        A Salt syndic is a Salt master used to pass commands from a higher level Salt master to
        minions below the syndic. Using the syndic is simple. If this is a master that will have
        syndic server(s) below it, set the order_masters setting to True.

        If this is a master that will be running a syndic daemon for passthrough, the
        syndic_master setting needs to be set to the location of the higher level master server.

        Note that the syndic daemon shares the ID and PKI directory of the local minion.
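
        As a minimal sketch, the two ends of a syndic setup might be configured as follows (the
        hostname is the default shown under syndic_master below):

           # On the higher level master (/etc/salt/master)
           order_masters: True

           # On the syndic master (/etc/salt/master)
           syndic_master: masterofmasters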

   order_masters
       Default: False

       Extra  data  needs to be sent with publications if the master is controlling a lower level
        master via a syndic minion. If this is the case, the order_masters value must be set to
        True.

          order_masters: False

   syndic_master
        Changed in version 2016.3.5, 2016.11.1: Set default higher level master address.

       Default: masterofmasters

       If  this  master  will  be  running  the  salt-syndic to connect to a higher level master,
       specify the higher level master with this configuration value.

          syndic_master: masterofmasters

       You can optionally connect a syndic to  multiple  higher  level  masters  by  setting  the
       syndic_master value to a list:

          syndic_master:
            - masterofmasters1
            - masterofmasters2

       Each higher level master must be set up in a multi-master configuration.

   syndic_master_port
       Default: 4506

       If  this  master  will  be  running  the  salt-syndic to connect to a higher level master,
       specify the higher level master port with this configuration value.

          syndic_master_port: 4506

   syndic_pidfile
       Default: /var/run/salt-syndic.pid

       If this master will be running the salt-syndic  to  connect  to  a  higher  level  master,
       specify the pidfile of the syndic daemon.

          syndic_pidfile: /var/run/syndic.pid

   syndic_log_file
       Default: /var/log/salt/syndic

       If  this  master  will  be  running  the  salt-syndic to connect to a higher level master,
       specify the log file of the syndic daemon.

          syndic_log_file: /var/log/salt-syndic.log

   syndic_failover
       New in version 2016.3.0.

       Default: random

        The behaviour of the multi-syndic when the connection to a master of masters fails. Can
        specify random (default) or ordered. If set to random, masters will be iterated in random
        order. If ordered is specified, the configured order will be used.

          syndic_failover: random

   syndic_wait
       Default: 5

       The number of seconds for the salt client to wait for additional syndics to check in  with
       their lists of expected minions before giving up.

          syndic_wait: 5

   syndic_forward_all_events
       New in version 2017.7.0.

       Default: False

        When running a multi-syndic, or a single syndic connected to multiple masters, set this
        option to True to send events to all connected masters.

          syndic_forward_all_events: False

   Peer Publish Settings
       Salt minions can send commands to other minions, but only if the minion is allowed to.  By
       default  “Peer  Publication”  is  disabled,  and  when  enabled it is enabled for specific
       minions and specific commands. This allows secure compartmentalization of  commands  based
       on individual minions.

   peer
       Default: {}

       The  configuration  uses  regular  expressions to match minions and then a list of regular
       expressions to match functions. The following  will  allow  the  minion  authenticated  as
       foo.example.com to execute functions from the test and pkg modules.

          peer:
            foo.example.com:
                - test.*
                - pkg.*

       This will allow all minions to execute all commands:

          peer:
            .*:
                - .*

       This is not recommended, since it would allow anyone who gets root on any single minion to
       instantly have root on all of the minions!

       By adding an additional layer you can limit the target hosts in addition to the accessible
       commands:

          peer:
            foo.example.com:
              'db*':
                - test.*
                - pkg.*

   peer_run
       Default: {}

       The  peer_run  option is used to open up runners on the master to access from the minions.
       The peer_run configuration matches the format of the peer configuration.

       The following example would allow foo.example.com to execute the manage.up runner:

          peer_run:
            foo.example.com:
                - manage.up

   Master Logging Settings
   log_file
       Default: /var/log/salt/master

       The master log can be sent to a regular file, local path name, or  network  location.  See
       also log_file.

       Examples:

          log_file: /var/log/salt/master

          log_file: file:///dev/log

          log_file: udp://loghost:10514

   log_level
       Default: warning

       The level of messages to send to the console. See also log_level.

          log_level: warning

   log_level_logfile
       Default: warning

       The  level of messages to send to the log file. See also log_level_logfile. When it is not
       set explicitly it will inherit the level set by log_level option.

          log_level_logfile: warning

   log_datefmt
       Default: %H:%M:%S

       The date and time format used in console log messages. See also log_datefmt.

          log_datefmt: '%H:%M:%S'

   log_datefmt_logfile
       Default: %Y-%m-%d %H:%M:%S

       The date and time format used in log file messages. See also log_datefmt_logfile.

          log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

   log_fmt_console
       Default: [%(levelname)-8s] %(message)s

       The format of the console logging messages. See also log_fmt_console.

       NOTE:
          Log colors are enabled in log_fmt_console  rather  than  the  color  config  since  the
          logging system is loaded before the master config.

          Console log colors are specified by these additional formatters:

          %(colorlevel)s %(colorname)s %(colorprocess)s %(colormsg)s

          Since it is desirable to include the surrounding brackets, ‘[‘ and ‘]’, in the coloring
          of the messages, these color formatters also include padding as well.  Color  LogRecord
          attributes are only available for console logging.

          log_fmt_console: '%(colorlevel)s %(colormsg)s'
          log_fmt_console: '[%(levelname)-8s] %(message)s'

   log_fmt_logfile
       Default: %(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s

       The format of the log file logging messages. See also log_fmt_logfile.

          log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'

   log_granular_levels
       Default: {}

       This   can   be   used   to   control   logging   levels   more   specifically.  See  also
       log_granular_levels.
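
        For example, to keep the main salt library at the warning level while logging the
        execution modules at debug:

           log_granular_levels:
             'salt': 'warning'
             'salt.modules': 'debug'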

   Node Groups
   nodegroups
       Default: {}

       Node groups allow for logical groupings of minion nodes.  A group consists of a group name
       and a compound target.

          nodegroups:
            group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
            group2: 'G@os:Debian and foo.domain.com'
            group3: 'G@os:Debian and N@group1'
            group4:
              - 'G@foo:bar'
              - 'or'
              - 'G@foo:baz'

       More information on using nodegroups can be found here.

   Range Cluster Settings
   range_server
       Default: 'range:80'

        The range server (and optional port) that serves your cluster information. See
        https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec

          range_server: range:80

   Include Configuration
       Configuration can be loaded from multiple files. The order in which this is done is:

       1. The master config file itself

       2. The files matching the glob in default_include

       3. The files matching the glob in include (if defined)

       Each successive step overrides any values defined in the previous steps.   Therefore,  any
       config  options  defined in one of the default_include files would override the same value
       in the master config file, and any options defined in include would override both.

   default_include
       Default: master.d/*.conf

        The master can include configuration from other files. By default the master will
        automatically include all config files from master.d/*.conf, where master.d is relative to
        the directory of the master configuration file.

       NOTE:
          Salt creates files in the master.d directory for its own use. These files are  prefixed
          with an underscore. A common example of this is the _schedule.conf file.

   include
       Default: not defined

       The  master  can  include  configuration  from other files. To enable this, pass a list of
        paths to this option. The paths can be either relative or absolute; if relative, they are
        considered to be relative to the directory the main master configuration file lives in.
       Paths can make use of shell-style globbing. If no files are matched by a  path  passed  to
       this option then the master will log a warning message.

          # Include files from a master.d directory in the same
          # directory as the master config file
          include: master.d/*

          # Include a single extra file into the configuration
          include: /etc/roles/webserver

          # Include several files and the master.d directory
          include:
            - extra_config
            - master.d/*
            - /etc/roles/webserver

   Keepalive Settings
   tcp_keepalive
       Default: True

       The  tcp  keepalive  interval  to  set on TCP ports. This setting can be used to tune Salt
       connectivity issues in messy network environments with misbehaving firewalls.

          tcp_keepalive: True

   tcp_keepalive_cnt
       Default: -1

       Sets the ZeroMQ TCP keepalive count. May be used to tune issues with minion disconnects.

          tcp_keepalive_cnt: -1

   tcp_keepalive_idle
       Default: 300

       Sets ZeroMQ TCP keepalive idle. May be used to tune issues with minion disconnects.

          tcp_keepalive_idle: 300

   tcp_keepalive_intvl
       Default: -1

       Sets ZeroMQ TCP keepalive interval. May be used to tune issues with minion disconnects.

           tcp_keepalive_intvl: -1

   Windows Software Repo Settings
   winrepo_provider
       New in version 2015.8.0.

       Specify the provider to be used for winrepo. Must be either pygit2 or gitpython. If unset,
       then  both  will  be tried in that same order, and the first one with a compatible version
       installed will be the provider that is used.

          winrepo_provider: gitpython

   winrepo_dir
       Changed in version 2015.8.0: Renamed from win_repo to winrepo_dir.

       Default: /srv/salt/win/repo

       Location on the master where the winrepo_remotes are checked out for pre-2015.8.0 minions.
       2015.8.0 and later minions use winrepo_remotes_ng instead.

          winrepo_dir: /srv/salt/win/repo

   winrepo_dir_ng
       New in version 2015.8.0: A new ng repo was added.

       Default: /srv/salt/win/repo-ng

       Location on the master where the winrepo_remotes_ng are checked out for 2015.8.0 and later
       minions.

          winrepo_dir_ng: /srv/salt/win/repo-ng

   winrepo_cachefile
       Changed in version 2015.8.0: Renamed from win_repo_mastercachefile to winrepo_cachefile

       NOTE:
          2015.8.0 and later minions do not use this setting since the cachefile is  now  located
          on the minion.

       Default: winrepo.p

       Path relative to winrepo_dir where the winrepo cache should be created.

          winrepo_cachefile: winrepo.p

   winrepo_remotes
       Changed in version 2015.8.0: Renamed from win_gitrepos to winrepo_remotes.

       Default: ['https://github.com/saltstack/salt-winrepo.git']

       List  of git repositories to checkout and include in the winrepo for pre-2015.8.0 minions.
       2015.8.0 and later minions use winrepo_remotes_ng instead.

          winrepo_remotes:
            - https://github.com/saltstack/salt-winrepo.git

       To specify a specific revision of the repository, prepend a commit ID to the  URL  of  the
       repository:

          winrepo_remotes:
            - '<commit_id> https://github.com/saltstack/salt-winrepo.git'

       Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in
       that it allows one to revert back to a previous version in the  event  that  an  error  is
       introduced in the latest revision of the repo.

   winrepo_remotes_ng
       New in version 2015.8.0: A new ng repo was added.

       Default: ['https://github.com/saltstack/salt-winrepo-ng.git']

       List  of  git  repositories  to checkout and include in the winrepo for 2015.8.0 and later
       minions.

          winrepo_remotes_ng:
            - https://github.com/saltstack/salt-winrepo-ng.git

       To specify a specific revision of the repository, prepend a commit ID to the  URL  of  the
       repository:

          winrepo_remotes_ng:
            - '<commit_id> https://github.com/saltstack/salt-winrepo-ng.git'

       Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in
       that it allows one to revert back to a previous version in the  event  that  an  error  is
       introduced in the latest revision of the repo.

   winrepo_branch
       New in version 2015.8.0.

       Default: master

       If the branch is omitted from a winrepo remote, then this branch will be used instead. For
       example, in the  configuration  below,  the  first  two  remotes  would  use  the  winrepo
       branch/tag, while the third would use the foo branch/tag.

          winrepo_branch: winrepo

          winrepo_remotes:
            - https://mygitserver/winrepo1.git
             - https://mygitserver/winrepo2.git
            - foo https://mygitserver/winrepo3.git

   winrepo_ssl_verify
       New in version 2015.8.0.

       Changed in version 2016.11.0.

        Default: True

        Specifies whether or not to ignore SSL certificate errors when contacting the remote
        repository. The False setting is useful if you’re using a git repo that uses a
        self-signed certificate. However, keep in mind that setting this to anything other than
        True is considered insecure, and using an SSH-based transport (if available) may be a
        better option.

       In the 2016.11.0 release, the default config value changed from False to True.

          winrepo_ssl_verify: True

   Winrepo Authentication Options
       These parameters only currently apply to the pygit2 winrepo_provider. Authentication works
       the same as it does in gitfs, as outlined in the  GitFS  Walkthrough,  though  the  global
       configuration  options  are named differently to reflect that they are for winrepo instead
       of gitfs.

   winrepo_user
       New in version 2015.8.0.

       Default: ''

       Along with winrepo_password, is used to authenticate to HTTPS remotes.

          winrepo_user: git

   winrepo_password
       New in version 2015.8.0.

       Default: ''

       Along with winrepo_user, is used to authenticate to HTTPS remotes. This parameter  is  not
       required if the repository does not use authentication.

          winrepo_password: mypassword

   winrepo_insecure_auth
       New in version 2015.8.0.

       Default: False

       By  default,  Salt  will  not  authenticate  to an HTTP (non-HTTPS) remote. This parameter
       enables authentication over HTTP. Enable this at your own risk.

          winrepo_insecure_auth: True

   winrepo_pubkey
       New in version 2015.8.0.

       Default: ''

       Along with winrepo_privkey (and optionally winrepo_passphrase), is used to authenticate to
       SSH remotes.

          winrepo_pubkey: /path/to/key.pub

   winrepo_privkey
       New in version 2015.8.0.

       Default: ''

       Along  with winrepo_pubkey (and optionally winrepo_passphrase), is used to authenticate to
       SSH remotes.

          winrepo_privkey: /path/to/key

   winrepo_passphrase
       New in version 2015.8.0.

       Default: ''

       This parameter is optional, required only when the SSH key being used to  authenticate  is
       protected by a passphrase.

          winrepo_passphrase: mypassphrase

   winrepo_refspecs
       New in version 2017.7.0.

       Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*']

       When fetching from remote repositories, by default Salt will fetch branches and tags. This
       parameter can be used to override  the  default  and  specify  alternate  refspecs  to  be
       fetched.  This  parameter  works  similarly  to  its  GitFS counterpart, in that it can be
       configured both globally and for individual remotes.

          winrepo_refspecs:
            - '+refs/heads/*:refs/remotes/origin/*'
            - '+refs/tags/*:refs/tags/*'
            - '+refs/pull/*/head:refs/remotes/origin/pr/*'
            - '+refs/pull/*/merge:refs/remotes/origin/merge/*'

   Configure Master on Windows
       The master on Windows requires no additional configuration.  You  can  modify  the  master
       configuration  by  creating/editing the master config file located at c:\salt\conf\master.
       The same configuration options available on Linux are available in  Windows,  as  long  as
       they  apply.  For example, SSH options wouldn’t apply in Windows. The main differences are
       the file paths. If you are familiar with common salt paths, the  following  table  may  be
       useful:

                                  ┌────────────┬───────────────┐
                                  │Linux Paths │ Windows Paths │
                                  ├────────────┼───────────────┤
                                  │/etc/salt   │ c:\salt\conf  │
                                  ├────────────┼───────────────┤
                                  │/           │ c:\salt       │
                                  └────────────┴───────────────┘

       So,  for  example,  the  master  config  file in Linux is /etc/salt/master. In Windows the
       master config file is c:\salt\conf\master. The Linux path /etc/salt  becomes  c:\salt\conf
       in Windows.

   Common File Locations
            ┌──────────────────────────────────┬──────────────────────────────────────────┐
            │Linux Paths                       │ Windows Paths                            │
            ├──────────────────────────────────┼──────────────────────────────────────────┤
            │conf_file: /etc/salt/master       │ conf_file: c:\salt\conf\master           │
            ├──────────────────────────────────┼──────────────────────────────────────────┤
            │log_file: /var/log/salt/master    │ log_file: c:\salt\var\log\salt\master    │
            ├──────────────────────────────────┼──────────────────────────────────────────┤
            │pidfile: /var/run/salt-master.pid │ pidfile: c:\salt\var\run\salt-master.pid │
            └──────────────────────────────────┴──────────────────────────────────────────┘

   Common Directories
            ┌─────────────────────────────────┬─────────────────────────────────────────┐
            │Linux Paths                      │ Windows Paths                           │
            ├─────────────────────────────────┼─────────────────────────────────────────┤
            │cachedir: /var/cache/salt/master │ cachedir: c:\salt\var\cache\salt\master │
            ├─────────────────────────────────┼─────────────────────────────────────────┤
            │extension_modules:               │ extension_modules:                      │
            │/var/cache/salt/master/extmods   │ c:\salt\var\cache\salt\master\extmods   │
            ├─────────────────────────────────┼─────────────────────────────────────────┤
            │pki_dir: /etc/salt/pki/master    │ pki_dir: c:\salt\conf\pki\master        │
            ├─────────────────────────────────┼─────────────────────────────────────────┤
            │root_dir: /                      │ root_dir: c:\salt                       │
            ├─────────────────────────────────┼─────────────────────────────────────────┤
            │sock_dir: /var/run/salt/master   │ sock_dir: c:\salt\var\run\salt\master   │
            └─────────────────────────────────┴─────────────────────────────────────────┘

   Roots
       file_roots

                                  ┌──────────────┬──────────────────────┐
                                  │Linux Paths   │ Windows Paths        │
                                  ├──────────────┼──────────────────────┤
                                  │/srv/salt     │ c:\salt\srv\salt     │
                                  ├──────────────┼──────────────────────┤
                                  │/srv/spm/salt │ c:\salt\srv\spm\salt │
                                  └──────────────┴──────────────────────┘

       pillar_roots

                                ┌────────────────┬────────────────────────┐
                                │Linux Paths     │ Windows Paths          │
                                ├────────────────┼────────────────────────┤
                                │/srv/pillar     │ c:\salt\srv\pillar     │
                                ├────────────────┼────────────────────────┤
                                │/srv/spm/pillar │ c:\salt\srv\spm\pillar │
                                └────────────────┴────────────────────────┘

   Win Repo Settings
        ┌──────────────────────────────────────┬──────────────────────────────────────────────┐
        │Linux Paths                           │ Windows Paths                                │
        ├──────────────────────────────────────┼──────────────────────────────────────────────┤
        │winrepo_dir: /srv/salt/win/repo       │ winrepo_dir: c:\salt\srv\salt\win\repo       │
        ├──────────────────────────────────────┼──────────────────────────────────────────────┤
        │winrepo_dir_ng: /srv/salt/win/repo-ng │ winrepo_dir_ng: c:\salt\srv\salt\win\repo-ng │
        └──────────────────────────────────────┴──────────────────────────────────────────────┘

   Configuring the Salt Minion
       The Salt system is amazingly simple and easy to configure. The two components of the  Salt
       system  each  have  a respective configuration file. The salt-master is configured via the
       master configuration file, and the salt-minion is configured via the minion  configuration
       file.

       SEE ALSO:
          example minion configuration file

       The  Salt  Minion configuration is very simple. Typically, the only value that needs to be
       set is the master value so the minion knows where to locate its master.

       By default,  the  salt-minion  configuration  will  be  in  /etc/salt/minion.   A  notable
       exception is FreeBSD, where the configuration will be in /usr/local/etc/salt/minion.

   Minion Primary Configuration
   master
       Default: salt

       The hostname or IP address of the master. See ipv6 for IPv6 connections to the master.

          master: salt

   master:port Syntax
       New in version 2015.8.0.

       The master config option can also be set to use the master’s IP address or hostname in
       conjunction with a port number:

          master: localhost:1234

       For IPv6 formatting with a port, remember to add brackets around  the  IP  address  before
       adding the port and enclose the line in single quotes to make it a string:

          master: '[2001:db8:85a3:8d3:1319:8a2e:370:7348]:1234'

       NOTE:
          If  a  port  is specified in the master as well as master_port, the master_port setting
          will be overridden by the master configuration.

   List of Masters Syntax
       The option can also be set to a list of masters, enabling multi-master mode.

          master:
            - address1
            - address2

       Changed in version 2014.7.0: The master can be dynamically configured. The master  value
       can  be  set to a module function which will be executed, and the returned value will be
       used as the IP address or hostname of the desired master. If a function  is  specified,
       then the master_type option must be set to func, to tell the minion that the value is a
       function to be run and not a fully-qualified domain name.

          master: module.function
          master_type: func

       In addition, instead of using multi-master mode, the minion can be configured to  use  the
       list  of  master  addresses as a failover list, trying the first address, then the second,
       etc. until the minion successfully connects. To enable this behavior, set  master_type  to
       failover:

          master:
            - address1
            - address2
          master_type: failover

   ipv6
       Default: None

       Whether the master should be connected over IPv6. By default, the salt minion will  try
       to automatically detect IPv6 connectivity to the master.

          ipv6: True

   master_uri_format
       New in version 2015.8.0.

       Specify the format in which the master  address  will  be  evaluated.  Valid  options  are
       default  or  ip_only.  If  ip_only is specified, then the master address will not be split
       into IP and PORT, so be sure that only an IP  (or  domain  name)  is  set  in  the  master
       configuration setting.

          master_uri_format: ip_only

   master_tops_first
       New in version 2018.3.0.

       Default: False

       SLS  targets  defined using the Master Tops system are normally executed after any matches
       defined in the Top File. Set this option to True to have the  minion  execute  the  Master
       Tops states first.

          master_tops_first: True

   master_type
       New in version 2014.7.0.

       Default: str

       The type of the master variable. Can be str, failover, func or disable.

          master_type: failover

       If this option is set to failover, master must be a list of master addresses. The minion
       will  then  try  each  master  in  the  order specified in the list until it successfully
       connects.  master_alive_interval must also be set; it determines how often  the  minion
       will verify the presence of the master.

          master_type: func

       If the master needs to be dynamically assigned by executing a function instead of reading
       in the static master value, set this to func. This can be used to  manage  the  minion’s
       master  setting  from an execution module. After changing the module to return a new
       master IP address or FQDN, restart the minion and it will connect to the new master.

       As  of  version  2016.11.0  this  option  can  be set to disable and the minion will never
       attempt to talk to the master. This is useful for running a masterless minion daemon.

          master_type: disable

   max_event_size
       New in version 2014.7.0.

       Default: 1048576

       Passing very large events can cause the minion to consume large amounts  of  memory.  This
       value  tunes the maximum size of a message allowed onto the minion event bus. The value is
       expressed in bytes.

          max_event_size: 1048576

   master_failback
       New in version 2016.3.0.

       Default: False

       If the minion is in multi-master mode and the master_type  configuration  option  is  set
       to  failover,  this  setting can be set to True to force the minion to fail back to the
       first master in the list when it comes back online.

          master_failback: False

   master_failback_interval
       New in version 2016.3.0.

       Default: 0

       If the minion is in multi-master mode, the master_type configuration option  is  set  to
       failover, and the master_failback option is enabled, this interval, in seconds, controls
       how often the minion pings the first master in the list to check whether it can fail back.

          master_failback_interval: 0

   master_alive_interval
       Default: 0

       Configures how often, in seconds, the minion will verify that the current master is  alive
       and  responding.   The minion will try to establish a connection to the next master in the
       list if it finds the existing one is dead.

          master_alive_interval: 30

   master_shuffle
       New in version 2014.7.0.

       Default: False

       If master is a list of addresses and master_type is failover, shuffle them before trying
       to connect in order to distribute the minions over all available  masters.  This  uses
       Python’s random.shuffle method.

          master_shuffle: True

   random_master
       Default: False

       If master is a list of addresses and master_type is set to failover, shuffle them before
       trying to connect in order to distribute the minions over all  available  masters.  This
       uses Python’s random.shuffle method.

          random_master: True

   retry_dns
       Default: 30

       Set the number of seconds to wait before attempting to resolve the master hostname if name
       resolution fails. Defaults to 30 seconds. Set to zero if the minion should shut down  and
       not retry.

          retry_dns: 30

   retry_dns_count
       New in version 2018.3.4.

       Default: None

       Set the number of  attempts  to  perform  when  resolving  the  master  hostname  if  name
       resolution fails.  By default the minion will retry indefinitely.

          retry_dns_count: 3

   master_port
       Default: 4506

       The  port of the master ret server, this needs to coincide with the ret_port option on the
       Salt master.

          master_port: 4506

   publish_port
       Default: 4505

       The port of the master publish server, this needs to coincide with the publish_port option
       on the Salt master.

          publish_port: 4505

   source_interface_name
       New in version 2018.3.0.

       The name of the interface to use when establishing the connection to the Master.

       NOTE:
          If  multiple  IP addresses are configured on the named interface, the first one will be
          selected. In that case, for a  better  selection,  consider  using  the  source_address
          option.

       NOTE:
          To  use an IPv6 address from the named interface, make sure the option ipv6 is enabled,
          i.e., ipv6: true.

       NOTE:
          If the interface is down, Salt will avoid using it, and the Minion will bind to 0.0.0.0
          (all interfaces).

       WARNING:
          This option requires a modern version of the underlying libraries used by the selected
          transport:

          · zeromq requires pyzmq >= 16.0.1 and libzmq >= 4.1.6

          · tcp requires tornado >= 4.5

       Configuration example:

          source_interface_name: bond0.1234

   source_address
       New in version 2018.3.0.

       The source IP address or the domain name to be used when  connecting  the  Minion  to  the
       Master.  See ipv6 for IPv6 connections to the Master.

       WARNING:
          This option requires a modern version of the underlying libraries used by the selected
          transport:

          · zeromq requires pyzmq >= 16.0.1 and libzmq >= 4.1.6

          · tcp requires tornado >= 4.5

       Configuration example:

          source_address: if-bond0-1234.sjc.us-west.internal

   source_ret_port
       New in version 2018.3.0.

       The source port to be used when connecting the Minion to the Master ret server.

       WARNING:
          This option requires a modern version of the underlying libraries used by the selected
          transport:

          · zeromq requires pyzmq >= 16.0.1 and libzmq >= 4.1.6

          · tcp requires tornado >= 4.5

       Configuration example:

          source_ret_port: 49017

   source_publish_port
       New in version 2018.3.0.

       The source port to be used when connecting the Minion to the Master publish server.

       WARNING:
          This option requires a modern version of the underlying libraries used by the selected
          transport:

          · zeromq requires pyzmq >= 16.0.1 and libzmq >= 4.1.6

          · tcp requires tornado >= 4.5

       Configuration example:

          source_publish_port: 49018

   user
       Default: root

       The user to run the Salt processes

          user: root

   sudo_user
       Default: ''

       The user to run salt remote execution commands as via sudo. If this option is  enabled,
       sudo  will  be used to change the active user executing the remote command. The user that
       the salt minion is configured to run as will need to be granted access  via  the  sudoers
       file  for the target user; the most common choice is root. If this option is set, the user
       option should also be set to a non-root user. If migrating from a root minion to  a  non-
       root minion, the minion cache should be cleared and ownership of the minion pki directory
       changed to the new user.

          sudo_user: root

   pidfile
       Default: /var/run/salt-minion.pid

       The location of the daemon’s process ID file

          pidfile: /var/run/salt-minion.pid

   root_dir
       Default: /

       This directory is  prepended  to  the  following  options:  pki_dir,  cachedir,  log_file,
       sock_dir, and pidfile.

          root_dir: /

   conf_file
       Default: /etc/salt/minion

       The path to the minion’s configuration file.

          conf_file: /etc/salt/minion

   pki_dir
       Default: /etc/salt/pki/minion

       The directory used to store the minion’s public and private keys.

          pki_dir: /etc/salt/pki/minion

   id
       Default: the system’s hostname

       SEE ALSO:
          Salt Walkthrough

          The  Setting up a Salt Minion section contains detailed information on how the hostname
          is determined.

       Explicitly declare the id for this minion to use. Since  Salt  uses  detached  ids  it  is
       possible to run multiple minions on the same machine but with different ids.

          id: foo.bar.com

   minion_id_caching
       New in version 0.17.2.

       Default: True

       Caches  the  minion  id  to  a  file when the minion’s id is not statically defined in the
       minion  config.  This  setting  prevents  potential  problems  when  automatic  minion  id
       resolution changes, which can cause the minion to lose connection with the master. To turn
       off minion id caching, set this config to False.

       For more information, please see Issue #7558 and Pull Request #8488.

          minion_id_caching: True

   append_domain
       Default: None

       Append a domain to a hostname in the event that it does not  exist.  This  is  useful  for
       systems where socket.getfqdn() does not actually result in a FQDN (for instance, Solaris).

          append_domain: foo.org

   minion_id_lowercase
       Default: False

       Convert the minion id to lowercase when it is generated. Helpful when some hosts get  the
       minion id in uppercase. Cached ids will remain the same and will not be converted.

          minion_id_lowercase: True

   cachedir
       Default: /var/cache/salt/minion

       The location for minion cache data.

       This directory may contain sensitive data and should be protected accordingly.

          cachedir: /var/cache/salt/minion

   color_theme
       Default: ""

       Specifies a path to the color theme to use for colored command line output.

          color_theme: /etc/salt/color_theme

   append_minionid_config_dirs
       Default: [] (the empty list) for regular minions, ['cachedir'] for proxy minions.

       Append minion_id to these configuration directories.   Helps  with  multiple  proxies  and
       minions  running  on  the  same  machine. Allowed elements in the list: pki_dir, cachedir,
       extension_modules.  Normally not needed unless running several proxies and/or  minions  on
       the same machine.

          append_minionid_config_dirs:
            - pki_dir
            - cachedir

   verify_env
       Default: True

       Verify and set permissions on configuration directories at startup.

          verify_env: True

       NOTE:
          When  set  to  True  the  verify_env  option requires WRITE access to the configuration
          directory (/etc/salt/). In certain situations such as mounting /etc/salt/ as  read-only
          for templating this will create a stack trace when state.apply is called.

   cache_jobs
       Default: False

       The minion can locally cache the return data from jobs sent to it; this can be a good way
       to keep track of the minion side of the jobs the minion has  executed.  By  default  this
       feature is disabled; to enable it, set cache_jobs to True.

          cache_jobs: False

   grains
       Default: (empty)

       SEE ALSO:
          static-custom-grains

       Statically assigns grains to the minion.

          grains:
            roles:
              - webserver
              - memcache
            deployment: datacenter4
            cabinet: 13
            cab_u: 14-15

   grains_cache
       Default: False

       The minion can locally cache grain data instead of refreshing the data each time the grain
       is referenced. By default this feature is disabled; to enable it, set grains_cache to True.

          grains_cache: False

   grains_deep_merge
       New in version 2016.3.0.

       Default: False

       The grains can be merged, instead of overridden, using this option.  This  allows  custom
       grains  to  define  different subvalues of a dictionary grain. By default this feature is
       disabled; to enable it, set grains_deep_merge to True.

          grains_deep_merge: False

       For example, with these custom grains functions:

          def custom1_k1():
              return {'custom1': {'k1': 'v1'}}

          def custom1_k2():
              return {'custom1': {'k2': 'v2'}}

       Without grains_deep_merge, the result would be:

          custom1:
            k1: v1

       With grains_deep_merge, the result will be:

          custom1:
            k1: v1
            k2: v2
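
       The merged result above amounts to a recursive dictionary merge. The following sketch
       illustrates the semantics; it is not Salt’s actual implementation.

```python
def deep_merge(dest, src):
    # Recursively merge src into dest: nested dicts are combined
    # rather than replaced, mirroring grains_deep_merge behavior.
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dest.get(key), dict):
            deep_merge(dest[key], value)
        else:
            dest[key] = value
    return dest

grains = deep_merge({'custom1': {'k1': 'v1'}},
                    {'custom1': {'k2': 'v2'}})
# Both subkeys survive: {'custom1': {'k1': 'v1', 'k2': 'v2'}}
```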

   grains_refresh_every
       Default: 0

       The grains_refresh_every setting allows for a minion to periodically check its  grains  to
       see  if  they  have  changed  and,  if  so,  to  inform the master of the new grains. This
       operation is moderately expensive, therefore care should be taken not to  set  this  value
       too low.

       Note: This value is expressed in minutes.

       A value of 10 minutes is a reasonable default.

          grains_refresh_every: 0

   fibre_channel_grains
       Default: False

       The fibre_channel_grains setting will enable the fc_wwn grain for Fibre Channel WWNs  on
       the minion. Since this grain is expensive, it is disabled by default.

          fibre_channel_grains: True

   iscsi_grains
       Default: False

       The iscsi_grains setting will enable the iscsi_iqn grain on the minion. Since  this  grain
       is expensive, it is disabled by default.

          iscsi_grains: True

   mine_enabled
       New in version 2015.8.10.

       Default: True

       Determines  whether  or not the salt minion should run scheduled mine updates.  If this is
       set to False then the mine update function will not get added to  the  scheduler  for  the
       minion.

          mine_enabled: True

   mine_return_job
       New in version 2015.8.10.

       Default: False

       Determines whether or not scheduled mine updates should be accompanied by a job return for
       the job cache.

          mine_return_job: False

   mine_functions
       Default: Empty

       Designate which functions should be executed at mine_interval intervals  on  each  minion.
       See  this  documentation on the Salt Mine for more information.  Note these can be defined
       in the pillar for a minion as well.

          mine_functions:
            test.ping: []
            network.ip_addrs:
              interface: eth0
              cidr: '10.0.0.0/8'

   mine_interval
       Default: 60

       The number of minutes between mine updates.

          mine_interval: 60

   sock_dir
       Default: /var/run/salt/minion

       The directory where Unix sockets will be kept.

          sock_dir: /var/run/salt/minion

   enable_gpu_grains
       Default: True

       Enable GPU hardware data for your master. Be aware that the minion can  take  a  while  to
       start  up  when  lspci  and/or dmidecode is used to populate the grains for the minion, so
       this can be set to False if you do not need these grains.

          enable_gpu_grains: False

   outputter_dirs
       Default: []

       A list of additional directories to search for salt outputters in.

          outputter_dirs: []

   backup_mode
       Default: ''

       Make backups of files replaced by the file.managed and file.recurse state modules  under
       cachedir  in  the  file_backup subdirectory, preserving original paths. Refer to the File
       State Backups documentation for more details.

          backup_mode: minion

   acceptance_wait_time
       Default: 10

       The number of seconds to wait until attempting to re-authenticate with the master.

          acceptance_wait_time: 10

   acceptance_wait_time_max
       Default: 0

       The maximum number of seconds to wait until attempting to re-authenticate with the master.
       If set, the wait will increase by acceptance_wait_time seconds each iteration.

          acceptance_wait_time_max: 0

   rejected_retry
       Default: False

       If  the  master  rejects the minion’s public key, retry instead of exiting.  Rejected keys
       will be handled the same as waiting on acceptance.

          rejected_retry: False

   random_reauth_delay
       Default: 10

       When the master key changes, the minion will try to re-authenticate itself to receive the
       new master key. In larger environments this can cause a SYN flood on the  master  because
       all  minions  try to re-authenticate immediately. To prevent this and have a minion wait
       for a random amount of time, use this optional parameter. The wait time will be a  random
       number of seconds between 0 and the defined value.

          random_reauth_delay: 60

   master_tries
       New in version 2016.3.0.

       Default: 1

       The number of attempts to connect to a master  before  giving  up.  Set  this  to  -1  for
       unlimited  attempts. This allows for a master to have downtime and the minion to reconnect
       to it later when it comes back up. In ‘failover’ mode, which is  set  in  the  master_type
       configuration, this value is the number of attempts for each set of masters. In this mode,
       it will cycle through the list of masters for each attempt.

       master_tries is different from auth_tries because auth_tries retries authentication with
       a  single  master;  it  assumes  that  you can connect to the master but not gain
       authorization from it.  master_tries will still cycle through all of the masters in a
       given try, so it is appropriate if you expect occasional downtime from the master(s).

          master_tries: 1

   auth_tries
       New in version 2014.7.0.

       Default: 7

       The number of attempts to authenticate to a master before giving up. Or, more technically,
       the  number  of  consecutive  SaltReqTimeoutErrors  that  are  acceptable  when  trying to
       authenticate to the master.

          auth_tries: 7

   auth_timeout
       New in version 2014.7.0.

       Default: 60

       When waiting for a master to accept  the  minion’s  public  key,  salt  will  continuously
       attempt  to  reconnect  until  successful. This is the timeout value, in seconds, for each
       individual  attempt.  After   this   timeout   expires,   the   minion   will   wait   for
       acceptance_wait_time  seconds  before trying again.  Unless your master is under unusually
       heavy load, this should be left at the default.

          auth_timeout: 60

   auth_safemode
       New in version 2014.7.0.

       Default: False

       If authentication fails due to SaltReqTimeoutError during a ping_interval,  this  setting,
       when set to True, will cause a sub-minion process to restart.

          auth_safemode: False

   ping_interval
       Default: 0

       Instructs  the minion to ping its master(s) every n number of minutes. Used primarily as a
       mitigation technique against minion disconnects.

          ping_interval: 0

   random_startup_delay
       Default: 0

       The maximum bound for an interval in which a minion will randomly sleep upon  starting  up
       prior  to attempting to connect to a master. This can be used to splay connection attempts
       for cases where many minions starting up at once may place undue load on a master.

       For example, setting this to 5 will tell a minion to sleep for a value  between  0  and  5
       seconds.

          random_startup_delay: 5

   recon_default
       Default: 1000

       The interval in milliseconds that the socket should wait before trying to reconnect to the
       master (1000ms = 1 second).

          recon_default: 1000

   recon_max
       Default: 10000

       The maximum time a socket should wait. In each interval, the time to wait is calculated by
       doubling the previous time. If recon_max is reached, it starts again at recon_default.

       Short example:

              · reconnect 1: the socket will wait ‘recon_default’ milliseconds

              · reconnect 2: ‘recon_default’ * 2

              · reconnect 3: (‘recon_default’ * 2) * 2

              · reconnect 4: value from previous interval * 2

              · reconnect 5: value from previous interval * 2

              · reconnect x: if value >= recon_max, it starts again with recon_default

          recon_max: 10000
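
       The schedule above can be sketched as follows. This is an illustration of the doubling
       behavior described here, not Salt’s actual reconnect code.

```python
def reconnect_waits(recon_default=1000, recon_max=10000, attempts=6):
    # Each wait doubles the previous one; once the next wait would
    # reach recon_max, the schedule restarts at recon_default.
    waits = []
    wait = recon_default
    for _ in range(attempts):
        waits.append(wait)
        wait *= 2
        if wait >= recon_max:
            wait = recon_default
    return waits

print(reconnect_waits())  # [1000, 2000, 4000, 8000, 1000, 2000]
```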

   recon_randomize
       Default: True

       Generate a random wait time on minion start. The wait time will be a random value between
       recon_default and recon_default + recon_max. Having all minions reconnect with  the  same
       recon_default  and  recon_max  values  defeats  the purpose of being able to change these
       settings: if all minions have the same values and the setup is quite large (several
       thousand minions), they will still flood the master. The desired behavior is  to  have  a
       time frame within which all minions try to reconnect.

          recon_randomize: True

   loop_interval
       Default: 1

       The loop_interval sets how long in seconds the minion will  wait  between  evaluating  the
       scheduler and running cleanup tasks. This defaults to 1 second on the minion scheduler.

          loop_interval: 1

   pub_ret
       Default: True

       Some installations choose to store all job returns in a cache or  a  returner  and  forgo
       sending the results back to a master. In this workflow, jobs are most often executed with
       --async from the Salt CLI, and results are then evaluated by examining job caches on the
       minions or any configured returners.  WARNING: Setting this to False will disable returns
       back to the master.

          pub_ret: True

   return_retry_timer
       Default: 5

       The default timeout for a minion return attempt.

          return_retry_timer: 5

   return_retry_timer_max
       Default: 10

       The maximum timeout for a minion return attempt. If non-zero, the  minion  return  retry
       timeout will be a random integer between return_retry_timer and return_retry_timer_max.

          return_retry_timer_max: 10

   cache_sreqs
       Default: True

       The connection to the master ret_port is kept open. When set to False, the minion  creates
       a new connection for every return to the master.

          cache_sreqs: True

   ipc_mode
       Default: ipc

       Windows platforms lack POSIX IPC and must rely on slower TCP-based inter-process
       communications. Set ipc_mode to tcp on such systems.

          ipc_mode: ipc

   tcp_pub_port
       Default: 4510

       Publish port used when ipc_mode is set to tcp.

          tcp_pub_port: 4510

   tcp_pull_port
       Default: 4511

       Pull port used when ipc_mode is set to tcp.

          tcp_pull_port: 4511

   transport
       Default: zeromq

       Changes the  underlying  transport  layer.  ZeroMQ  is  the  recommended  transport  while
       additional  transport  layers  are  under  development.  Supported values are zeromq, raet
       (experimental),  and  tcp  (experimental).  This  setting  has  a  significant  impact  on
       performance and should not be changed unless you know what you are doing!

          transport: zeromq

   syndic_finger
       Default: ''

       The  key  fingerprint of the higher-level master for the syndic to verify it is talking to
       the intended master.

          syndic_finger: 'ab:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:50:10'

   proxy_host
       Default: ''

       The hostname used for HTTP proxy access.

          proxy_host: proxy.my-domain

   proxy_port
       Default: 0

       The port number used for HTTP proxy access.

          proxy_port: 31337

   proxy_username
       Default: ''

       The username used for HTTP proxy access.

          proxy_username: charon

   proxy_password
       Default: ''

       The password used for HTTP proxy access.

          proxy_password: obolus

   Docker Configuration
   docker.update_mine
       New in version 2017.7.8, 2018.3.3.

       Changed in version Fluorine: The default value is now False

       Default: True

       If enabled, when containers are added, removed, stopped, started, etc., the mine  will  be
       updated  with  the results of docker.ps verbose=True all=True host=True. This mine data is
       used by mine.get_docker. Set this option to False to keep Salt from updating the mine with
       this information.

       NOTE:
          This option can also be set in Grains or Pillar data, with Grains overriding Pillar and
          the minion config file overriding Grains.

       NOTE:
          Disabling this will of course keep mine.get_docker from returning any information for a
          given minion.

          docker.update_mine: False

   docker.compare_container_networks
       New in version 2018.3.0.

       Default:   {'static':   ['Aliases',  'Links',  'IPAMConfig'],  'automatic':  ['IPAddress',
       'Gateway', 'GlobalIPv6Address', 'IPv6Gateway']}

       Specifies which keys are examined by docker.compare_container_networks.

       NOTE:
          This should not need to be modified unless new features added to Docker result  in  new
          keys  added  to  the  network  configuration which must be compared to determine if two
          containers have different network configs.  This config option exists solely as  a  way
          to  allow  users to continue using Salt to manage their containers after an API change,
          without waiting for a new Salt release to catch up to the changes in the Docker API.

          docker.compare_container_networks:
            static:
              - Aliases
              - Links
              - IPAMConfig
            automatic:
              - IPAddress
              - Gateway
              - GlobalIPv6Address
              - IPv6Gateway

   optimization_order
       Default: [0, 1, 2]

       In cases where Salt is distributed without .py files, this option determines the  priority
       of optimization level(s) Salt’s module loader should prefer.

       NOTE:
          This option is only supported on Python 3.5+.

          optimization_order:
            - 2
            - 0
            - 1

   Minion Execution Module Management
   disable_modules
       Default: [] (all execution modules are enabled by default)

       There may be cases in which an administrator wants to prevent a minion from executing a
       certain module.

       However, the sys module is built into the minion and cannot be disabled.

       This setting can also tune the minion. Because all modules are loaded into system  memory,
       disabling modules will lower the minion’s memory footprint.

       Modules  should  be  specified according to their file name on the system and not by their
       virtual name. For example, to disable cmd, use the  string  cmdmod  which  corresponds  to
       salt.modules.cmdmod.

          disable_modules:
            - test
            - solr

   disable_returners
       Default: [] (all returners are enabled by default)

       If certain returners should be disabled, this is the place to specify them.

          disable_returners:
            - mongo_return

   whitelist_modules
       Default:  []  (Module whitelisting is disabled.  Adding anything to the config option will
       cause only the listed modules to be enabled.  Modules not in the list will not be loaded.)

       This option is the reverse of disable_modules. If enabled, only execution modules in  this
       list will be loaded and executed on the minion.

       Note  that  this  is  a very large hammer and it can be quite difficult to keep the minion
       working the way you think it should since Salt uses many modules internally itself.  At  a
       bare minimum you need the following enabled or else the minion won’t start.

          whitelist_modules:
            - cmdmod
            - test
            - config

   module_dirs
       Default: []

       A list of extra directories to search for Salt modules

          module_dirs:
            - /var/lib/salt/modules

   returner_dirs
       Default: []

       A list of extra directories to search for Salt returners

          returner_dirs:
            - /var/lib/salt/returners

   states_dirs
       Default: []

       A list of extra directories to search for Salt states

          states_dirs:
            - /var/lib/salt/states

   grains_dirs
       Default: []

       A list of extra directories to search for Salt grains

          grains_dirs:
            - /var/lib/salt/grains

   render_dirs
       Default: []

       A list of extra directories to search for Salt renderers

          render_dirs:
            - /var/lib/salt/renderers

   utils_dirs
       Default: []

       A list of extra directories to search for Salt utilities

          utils_dirs:
            - /var/lib/salt/utils

   cython_enable
       Default: False

       Set this value to true to enable auto-loading and compiling of .pyx modules. This setting
       requires that gcc and Cython are installed on the minion.

          cython_enable: False

   enable_zip_modules
       New in version 2015.8.0.

       Default: False

       Set this value to true to enable loading of  zip  archives  as  extension  modules.   This
       allows for packing module code with specific dependencies to avoid conflicts and/or having
       to install specific modules’ dependencies in system libraries.

          enable_zip_modules: False

   providers
       Default: (empty)

       A module provider can be statically  overwritten  or  extended  for  the  minion  via  the
       providers option. This can be done on an individual basis in an SLS file, or globally here
       in the minion config, like below.

          providers:
            service: systemd

   modules_max_memory
       Default: -1

       Specify a max size (in bytes) for modules  on  import.  This  feature  is  currently  only
       supported on *NIX operating systems and requires psutil.

          modules_max_memory: -1

   extmod_whitelist/extmod_blacklist
       New in version 2017.7.0.

       By  using  this dictionary, the modules that are synced to the minion’s extmod cache using
       saltutil.sync_* can be limited.  If nothing is set to a specific type,  then  all  modules
       are accepted.  To block all modules of a specific type, whitelist an empty list.

          extmod_whitelist:
            modules:
              - custom_module
            engines:
              - custom_engine
            pillars: []

          extmod_blacklist:
            modules:
              - specific_module

       Valid options:

          · beacons

          · clouds

          · sdb

          · modules

          · states

          · grains

          · renderers

          · returners

          · proxy

          · engines

          · output

          · utils

          · pillar

   Top File Settings
       These parameters only have an effect if running a masterless minion.

   state_top
       Default: top.sls

       The  state  system  uses a “top” file to tell the minions what environment to use and what
       modules to use.  The  state_top  file  is  defined  relative  to  the  root  of  the  base
       environment.

          state_top: top.sls

   state_top_saltenv
       This  option  has  no default value. Set it to an environment name to ensure that only the
       top file from that environment is considered during a highstate.

       NOTE:
          Using  this  value  does  not  change  the   merging   strategy.   For   instance,   if
          top_file_merging_strategy  is  set  to merge, and state_top_saltenv is set to foo, then
          any sections for environments other than foo in the top file for  the  foo  environment
          will  be  ignored. With state_top_saltenv set to base, all states from all environments
          in the base top file will be applied, while all other top files are ignored.  The  only
          way  to  set  state_top_saltenv  to  something  other  than base and not have the other
          environments   in   the   targeted   top   file    ignored,    would    be    to    set
          top_file_merging_strategy to merge_all.

          state_top_saltenv: dev

   top_file_merging_strategy
       Changed in version 2016.11.0: A merge_all strategy has been added.

       Default: merge

       When  no  specific  fileserver  environment  (a.k.a.  saltenv)  has  been  specified for a
       highstate, all environments’ top files are inspected. This config  option  determines  how
       the SLS targets in those top files are handled.

       When  set  to  merge,  the base environment’s top file is evaluated first, followed by the
       other environments’ top files.  The  first  target  expression  (e.g.  '*')  for  a  given
       environment  is  kept, and when the same target expression is used in a different top file
       evaluated later, it is ignored.  Because base is evaluated first, it is authoritative. For
       example,  if  there  is  a target for '*' for the foo environment in both the base and foo
       environment’s top files, the one in the foo environment would be ignored. The environments
       will be evaluated in no specific order (aside from base coming first). For greater control
       over the order in which the environments are evaluated, use env_order.  Note  that,  aside
       from the base environment’s top file, any sections in top files that do not match that top
       file’s environment will be ignored. So, for example, a  section  for  the  qa  environment
       would  be  ignored if it appears in the dev environment’s top file. To keep use cases like
       this from being ignored, use the merge_all strategy.

       When set to same,  then  for  each  environment,  only  that  environment’s  top  file  is
       processed, with the others being ignored. For example, only the dev environment’s top file
       will be processed for the dev environment, and any SLS targets defined for dev in the base
       environment’s  (or  any  other  environment’s) top file will be ignored. If an environment
       does not have a top file, then the top file from the default_top config parameter will  be
       used as a fallback.

       When  set  to  merge_all,  then  all  states  in all environments in all top files will be
       applied. The order in which individual SLS files will be executed will depend on the order
       in  which  the  top  files  were  evaluated,  and the environments will be evaluated in no
       specific order. For  greater  control  over  the  order  in  which  the  environments  are
       evaluated, use env_order.

          top_file_merging_strategy: same
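       To make the merge behavior above concrete, consider these two hypothetical top files
       (the foo environment and the SLS names are illustrative, not part of any default setup):

          # base environment's top file
          base:
            '*':
              - core
          foo:
            '*':
              - common

          # foo environment's top file -- its '*' section for foo is ignored
          # under merge, because the base top file already defined '*' for foo
          foo:
            '*':
              - webserver

       With top_file_merging_strategy set to merge, a highstate applies core and common, and
       webserver is never applied. With merge_all, all three would be applied.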

   env_order
       Default: []

       When  top_file_merging_strategy  is  set  to  merge, and no environment is specified for a
       highstate, this config option allows for the order in which top files are evaluated to  be
       explicitly defined.

          env_order:
            - base
            - dev
            - qa

   default_top
       Default: base

       When  top_file_merging_strategy  is  set  to  same,  and no environment is specified for a
       highstate (i.e.  environment is not set for the minion), this config  option  specifies  a
       fallback environment in which to look for a top file if an environment lacks one.

          default_top: dev

   startup_states
       Default: ''

       States to run when the minion daemon starts. To enable, set startup_states to:

       · highstate: Execute state.highstate

       · sls: Read in the sls_list option and execute the named sls files

       · top: Read the top_file option and execute based on that file on the Master

          startup_states: ''

   sls_list
       Default: []

       List of states to run when the minion starts up if startup_states is set to sls.

          sls_list:
            - edit.vim
            - hyper

   top_file
       Default: ''

       Top file to execute if startup_states is set to top.

          top_file: ''

   State Management Settings
   renderer
       Default: yaml_jinja

       The default renderer used for local state executions

          renderer: yaml_jinja

   test
       Default: False

       Set all state calls to only report what changes would be made, rather than actually
       making them.

          test: False

   state_verbose
       Default: True

       Controls the verbosity of state runs. By default, the results of all states are  returned,
       but  setting  this  value  to False will cause salt to only display output for states that
       failed or states that have changes.

          state_verbose: True

   state_output
       Default: full

       The state_output setting controls which results will be output in full multi-line form:

       · full, terse - each state will be full/terse

       · mixed - only states with errors will be full

       · changes - states with changes and errors will be full

       full_id, mixed_id, changes_id and terse_id are also allowed; when set, the state ID will
       be used as the name in the output.

          state_output: full

   state_output_diff
       Default: False

       The  state_output_diff setting changes whether or not the output from successful states is
       returned. Useful when even the terse output of these states is cluttering the logs. Set it
       to True to ignore them.

          state_output_diff: False

   autoload_dynamic_modules
       Default: True

       autoload_dynamic_modules  turns  on automatic loading of modules found in the environments
       on the master. This is turned on by default. To turn off auto-loading modules when  states
       run, set this value to False.

          autoload_dynamic_modules: True

   clean_dynamic_modules
       Default: True

       clean_dynamic_modules  keeps  the  dynamic  modules on the minion in sync with the dynamic
       modules on the master. This means that if a dynamic module is not on the master it will be
       deleted  from  the minion. By default this is enabled and can be disabled by changing this
       value to False.

          clean_dynamic_modules: True

       NOTE:
          If extmod_whitelist is specified, modules  which  are  not  whitelisted  will  also  be
          cleaned here.

   saltenv
       Changed  in version 2018.3.0: Renamed from environment to saltenv. If environment is used,
       saltenv will take its value. If both are used, environment will  be  ignored  and  saltenv
       will be used.

       Normally  the  minion is not isolated to any single environment on the master when running
       states, but the environment can be isolated on the minion side by statically  setting  it.
       Remember that the recommended way to manage environments is to isolate via the top file.

          saltenv: dev

   lock_saltenv
       New in version 2018.3.0.

       Default: False

       For  purposes  of  running  states,  this  option  prevents  using the saltenv argument to
       manually set the environment. This is useful to keep a minion which has the saltenv option
       set to dev from running states from an environment other than dev.

          lock_saltenv: True

   snapper_states
       Default: False

       The  snapper_states value is used to enable taking snapper snapshots before and after salt
       state runs. This allows for state runs to be rolled back.

       For snapper states to function properly snapper needs to be installed and enabled.

          snapper_states: True

   snapper_states_config
       Default: root

       Snapper can execute based on a snapper configuration. The configuration needs to be set up
       before snapper can use it. The default configuration is root; this default makes snapper
       run on SUSE systems using the default configuration set up at install time.

          snapper_states_config: root

   File Directory Settings
   file_client
       Default: remote

       The client defaults to looking on the master server for files, but can be directed to look
       on the minion by setting this parameter to local.

          file_client: remote

   use_master_when_local
       Default: False

       When using a local file_client, this parameter is used to allow the client to connect to a
       master for remote execution.

          use_master_when_local: False

   file_roots
       Default:

          base:
            - /srv/salt

       When using a  local  file_client,  this  parameter  is  used  to  setup  the  fileserver’s
       environments.  This  parameter  operates identically to the master config parameter of the
       same name.

          file_roots:
            base:
              - /srv/salt
            dev:
              - /srv/salt/dev/services
              - /srv/salt/dev/states
            prod:
              - /srv/salt/prod/services
              - /srv/salt/prod/states

   fileserver_followsymlinks
       New in version 2014.1.0.

       Default: True

       By default, the file_server follows symlinks when walking the filesystem tree.   Currently
       this only applies to the default roots fileserver_backend.

          fileserver_followsymlinks: True

   fileserver_ignoresymlinks
       New in version 2014.1.0.

       Default: False

       If  you  do  not  want  symlinks  to  be  treated  as  the files they are pointing to, set
       fileserver_ignoresymlinks to True. By default this is set to False. When set to True,  any
       detected symlink while listing files on the Master will not be returned to the Minion.

          fileserver_ignoresymlinks: False

   fileserver_limit_traversal
       New in version 2014.1.0.

       Default: False

       By default, the Salt fileserver recurses fully into all defined environments to attempt to
       find files. To limit this behavior so that the fileserver only traverses directories  with
       SLS  files  and  special Salt directories like _modules, set fileserver_limit_traversal to
       True. This might be useful for installations where a file root has a very large number  of
       files and performance is impacted.

          fileserver_limit_traversal: False

   hash_type
       Default: sha256

       The  hash_type  is  the  hash  to  use  when  discovering  the hash of a file on the local
       fileserver. The default is sha256, but md5, sha1, sha224,  sha384,  and  sha512  are  also
       supported.

          hash_type: sha256

   Pillar Configuration
   pillar_roots
       Default:

          base:
            - /srv/pillar

       When using a local file_client, this parameter is used to setup the pillar environments.

          pillar_roots:
            base:
              - /srv/pillar
            dev:
              - /srv/pillar/dev
            prod:
              - /srv/pillar/prod

   on_demand_ext_pillar
       New in version 2016.3.6, 2016.11.3, 2017.7.0.

       Default: ['libvirt', 'virtkey']

       When  using a local file_client, this option controls which external pillars are permitted
       to be used on-demand using pillar.ext.

          on_demand_ext_pillar:
            - libvirt
            - virtkey
            - git

       WARNING:
          This will allow a masterless minion to request specific pillar data via pillar.ext, and
          may  be considered a security risk. However, pillar data generated in this way will not
          affect the in-memory pillar data, so  this  risk  is  limited  to  instances  in  which
          states/modules/etc. (built-in or custom) rely upon pillar data generated by pillar.ext.

   decrypt_pillar
       New in version 2017.7.0.

       Default: []

       A list of paths to be recursively decrypted during pillar compilation.

          decrypt_pillar:
            - 'foo:bar': gpg
            - 'lorem:ipsum:dolor'

       Entries  in  this list can be formatted either as a simple string, or as a key/value pair,
       with the key being the pillar location, and the value being the renderer to use for pillar
       decryption.  If  the former is used, the renderer specified by decrypt_pillar_default will
       be used.

   decrypt_pillar_delimiter
       New in version 2017.7.0.

       Default: :

       The delimiter used to distinguish nested data structures in the decrypt_pillar option.

          decrypt_pillar_delimiter: '|'
          decrypt_pillar:
            - 'foo|bar': gpg
            - 'lorem|ipsum|dolor'

   decrypt_pillar_default
       New in version 2017.7.0.

       Default: gpg

       The default renderer used for decryption, if one is not specified for a given  pillar  key
       in decrypt_pillar.

          decrypt_pillar_default: my_custom_renderer

   decrypt_pillar_renderers
       New in version 2017.7.0.

       Default: ['gpg']

       List of renderers which are permitted to be used for pillar decryption.

          decrypt_pillar_renderers:
            - gpg
            - my_custom_renderer

   pillarenv
       Default: None

       Isolates  the  pillar  environment  on  the  minion  side.  This functions the same as the
       environment setting, but for pillar instead of states.

          pillarenv: dev

   pillarenv_from_saltenv
       New in version 2017.7.0.

       Default: False

       When set to True, the pillarenv value will assume the value of the effective saltenv  when
       running  states. This essentially makes salt '*' state.sls mysls saltenv=dev equivalent to
       salt '*' state.sls mysls saltenv=dev pillarenv=dev. If pillarenv is  set,  either  in  the
       minion config file or via the CLI, it will override this option.

          pillarenv_from_saltenv: True

   pillar_raise_on_missing
       New in version 2015.5.0.

       Default: False

       Set this option to True to force a KeyError to be raised whenever an attempt to retrieve a
       named value from pillar fails. When this option  is  set  to  False,  the  failed  attempt
       returns an empty string.

   minion_pillar_cache
       New in version 2016.3.0.

       Default: False

       The minion can locally cache rendered pillar data under cachedir/pillar. This allows a
       temporarily disconnected minion to access previously cached pillar data by invoking
       salt-call with the --local and --pillar-root=<cachedir>/pillar options. Before
       enabling this setting consider that the rendered pillar may contain security sensitive
       data.  Appropriate access restrictions should be in place. By default the saved pillar
       data will be readable only by the user account running salt. By default this feature is
       disabled; to enable it, set minion_pillar_cache to True.

          minion_pillar_cache: False

   file_recv_max_size
       New in version 2014.7.0.

       Default: 100

       Set  a  hard-limit  on the size of the files that can be pushed to the master.  It will be
       interpreted as megabytes.

          file_recv_max_size: 100

   pass_to_ext_pillars
       Specify a list of configuration keys whose values are to  be  passed  to  external  pillar
       functions.

       Suboptions can be specified using the ‘:’ notation (i.e. option:suboption)

       The  values  are  merged  and  included in the extra_minion_data optional parameter of the
       external pillar function.  The extra_minion_data parameter is passed only to the  external
       pillar functions that have it explicitly specified in their definition.

       If the config contains

          opt1: value1
          opt2:
            subopt1: value2
            subopt2: value3

          pass_to_ext_pillars:
            - opt1
            - opt2: subopt1

       the extra_minion_data parameter will be

          {'opt1': 'value1',
           'opt2': {'subopt1': 'value2'}}

   Security Settings
   open_mode
       Default: False

       Open mode can be used to clean out the PKI key received from the Salt master: turn on
       open mode, restart the minion, then turn off open mode and restart the minion again to
       clean the keys.

          open_mode: False

   master_finger
       Default: ''

       Fingerprint  of  the master public key to validate the identity of your Salt master before
       the initial key exchange. The master fingerprint can be  found  by  running  “salt-key  -F
       master” on the Salt master.

          master_finger: 'ba:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:11:13'

   keysize
       Default: 2048

       The size of key that should be generated when creating new keys.

          keysize: 2048

   permissive_pki_access
       Default: False

       Enable  permissive access to the salt keys. This allows you to run the master or minion as
       root, but have a non-root group be given access  to  your  pki_dir.  To  make  the  access
       explicit,  root must belong to the group you’ve given access to. This is potentially quite
       insecure.

          permissive_pki_access: False

   verify_master_pubkey_sign
       Default: False

       Enables verification of the master-public-signature returned by the master in
       auth-replies. Please see the tutorial on how to configure this properly: Multimaster-PKI
       with Failover Tutorial.

       New in version 2014.7.0.

          verify_master_pubkey_sign: True

       If this is set to True, master_sign_pubkey  must  be  also  set  to  True  in  the  master
       configuration file.

   master_sign_key_name
       Default: master_sign

       The  filename  without the .pub suffix of the public key that should be used for verifying
       the signature from the master. The file must be located in the minion’s pki directory.

       New in version 2014.7.0.

          master_sign_key_name: <filename_without_suffix>

   autosign_grains
       New in version 2018.3.0.

       Default: not defined

       The grains that should be sent to the master on authentication to decide if  the  minion’s
       key should be accepted automatically.

       Please see the Autoaccept Minions from Grains documentation for more information.

          autosign_grains:
            - uuid
            - server_id

   always_verify_signature
       Default: False

       If  verify_master_pubkey_sign is enabled, the signature is only verified if the public-key
       of the master changes. If the signature should always be verified,  this  can  be  set  to
       True.

       New in version 2014.7.0.

          always_verify_signature: True

   cmd_blacklist_glob
       Default: []

       If  cmd_blacklist_glob  is  enabled then any shell command called over remote execution or
       via salt-call will be checked against the glob matches  found  in  the  cmd_blacklist_glob
       list and any matched shell command will be blocked.

       NOTE:
          This  blacklist  is  only  applied  to direct executions made by the salt and salt-call
          commands. This does NOT  blacklist  commands  called  from  states  or  shell  commands
          executed from other modules.

       New in version 2016.11.0.

          cmd_blacklist_glob:
            - 'rm * '
            - 'cat /etc/* '

   cmd_whitelist_glob
       Default: []

       If  cmd_whitelist_glob  is  enabled then any shell command called over remote execution or
       via salt-call will be checked against the glob matches  found  in  the  cmd_whitelist_glob
       list and any shell command NOT found in the list will be blocked. If cmd_whitelist_glob is
       NOT SET, then all shell commands are permitted.

       NOTE:
          This whitelist is only applied to direct executions made  by  the  salt  and  salt-call
          commands. This does NOT restrict commands called from states or shell commands executed
          from other modules.

       New in version 2016.11.0.

          cmd_whitelist_glob:
            - 'ls * '
            - 'cat /etc/fstab'

   ssl
       New in version 2016.11.0.

       Default: None

       TLS/SSL connection options. This could be set to a dictionary containing arguments
       corresponding to the Python ssl.wrap_socket method. For details see the Tornado and
       Python documentation.

       Note: to set enum arguments values like  cert_reqs  and  ssl_version  use  constant  names
       without ssl module prefix: CERT_REQUIRED or PROTOCOL_SSLv23.

          ssl:
              keyfile: <path_to_keyfile>
              certfile: <path_to_certfile>
              ssl_version: PROTOCOL_TLSv1_2

   Reactor Settings
   reactor
       Default: []

       Defines a salt reactor. See the Reactor documentation for more information.

          reactor: []

   reactor_refresh_interval
       Default: 60

       The TTL for the cache of the reactor configuration.

          reactor_refresh_interval: 60

   reactor_worker_threads
       Default: 10

       The number of workers for the runner/wheel in the reactor.

          reactor_worker_threads: 10

   reactor_worker_hwm
       Default: 10000

       The queue size for workers in the reactor.

          reactor_worker_hwm: 10000

   Thread Settings
   multiprocessing
       Default: True

       If multiprocessing is enabled, when a minion receives a publication a new process is
       spawned and the command is executed therein. Conversely, if multiprocessing is disabled,
       the new publication will be executed in a thread.

          multiprocessing: True

   process_count_max
       New in version 2018.3.0.

       Default: -1

       Limit the maximum number of processes or threads created by salt-minion. This is useful
       to avoid resource exhaustion in case the minion receives more publications than it is able
       to handle, as it limits the number of spawned processes or threads. -1 is the default and
       disables the limit.

          process_count_max: -1

   Minion Logging Settings
   log_file
       Default: /var/log/salt/minion

       The minion log can be sent to a regular file, local path name, or network  location.   See
       also log_file.

       Examples:

          log_file: /var/log/salt/minion

          log_file: file:///dev/log

          log_file: udp://loghost:10514

   log_level
       Default: warning

       The level of messages to send to the console. See also log_level.

          log_level: warning

   log_level_logfile
       Default: info

       The level of messages to send to the log file. See also log_level_logfile. When it is not
       set explicitly it will inherit the level set by the log_level option.

          log_level_logfile: warning

   log_datefmt
       Default: %H:%M:%S

       The date and time format used in console log messages. See also log_datefmt.

          log_datefmt: '%H:%M:%S'

   log_datefmt_logfile
       Default: %Y-%m-%d %H:%M:%S

       The date and time format used in log file messages. See also log_datefmt_logfile.

          log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

   log_fmt_console
       Default: [%(levelname)-8s] %(message)s

       The format of the console logging messages. See also log_fmt_console.

       NOTE:
          Log colors are enabled in log_fmt_console  rather  than  the  color  config  since  the
          logging system is loaded before the minion config.

          Console log colors are specified by these additional formatters:

          %(colorlevel)s %(colorname)s %(colorprocess)s %(colormsg)s

          Since it is desirable to include the surrounding brackets, ‘[‘ and ‘]’, in the coloring
          of the messages, these color formatters also include padding as well.  Color  LogRecord
          attributes are only available for console logging.

          log_fmt_console: '%(colorlevel)s %(colormsg)s'
          log_fmt_console: '[%(levelname)-8s] %(message)s'

   log_fmt_logfile
       Default: %(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s

       The format of the log file logging messages. See also log_fmt_logfile.

          log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'

   log_granular_levels
       Default: {}

       This   can   be   used   to   control   logging   levels   more   specifically.  See  also
       log_granular_levels.
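
       For example, to keep most Salt logging at warning while debugging a single subsystem
       (the logger names and levels below are illustrative):

```yaml
log_granular_levels:
  'salt': 'warning'
  'salt.modules': 'debug'
```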

   zmq_monitor
       Default: False

       To diagnose issues with minions disconnecting or missing returns, ZeroMQ supports the  use
       of monitor sockets to log connection events. This feature requires ZeroMQ 4.0 or higher.

       To  enable ZeroMQ monitor sockets, set ‘zmq_monitor’ to ‘True’ and log at a debug level or
       higher.

       A sample log event is as follows:

          [DEBUG   ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
          'value': 27, 'description': 'EVENT_DISCONNECTED'}

       All events logged will include the string ZeroMQ  event.  A  connection  event  should  be
       logged  as  the  minion  starts up and initially connects to the master. If not, check for
       debug log level and that the necessary version of ZeroMQ is installed.

   tcp_authentication_retries
       Default: 5

       The number of times to retry authenticating with  the  salt  master  when  it  comes  back
       online.

       ZeroMQ does a lot of work to ensure that connections reauthenticate when they come back
       online. The TCP transport instead tries to connect with a new connection if the old one
       times out while reauthenticating.

       Set to -1 for infinite tries.
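
       The retry semantics can be sketched as follows (the authenticate callable and the
       function name are hypothetical, not Salt's transport API):

```python
def authenticate_with_retries(authenticate, retries=5):
    # Try until authentication succeeds or the retry budget is spent.
    # retries == -1 means keep trying forever.
    attempt = 0
    while True:
        if authenticate():
            return True
        attempt += 1
        if retries != -1 and attempt >= retries:
            return False
```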

   failhard
       Default: False

       Set the global failhard flag. This instructs all states to stop running at the moment a
       single state fails.

          failhard: False

   Include Configuration
       Configuration can be loaded from multiple files. The order in which this is done is:

       1. The minion config file itself

       2. The files matching the glob in default_include

       3. The files matching the glob in include (if defined)

       Each successive step overrides any values defined in the previous steps.   Therefore,  any
       config  options  defined in one of the default_include files would override the same value
       in the minion config file, and any options defined in include would override both.
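
       The precedence can be illustrated with a flat dict merge (a sketch only; Salt's config
       loader also parses YAML and handles nested values):

```python
def merge_config(base_opts, default_include_opts, include_opts):
    # Later sources override earlier ones: the minion config file first,
    # then each file matched by default_include, then each file matched
    # by include.
    merged = dict(base_opts)
    for opts in default_include_opts + include_opts:
        merged.update(opts)
    return merged
```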

   default_include
       Default: minion.d/*.conf

       The minion can include configuration from other files. By default the minion will
       automatically include all config files from minion.d/*.conf, where minion.d is relative to
       the directory of the minion configuration file.

       NOTE:
          Salt creates files in the minion.d directory for its own use. These files are  prefixed
          with an underscore. A common example of this is the _schedule.conf file.

   include
       Default: not defined

       The  minion  can  include  configuration  from other files. To enable this, pass a list of
       paths to this option. The paths can be either relative or absolute; if relative, they  are
       considered  to  be  relative to the directory the main minion configuration file lives in.
       Paths can make use of shell-style globbing. If no files are matched by a  path  passed  to
       this option then the minion will log a warning message.

          # Include files from a minion.d directory in the same
          # directory as the minion config file
          include: minion.d/*.conf

          # Include a single extra file into the configuration
          include: /etc/roles/webserver

          # Include several files and the minion.d directory
          include:
            - extra_config
            - minion.d/*
            - /etc/roles/webserver

   Keepalive Settings
   tcp_keepalive
       Default: True

       The TCP keepalive interval to set on TCP ports. This setting can be used to tune Salt
       connectivity issues in messy network environments with misbehaving firewalls.

          tcp_keepalive: True

   tcp_keepalive_cnt
       Default: -1

       Sets the ZeroMQ TCP keepalive count. May be used to tune issues with minion disconnects.

          tcp_keepalive_cnt: -1

   tcp_keepalive_idle
       Default: 300

       Sets ZeroMQ TCP keepalive idle. May be used to tune issues with minion disconnects.

          tcp_keepalive_idle: 300

   tcp_keepalive_intvl
       Default: -1

       Sets ZeroMQ TCP keepalive interval. May be used to tune issues with minion disconnects.

          tcp_keepalive_intvl: -1

   Frozen Build Update Settings
       These options control how salt.modules.saltutil.update() works with esky frozen apps.  For
       more information look at https://github.com/cloudmatrix/esky/.

   update_url
       Default: False (Update feature is disabled)

       The URL to use when looking for application updates. Esky depends on directory listings
       to search for new versions. A web server running on your master is a good starting point
       for most setups.

          update_url: 'http://salt.example.com/minion-updates'

   update_restart_services
       Default: [] (service restarting on update is disabled)

       A  list  of  services to restart when the minion software is updated. This would typically
       just be a list containing the minion’s service name, but you may have other services  that
       need to go with it.

          update_restart_services: ['salt-minion']

   winrepo_cache_expire_min
       New in version 2016.11.0.

       Default: 0

       If  set  to  a  nonzero integer, then passing refresh=True to functions in the windows pkg
       module will not refresh the windows repo metadata if the age of the metadata is less  than
       this  value.  The  exception  to  this  is  pkg.refresh_db,  which will always refresh the
       metadata, regardless of age.

          winrepo_cache_expire_min: 1800

   winrepo_cache_expire_max
       New in version 2016.11.0.

       Default: 21600

       If the windows repo metadata is older than this value, and the metadata  is  needed  by  a
       function in the windows pkg module, the metadata will be refreshed.

          winrepo_cache_expire_max: 86400
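
       Taken together, the two options give a refresh decision along these lines (a sketch
       with hypothetical names, not Salt's actual implementation; pkg.refresh_db bypasses the
       expire_min check entirely):

```python
def should_refresh_metadata(age_seconds, refresh_requested,
                            expire_min=0, expire_max=21600):
    # Metadata past expire_max is refreshed whenever it is needed.
    if age_seconds > expire_max:
        return True
    # An explicit refresh=True is honored only once the metadata is at
    # least expire_min seconds old.
    return refresh_requested and age_seconds >= expire_min
```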

   Minion Windows Software Repo Settings
       IMPORTANT:
          To  use  these config options, the minion can be running in master-minion or masterless
          mode.

   winrepo_source_dir
       Default: salt://win/repo-ng/

       The source location for the winrepo sls files.

          winrepo_source_dir: salt://win/repo-ng/

   Standalone Minion Windows Software Repo Settings
       IMPORTANT:
          To use these config options, the  minion  must  be  running  in  masterless  mode  (set
          file_client to local).

   winrepo_dir
       Changed  in  version 2015.8.0: Renamed from win_repo to winrepo_dir. Also, this option did
       not have a default value until this version.

       Default: C:\salt\srv\salt\win\repo

       Location on the minion where the winrepo_remotes are checked out.

          winrepo_dir: 'D:\winrepo'

   winrepo_dir_ng
       New in version 2015.8.0: A new ng repo was added.

       Default: /srv/salt/win/repo-ng

       Location on the minion where the winrepo_remotes_ng are checked out for 2015.8.0 and later
       minions.

          winrepo_dir_ng: /srv/salt/win/repo-ng

   winrepo_cachefile
       Changed  in  version 2015.8.0: Renamed from win_repo_cachefile to winrepo_cachefile. Also,
       this option did not have a default value until this version.

       Default: winrepo.p

       Path relative to winrepo_dir where the winrepo cache should be created.

          winrepo_cachefile: winrepo.p

   winrepo_remotes
       Changed in version 2015.8.0: Renamed from  win_gitrepos  to  winrepo_remotes.  Also,  this
       option did not have a default value until this version.

       Default: ['https://github.com/saltstack/salt-winrepo.git']

       List of git repositories to check out and include in the winrepo.

          winrepo_remotes:
            - https://github.com/saltstack/salt-winrepo.git

       To  specify  a  specific revision of the repository, prepend a commit ID to the URL of the
       repository:

          winrepo_remotes:
            - '<commit_id> https://github.com/saltstack/salt-winrepo.git'

       Replace <commit_id> with the SHA1 hash of the desired commit. Pinning to a specific
       commit is useful because it allows one to revert to a previous version in the event that
       an error is introduced in the latest revision of the repo.

   winrepo_remotes_ng
       New in version 2015.8.0: A new ng repo was added.

       Default: ['https://github.com/saltstack/salt-winrepo-ng.git']

       List of git repositories to checkout and include in the winrepo  for  2015.8.0  and  later
       minions.

          winrepo_remotes_ng:
            - https://github.com/saltstack/salt-winrepo-ng.git

       To  specify  a  specific revision of the repository, prepend a commit ID to the URL of the
       repository:

          winrepo_remotes_ng:
            - '<commit_id> https://github.com/saltstack/salt-winrepo-ng.git'

       Replace <commit_id> with the SHA1 hash of the desired commit. Pinning to a specific
       commit is useful because it allows one to revert to a previous version in the event that
       an error is introduced in the latest revision of the repo.

   ssh_merge_pillar
       New in version 2018.3.2.

       Default: True

       Merges the compiled pillar data with the pillar data already available globally.  This  is
       useful  when using salt-ssh or salt-call --local and overriding the pillar data in a state
       file:

          apply_showpillar:
            module.run:
              - name: state.apply
              - mods:
                - showpillar
              - kwargs:
                    pillar:
                        test: "foo bar"

       If set to True the showpillar state will have access to the global pillar data.

       If set to False only the overriding pillar data will be available to the showpillar state.
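
       The behavior can be sketched as a simple merge (illustrative only; Salt's pillar
       compilation is considerably more involved):

```python
def compile_pillar(global_pillar, override_pillar, ssh_merge_pillar=True):
    # With ssh_merge_pillar enabled the state sees the global pillar with
    # the overrides applied on top; otherwise only the overrides.
    if not ssh_merge_pillar:
        return dict(override_pillar)
    merged = dict(global_pillar)
    merged.update(override_pillar)
    return merged
```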

   Configuring the Salt Proxy Minion
       The Salt system is amazingly simple and easy to configure. The two components of the  Salt
       system  each  have  a respective configuration file. The salt-master is configured via the
       master configuration file, and the salt-proxy is configured via  the  proxy  configuration
       file.

       SEE ALSO:
          example proxy minion configuration file

       The Salt Proxy Minion configuration is very simple. Typically, the only value that needs
       to be set is the master value so the proxy knows where to locate its master.

       By default, the salt-proxy configuration will be in /etc/salt/proxy.  A notable  exception
       is FreeBSD, where the configuration will be in /usr/local/etc/salt/proxy.

   Proxy-specific Configuration Options
   add_proxymodule_to_opts
       New in version 2015.8.2.

       Changed in version 2016.3.0.

       Default: False

       Add the proxymodule LazyLoader object to opts.

          add_proxymodule_to_opts: True

   proxy_merge_grains_in_module
       New in version 2016.3.0.

       Changed in version 2017.7.0.

       Default: True

       If  a proxymodule has a function called grains, then call it during regular grains loading
       and merge the results with the proxy’s grains dictionary.  Otherwise it  is  assumed  that
       the module calls the grains function in a custom way and returns the data elsewhere.

          proxy_merge_grains_in_module: False

   proxy_keep_alive
       New in version 2017.7.0.

       Default: True

       Whether  the  connection  with  the remote device should be restarted when dead. The proxy
       module must implement the alive function, otherwise the connection is considered alive.

          proxy_keep_alive: False

   proxy_keep_alive_interval
       New in version 2017.7.0.

       Default: 1

       The frequency of keepalive checks, in minutes. It requires the proxy_keep_alive option  to
       be enabled (and the proxy module to implement the alive function).

          proxy_keep_alive_interval: 5
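
       A sketch of one keepalive cycle (the proxy object and its init call are hypothetical
       stand-ins for what a real proxy module provides):

```python
def keepalive_check(proxy, opts):
    # Runs every proxy_keep_alive_interval minutes when enabled; returns
    # True if the connection had to be re-established.
    if not opts.get("proxy_keep_alive", True):
        return False
    if not hasattr(proxy, "alive"):
        return False  # no alive function: connection considered alive
    if not proxy.alive():
        proxy.init(opts)  # hypothetical reconnect entry point
        return True
    return False
```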

   proxy_always_alive
       New in version 2017.7.0.

       Default: True

       Whether  the  proxy  should  maintain  the connection with the remote device. Similarly to
       proxy_keep_alive, this option is very specific to the design of the  proxy  module.   When
       proxy_always_alive  is  set  to  False,  the  connection  with  the  remote  device is not
       maintained and has to be closed after every command.

          proxy_always_alive: False

   proxy_merge_pillar_in_opts
       New in version 2017.7.3.

       Default: False.

       Whether the pillar data should be merged into the proxy configuration options. As
       multiple proxies can run on the same server, each may need different configuration options
       while sharing a single configuration file. The solution is to merge the pillar data of
       each proxy minion into its opts.

          proxy_merge_pillar_in_opts: True

   proxy_deep_merge_pillar_in_opts
       New in version 2017.7.3.

       Default: False.

       Deep  merge  of  pillar  data into configuration opts.  This option is evaluated only when
       proxy_merge_pillar_in_opts is enabled.

   proxy_merge_pillar_in_opts_strategy
       New in version 2017.7.3.

       Default: smart.

       The strategy used when merging pillar configuration into opts.  This option  is  evaluated
       only when proxy_merge_pillar_in_opts is enabled.

   proxy_mines_pillar
       New in version 2017.7.3.

       Default: True.

       Allow enabling mine details using pillar data. This evaluates the mine configuration
       under the pillar for the following regular minion options, which are equally available on
       the proxy minion: mine_interval and mine_functions.

   Configuration file examples
       · Example master configuration file

       · Example minion configuration file

       · Example proxy minion configuration file

   Example master configuration file
          ##### Primary configuration settings #####
          ##########################################
          # This configuration file is used to manage the behavior of the Salt Master.
          # Values that are commented out but have an empty line after the comment are
          # defaults that do not need to be set in the config. If there is no blank line
          # after the comment then the value is presented as an example and is not the
          # default.

          # By default, the master will automatically include all config files
          # from master.d/*.conf (master.d is a directory in the same directory
          # as the main master config file).
          #default_include: master.d/*.conf

          # The address of the interface to bind to:
          #interface: 0.0.0.0

          # Whether the master should listen for IPv6 connections. If this is set to True,
          # the interface option must be adjusted, too. (For example: "interface: '::'")
          #ipv6: False

          # The tcp port used by the publisher:
          #publish_port: 4505

          # The user under which the salt master will run. Salt will update all
          # permissions to allow the specified user to run the master. The exception is
          # the job cache, which must be deleted if this user is changed. If the
          # modified files cause conflicts, set verify_env to False.
          #user: root

          # The port used by the communication interface. The ret (return) port is the
          # interface used for the file server, authentication, job returns, etc.
          #ret_port: 4506

          # Specify the location of the daemon process ID file:
          #pidfile: /var/run/salt-master.pid

          # The root directory prepended to these options: pki_dir, cachedir,
          # sock_dir, log_file, autosign_file, autoreject_file, extension_modules,
          # key_logfile, pidfile, autosign_grains_dir:
          #root_dir: /

          # The path to the master's configuration file.
          #conf_file: /etc/salt/master

          # Directory used to store public key data:
          #pki_dir: /etc/salt/pki/master

          # Key cache. Increases master speed for large numbers of accepted
          # keys. Available options: 'sched'. (Updates on a fixed schedule.)
          # Note that enabling this feature means that minions will not be
          # available to target for up to the length of the maintenance loop,
          # which by default is 60s.
          #key_cache: ''

          # Directory to store job and cache data:
          # This directory may contain sensitive data and should be protected accordingly.
          #
          #cachedir: /var/cache/salt/master

          # Directory for custom modules. This directory can contain subdirectories for
          # each of Salt's module types such as "runners", "output", "wheel", "modules",
          # "states", "returners", "engines", "utils", etc.
          #extension_modules: /var/cache/salt/master/extmods

          # Directory for custom modules. This directory can contain subdirectories for
          # each of Salt's module types such as "runners", "output", "wheel", "modules",
          # "states", "returners", "engines", "utils", etc.
          # Like 'extension_modules' but can take an array of paths
          #module_dirs: []

          # Verify and set permissions on configuration directories at startup:
          #verify_env: True

          # Set the number of hours to keep old job information in the job cache:
          #keep_jobs: 24

          # The number of seconds to wait when the client is requesting information
          # about running jobs.
          #gather_job_timeout: 10

          # Set the default timeout for the salt command and api. The default is 5
          # seconds.
          #timeout: 5

          # The loop_interval option controls the seconds for the master's maintenance
          # process check cycle. This process updates file server backends, cleans the
          # job cache and executes the scheduler.
          #loop_interval: 60

          # Set the default outputter used by the salt command. The default is "nested".
          #output: nested

          # To set a list of additional directories to search for salt outputters, set the
          # outputter_dirs option.
          #outputter_dirs: []

          # Set the default output file used by the salt command. Default is to output
          # to the CLI and not to a file. Functions the same way as the "--out-file"
          # CLI option, only sets this to a single file for all salt commands.
          #output_file: None

          # Return minions that timeout when running commands like test.ping
          #show_timeout: True

          # Tell the client to display the jid when a job is published.
          #show_jid: False

          # By default, output is colored. To disable colored output, set the color value
          # to False.
          #color: True

          # Do not strip off the colored output from nested results and state outputs
          # (true by default).
          # strip_colors: False

          # To display a summary of the number of minions targeted, the number of
          # minions returned, and the number of minions that did not return, set the
          # cli_summary value to True. (False by default.)
          #
          #cli_summary: False

          # Set the directory used to hold unix sockets:
          #sock_dir: /var/run/salt/master

          # The master can take a while to start up when lspci and/or dmidecode is used
          # to populate the grains for the master. Enable if you want to see GPU hardware
          # data for your master.
          # enable_gpu_grains: False

          # The master maintains a job cache. While this is a great addition, it can be
          # a burden on the master for larger deployments (over 5000 minions).
          # Disabling the job cache will make previously executed jobs unavailable to
          # the jobs system and is not generally recommended.
          #job_cache: True

          # Cache minion grains, pillar and mine data via the cache subsystem in the
          # cachedir or a database.
          #minion_data_cache: True

          # Cache subsystem module to use for minion data cache.
          #cache: localfs
          # Enables a fast in-memory cache booster and sets the expiration time.
          #memcache_expire_seconds: 0
          # Set a memcache limit in items (bank + key) per cache storage (driver + driver_opts).
          #memcache_max_items: 1024
          # Each time a cache storage gets full, clean up all the expired items, not just the oldest one.
          #memcache_full_cleanup: False
          # Enable collecting memcache stats and log them at `debug` log level.
          #memcache_debug: False

          # Store all returns in the given returner.
          # Setting this option requires that any returner-specific configuration also
          # be set. See various returners in salt/returners for details on required
          # configuration values. (See also, event_return_queue below.)
          #
          #event_return: mysql

          # On busy systems, enabling event_returns can cause a considerable load on
          # the storage system for returners. Events can be queued on the master and
          # stored in a batched fashion using a single transaction for multiple events.
          # By default, events are not queued.
          #event_return_queue: 0

          # Only return events matching tags in a whitelist, supports glob matches.
          #event_return_whitelist:
          #  - salt/master/a_tag
          #  - salt/run/*/ret

          # Store all event returns **except** the tags in a blacklist, supports globs.
          #event_return_blacklist:
          #  - salt/master/not_this_tag
          #  - salt/wheel/*/ret

          # Passing very large events can cause the minion to consume large amounts of
          # memory. This value tunes the maximum size of a message allowed onto the
          # master event bus. The value is expressed in bytes.
          #max_event_size: 1048576

          # By default, the master AES key rotates every 24 hours. The next command
          # following a key rotation will trigger a key refresh from the minion which may
          # result in minions which do not respond to the first command after a key refresh.
          #
          # To tell the master to ping all minions immediately after an AES key refresh, set
          # ping_on_rotate to True. This should mitigate the issue where a minion does not
          # appear to initially respond after a key is rotated.
          #
          # Note that ping_on_rotate may cause high load on the master immediately after
          # the key rotation event as minions reconnect. Consider this carefully if this
          # salt master is managing a large number of minions.
          #
          # If disabled, it is recommended to handle this event by listening for the
          # 'aes_key_rotate' event with the 'key' tag and acting appropriately.
          # ping_on_rotate: False

          # By default, the master deletes its cache of minion data when the key for that
          # minion is removed. To preserve the cache after key deletion, set
          # 'preserve_minion_cache' to True.
          #
          # WARNING: This may have security implications if compromised minions auth with
          # a previously deleted minion ID.
          #preserve_minion_cache: False

          # Allow or deny minions the ability to request their own key revocation
          #allow_minion_key_revoke: True

          # If max_minions is used in large installations, the master might experience
          # high-load situations because of having to check the number of connected
          # minions for every authentication. This cache provides the minion-ids of
          # all connected minions to all MWorker-processes and greatly improves the
          # performance of max_minions.
          # con_cache: False

          # The master can include configuration from other files. To enable this,
          # pass a list of paths to this option. The paths can be either relative or
          # absolute; if relative, they are considered to be relative to the directory
          # the main master configuration file lives in (this file). Paths can make use
          # of shell-style globbing. If no files are matched by a path passed to this
          # option, then the master will log a warning message.
          #
          # Include a config file from some other path:
          # include: /etc/salt/extra_config
          #
          # Include config from several files and directories:
          # include:
          #   - /etc/salt/extra_config

          #####  Large-scale tuning settings   #####
          ##########################################
          # Max open files
          #
          # Each minion connecting to the master uses AT LEAST one file descriptor, the
          # master subscription connection. If enough minions connect you might start
          # seeing on the console (and then salt-master crashes):
          #   Too many open files (tcp_listener.cpp:335)
          #   Aborted (core dumped)
          #
          # By default this value will be that of `ulimit -Hn`, i.e., the hard limit for
          # max open files.
          #
          # If you wish to set a different value than the default one, uncomment and
          # configure this setting. Remember that this value CANNOT be higher than the
          # hard limit. Raising the hard limit depends on your OS and/or distribution,
          # a good way to find the limit is to search the internet. For example:
          #   raise max open files hard limit debian
          #
          #max_open_files: 100000

          # The number of worker threads to start. These threads are used to manage
          # return calls made from minions to the master. If the master seems to be
          # running slowly, increase the number of threads. This setting cannot be
          # set lower than 3.
          #worker_threads: 5

          # Set the ZeroMQ high water marks
          # http://api.zeromq.org/3-2:zmq-setsockopt

          # The listen queue size / backlog
          #zmq_backlog: 1000

          # The publisher interface ZeroMQPubServerChannel
          #pub_hwm: 1000

          # The master may allocate memory per-event and not
          # reclaim it.
          # To set a high-water mark for memory allocation, use
          # ipc_write_buffer to set a high-water mark for message
          # buffering.
          # Value: In bytes. Set to 'dynamic' to have Salt select
          # a value for you. Default is disabled.
          # ipc_write_buffer: 'dynamic'

          # These two batch settings, batch_safe_limit and batch_safe_size, are used to
          # automatically switch to a batch mode execution. If a command would have been
          # sent to more than <batch_safe_limit> minions, then run the command in
          # batches of <batch_safe_size>. If no batch_safe_size is specified, a default
          # of 8 will be used. If no batch_safe_limit is specified, then no automatic
          # batching will occur.
          #batch_safe_limit: 100
          #batch_safe_size: 8

          # Master stats enables stats events to be fired from the master at close
          # to the defined interval
          #master_stats: False
          #master_stats_event_iter: 60

          #####        Security settings       #####
          ##########################################
          # Enable passphrase protection of the Master private key. Although a string value
          # is acceptable, passwords should be stored in an external vaulting mechanism
          # and retrieved via sdb. See https://docs.saltstack.com/en/latest/topics/sdb/.
          # Passphrase protection is off by default, but an example of an sdb profile and
          # query is as follows.
          # masterkeyring:
          #  driver: keyring
          #  service: system
          #
          # key_pass: sdb://masterkeyring/key_pass

          # Enable passphrase protection of the Master signing_key. This only applies if
          # master_sign_pubkey is set to True.  This is disabled by default.
          # master_sign_pubkey: True
          # signing_key_pass: sdb://masterkeyring/signing_pass

          # Enable "open mode". This mode still maintains encryption but turns off
          # authentication. It is only intended for highly secure environments or for
          # situations where your keys end up in a bad state. If you run in open mode
          # you do so at your own risk!
          #open_mode: False

          # Enable auto_accept. This setting will automatically accept all incoming
          # public keys from the minions. Note that this is insecure.
          #auto_accept: False

          # The size of key that should be generated when creating new keys.
          #keysize: 2048

          # Time in minutes that an incoming public key with a matching name found in
          # pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys
          # are removed when the master checks the minion_autosign directory.
          # 0 equals no timeout
          # autosign_timeout: 120

          # If the autosign_file is specified, incoming keys specified in the
          # autosign_file will be automatically accepted. This is insecure.  Regular
          # expressions as well as globbing lines are supported. The file must be read-only
          # except for the owner. Use permissive_pki_access to allow group write access.
          #autosign_file: /etc/salt/autosign.conf

          # Works like autosign_file, but instead allows you to specify minion IDs for
          # which keys will automatically be rejected. Will override both membership in
          # the autosign_file and the auto_accept setting.
          #autoreject_file: /etc/salt/autoreject.conf

          # If the autosign_grains_dir is specified, incoming keys from minions with grain
          # values matching those defined in files in this directory will be accepted
          # automatically. This is insecure. Minions need to be configured to send the grains.
          #autosign_grains_dir: /etc/salt/autosign_grains

          # Enable permissive access to the salt keys. This allows you to run the
          # master or minion as root, but have a non-root group be given access to
          # your pki_dir. To make the access explicit, root must belong to the group
          # you've given access to. This is potentially quite insecure. If an autosign_file
          # is specified, enabling permissive_pki_access will allow group access to that
          # specific file.
          #permissive_pki_access: False

          # Allow users on the master access to execute specific commands on minions.
          # This setting should be treated with care since it opens up execution
          # capabilities to non-root users. By default this capability is completely
          # disabled.
          #publisher_acl:
          #  larry:
          #    - test.ping
          #    - network.*
          #
          # Check the list of configured users in client ACL against users on the
          # system and throw errors if they do not exist.
          #client_acl_verify: True

          # Blacklist any of the following users or modules.
          #
          # This example would blacklist all non-sudo users, including root,
          # from running any commands. It would also blacklist any use of the
          # "cmd" module. This is completely disabled by default.
          #
          #publisher_acl_blacklist:
          #  users:
          #    - root
          #    - '^(?!sudo_).*$'   #  all non-sudo users
          #  modules:
          #    - cmd

          # Enforce publisher_acl & publisher_acl_blacklist when users have sudo
          # access to the salt command.
          #
          #sudo_acl: False

          # The external auth system uses the Salt auth modules to authenticate and
          # validate users to access areas of the Salt system.
          #external_auth:
          #  pam:
          #    fred:
          #      - test.*
          #
          # Time (in seconds) for a newly generated token to live. Default: 12 hours
          #token_expire: 43200
          #
          # Allow eauth users to specify the expiry time of the tokens they generate.
          # Either a boolean (applying to all users) or a dictionary of whitelisted
          # eauth backends and usernames may be given.
          # token_expire_user_override:
          #   pam:
          #     - fred
          #     - tom
          #   ldap:
          #     - gary
          #
          #token_expire_user_override: False

          # Set to True to enable keeping the calculated user's auth list in the token
          # file. This is disabled by default and the auth list is calculated or requested
          # from the eauth driver each time.
          #keep_acl_in_token: False

          # Auth subsystem module to use to get authorized access list for a user. By default it's
          # the same module used for external authentication.
          #eauth_acl_module: django

          # Allow minions to push files to the master. This is disabled by default, for
          # security purposes.
          #file_recv: False

          # Set a hard-limit on the size of the files that can be pushed to the master.
          # It will be interpreted as megabytes. Default: 100
          #file_recv_max_size: 100

          # Signature verification on messages published from the master.
          # This causes the master to cryptographically sign all messages published to its event
          # bus, and minions then verify that signature before acting on the message.
          #
          # This is False by default.
          #
          # Note that to facilitate interoperability with masters and minions that are
          # different versions, if sign_pub_messages is True but a message is received
          # by a minion with no signature, it will still be accepted, and a warning
          # message will be logged. Conversely, if sign_pub_messages is False but a
          # minion receives a signed message, it will be accepted, the signature will
          # not be checked, and a warning message will be logged. This lenient
          # behavior was removed in Salt 2014.1.0; since then, both situations will
          # cause the minion to throw an exception and drop the message.
          # sign_pub_messages: False

          # Signature verification on messages published from minions
          # This requires that minions cryptographically sign the messages they
          # publish to the master.  If minions are not signing, then log this information
          # at loglevel 'INFO' and drop the message without acting on it.
          # require_minion_sign_messages: False

          # The below will drop messages when their signatures do not validate.
          # Note that when this option is False but `require_minion_sign_messages` is True
          # minions MUST sign their messages but the validity of their signatures
          # is ignored.
          # These two config options exist so a Salt infrastructure can be moved
          # to signing minion messages gradually.
          # drop_messages_signature_fail: False

          # Use TLS/SSL encrypted connection between master and minion.
          # Can be set to a dictionary containing keyword arguments corresponding to Python's
          # 'ssl.wrap_socket' method.
          # Default is None.
          #ssl:
          #    keyfile: <path_to_keyfile>
          #    certfile: <path_to_certfile>
          #    ssl_version: PROTOCOL_TLSv1_2

          #####     Salt-SSH Configuration     #####
          ##########################################
          # Define the default salt-ssh roster module to use
          #roster: flat

          # Pass in an alternative location for the salt-ssh `flat` roster file
          #roster_file: /etc/salt/roster
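          #
          # A flat roster file maps target IDs to connection details, for example
          # (hypothetical hosts, for illustration only):
          #
          #   web1:
          #     host: 192.168.42.1
          #     user: root
          #   db1:
          #     host: 192.168.42.2
          #     port: 2222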

          # Define locations for `flat` roster files so they can be chosen when using Salt API.
          # An administrator can place roster files into these locations. Then when
          # calling Salt API, parameter 'roster_file' should contain a relative path to
          # these locations. That is, "roster_file=/foo/roster" will be resolved as
          # "/etc/salt/roster.d/foo/roster" etc. This feature prevents passing insecure
          # custom rosters through the Salt API.
          #
          #rosters:
          # - /etc/salt/roster.d
          # - /opt/salt/some/more/rosters

          # The ssh password to log in with.
          #ssh_passwd: ''

          # The target system's ssh port number.
          #ssh_port: 22

          # Comma-separated list of ports to scan.
          #ssh_scan_ports: 22

          # Scanning socket timeout for salt-ssh.
          #ssh_scan_timeout: 0.01

          # Boolean to run command via sudo.
          #ssh_sudo: False

          # Number of seconds to wait for a response when establishing an SSH connection.
          #ssh_timeout: 60

          # The user to log in as.
          #ssh_user: root

          # The log file of the salt-ssh command:
          #ssh_log_file: /var/log/salt/ssh

          # Pass in minion option overrides that will be inserted into the SHIM for
          # salt-ssh calls. The local minion config is not used for salt-ssh. Can be
          # overridden on a per-minion basis in the roster (`minion_opts`)
          #ssh_minion_opts:
          #  gpg_keydir: /root/gpg

          # Set this to True to default to using ~/.ssh/id_rsa for salt-ssh
          # authentication with minions
          #ssh_use_home_key: False

          # Set this to True to default salt-ssh to run with ``-o IdentitiesOnly=yes``.
          # This option is intended for situations where the ssh-agent offers many
          # different identities; it causes ssh to ignore those identities and use
          # only the one specified in options.
          #ssh_identities_only: False

          # List-only nodegroups for salt-ssh. Each group must be formed as either a
          # comma-separated list, or a YAML list. This option is useful to group minions
          # into easy-to-target groups when using salt-ssh. These groups can then be
          # targeted with the normal -N argument to salt-ssh.
          #ssh_list_nodegroups: {}
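          #
          # For example, both forms below are accepted (hypothetical minion IDs):
          #
          #   ssh_list_nodegroups:
          #     groupA: minion1,minion2
          #     groupB:
          #       - minion1
          #       - minion3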

          # salt-ssh has the ability to update the flat roster file if a minion is not
          # found in the roster.  Set this to True to enable it.
          #ssh_update_roster: False

          #####    Master Module Management    #####
          ##########################################
          # Manage how master side modules are loaded.

          # Add any additional locations to look for master runners:
          #runner_dirs: []

          # Add any additional locations to look for master utils:
          #utils_dirs: []

          # Enable Cython for master side modules:
          #cython_enable: False

          #####      State System settings     #####
          ##########################################
          # The state system uses a "top" file to tell the minions what environment to
          # use and what modules to use. The state_top file is defined relative to the
          # root of the base environment as defined in "File Server settings" below.
          #state_top: top.sls

          # The master_tops option replaces the external_nodes option by creating
          # a pluggable system for the generation of external top data. The
          # external_nodes option is deprecated by the master_tops option.
          #
          # To gain the capabilities of the classic external_nodes system, use the
          # following configuration:
          # master_tops:
          #   ext_nodes: <Shell command which returns yaml>
          #
          #master_tops: {}

          # The renderer to use on the minions to render the state data
          #renderer: yaml_jinja

          # Default Jinja environment options for all templates except sls templates
          #jinja_env:
          #  block_start_string: '{%'
          #  block_end_string: '%}'
          #  variable_start_string: '{{'
          #  variable_end_string: '}}'
          #  comment_start_string: '{#'
          #  comment_end_string: '#}'
          #  line_statement_prefix:
          #  line_comment_prefix:
          #  trim_blocks: False
          #  lstrip_blocks: False
          #  newline_sequence: '\n'
          #  keep_trailing_newline: False

          # Jinja environment options for sls templates
          #jinja_sls_env:
          #  block_start_string: '{%'
          #  block_end_string: '%}'
          #  variable_start_string: '{{'
          #  variable_end_string: '}}'
          #  comment_start_string: '{#'
          #  comment_end_string: '#}'
          #  line_statement_prefix:
          #  line_comment_prefix:
          #  trim_blocks: False
          #  lstrip_blocks: False
          #  newline_sequence: '\n'
          #  keep_trailing_newline: False

          # The failhard option tells the minions to stop immediately after the first
          # failure detected in the state execution, defaults to False
          #failhard: False

          # The state_verbose and state_output settings can be used to change the way
          # state system data is printed to the display. By default all data is printed.
          # The state_verbose setting can be set to True or False; when set to False,
          # all data that has a result of True and no changes will be suppressed.
          #state_verbose: True

          # The state_output setting controls how each state result is displayed:
          # full, terse - each state will be full/terse
          # mixed - only states with errors will be full
          # changes - states with changes and errors will be full
          # full_id, mixed_id, changes_id and terse_id are also allowed;
          # when set, the state ID will be used as name in the output
          #state_output: full

          # The state_output_diff setting changes whether or not the output from
          # successful states is returned. Useful when even the terse output of these
          # states is cluttering the logs. Set it to True to ignore them.
          #state_output_diff: False

          # Automatically aggregate all states that have support for mod_aggregate by
          # setting to 'True'. Or pass a list of state module names to automatically
          # aggregate just those types.
          #
          # state_aggregate:
          #   - pkg
          #
          #state_aggregate: False

          # Send progress events as each function in a state run completes execution
          # by setting to 'True'. Progress events are in the format
          # 'salt/job/<JID>/prog/<MID>/<RUN NUM>'.
          #state_events: False

          #####      File Server settings      #####
          ##########################################
          # Salt runs a lightweight file server written in zeromq to deliver files to
          # minions. This file server is built into the master daemon and does not
          # require a dedicated port.

          # The file server works on environments passed to the master. Each environment
          # can have multiple root directories, but the subdirectories in the multiple
          # file roots must not overlap, otherwise the files downloaded by minions
          # cannot be reliably resolved. A base environment is required to house the
          # top file.
          # Example:
          # file_roots:
          #   base:
          #     - /srv/salt/
          #   dev:
          #     - /srv/salt/dev/services
          #     - /srv/salt/dev/states
          #   prod:
          #     - /srv/salt/prod/services
          #     - /srv/salt/prod/states
          #
          #file_roots:
          #  base:
          #    - /srv/salt
          #

          # The master_roots setting configures a master-only copy of the file_roots dictionary,
          # used by the state compiler.
          #master_roots: /srv/salt-master

          # When using multiple environments, each with their own top file, the
          # default behaviour is an unordered merge. To prevent top files from
          # being merged together and instead to only use the top file from the
          # requested environment, set this value to 'same'.
          #top_file_merging_strategy: merge

          # To specify the order in which environments are merged, set the ordering
          # in the env_order option. Given a conflict, the last matching value will
          # win.
          #env_order: ['base', 'dev', 'prod']

          # If top_file_merging_strategy is set to 'same' and an environment does not
          # contain a top file, the top file in the environment specified by default_top
          # will be used instead.
          #default_top: base

          # The hash_type is the hash to use when discovering the hash of a file on
          # the master server. The default is sha256, but md5, sha1, sha224, sha384 and
          # sha512 are also supported.
          #
          # WARNING: While md5 and sha1 are also supported, do not use them due to the
          # high chance of possible collisions and thus security breach.
          #
          # Prior to changing this value, the master should be stopped and all Salt
          # caches should be cleared.
          #hash_type: sha256

          # The buffer size in the file server can be adjusted here:
          #file_buffer_size: 1048576

          # A regular expression (or a list of expressions) that will be matched
          # against the file path before syncing the modules and states to the minions.
          # This includes files affected by the file.recurse state.
          # For example, if you manage your custom modules and states in subversion
          # and don't want all the '.svn' folders and content synced to your minions,
          # you could set this to '/\.svn($|/)'. By default nothing is ignored.
          #file_ignore_regex:
          #  - '/\.svn($|/)'
          #  - '/\.git($|/)'

          # A file glob (or list of file globs) that will be matched against the file
          # path before syncing the modules and states to the minions. This is similar
          # to file_ignore_regex above, but works on globs instead of regex. By default
          # nothing is ignored.
          # file_ignore_glob:
          #  - '*.pyc'
          #  - '*/somefolder/*.bak'
          #  - '*.swp'

          # File Server Backend
          #
          # Salt supports a modular fileserver backend system. This system allows
          # the salt master to link directly to third party systems to gather and
          # manage the files available to minions. Multiple backends can be
          # configured and will be searched for the requested file in the order in which
          # they are defined here. The default setting only enables the standard backend
          # "roots" which uses the "file_roots" option.
          #fileserver_backend:
          #  - roots
          #
          # To use multiple backends list them in the order they are searched:
          #fileserver_backend:
          #  - git
          #  - roots
          #
          # Uncomment the line below if you do not want the file_server to follow
          # symlinks when walking the filesystem tree. This is set to True
          # by default. Currently this only applies to the default roots
          # fileserver_backend.
          #fileserver_followsymlinks: False
          #
          # Uncomment the line below if you do not want symlinks to be
          # treated as the files they are pointing to. By default this is set to
          # False. By uncommenting the line below, any detected symlink while listing
          # files on the Master will not be returned to the Minion.
          #fileserver_ignoresymlinks: True
          #
          # By default, the Salt fileserver recurses fully into all defined environments
          # to attempt to find files. To limit this behavior so that the fileserver only
          # traverses directories with SLS files and special Salt directories like _modules,
          # enable the option below. This might be useful for installations where a file root
          # has a very large number of files and performance is impacted. Default is False.
          # fileserver_limit_traversal: False
          #
          # The fileserver can fire events off every time the fileserver is updated,
          # these are disabled by default, but can be easily turned on by setting this
          # flag to True
          #fileserver_events: False

          # Git File Server Backend Configuration
          #
          # Optional parameter used to specify the provider to be used for gitfs. Must be
          # either pygit2 or gitpython. If unset, then both will be tried (in that
          # order), and the first one with a compatible version installed will be the
          # provider that is used.
          #
          #gitfs_provider: pygit2

          # Along with gitfs_password, is used to authenticate to HTTPS remotes.
          # gitfs_user: ''

          # Along with gitfs_user, is used to authenticate to HTTPS remotes.
          # This parameter is not required if the repository does not use authentication.
          #gitfs_password: ''

          # By default, Salt will not authenticate to an HTTP (non-HTTPS) remote.
          # This parameter enables authentication over HTTP. Enable this at your own risk.
          #gitfs_insecure_auth: False

          # Along with gitfs_privkey (and optionally gitfs_passphrase), is used to
          # authenticate to SSH remotes. This parameter (or its per-remote counterpart)
          # is required for SSH remotes.
          #gitfs_pubkey: ''

          # Along with gitfs_pubkey (and optionally gitfs_passphrase), is used to
          # authenticate to SSH remotes. This parameter (or its per-remote counterpart)
          # is required for SSH remotes.
          #gitfs_privkey: ''

          # This parameter is optional, required only when the SSH key being used to
          # authenticate is protected by a passphrase.
          #gitfs_passphrase: ''

          # When using the git fileserver backend at least one git remote needs to be
          # defined. The user running the salt master will need read access to the repo.
          #
          # The repos will be searched in order to find the file requested by a client
          # and the first repo to have the file will return it.
          # When using the git backend branches and tags are translated into salt
          # environments.
          # Note: file:// repos will be treated as a remote, so refs you want used must
          # exist in that repo as *local* refs.
          #gitfs_remotes:
          #  - git://github.com/saltstack/salt-states.git
          #  - file:///var/git/saltmaster
          #
          # The gitfs_ssl_verify option specifies whether to ignore ssl certificate
          # errors when contacting the gitfs backend. You might want to set this to
          # false if you're using a git backend that uses a self-signed certificate but
          # keep in mind that setting this flag to anything other than the default of True
          # is a security concern, you may want to try using the ssh transport.
          #gitfs_ssl_verify: True
          #
          # The gitfs_root option gives the ability to serve files from a subdirectory
          # within the repository. The path is defined relative to the root of the
          # repository and defaults to the repository root.
          #gitfs_root: somefolder/otherfolder
          #
          # The refspecs fetched by gitfs remotes
          #gitfs_refspecs:
          #  - '+refs/heads/*:refs/remotes/origin/*'
          #  - '+refs/tags/*:refs/tags/*'
          #
          #
          #####         Pillar settings        #####
          ##########################################
          # Salt Pillars allow for the building of global data that can be made selectively
          # available to different minions based on minion grain filtering. The Salt
          # Pillar is laid out in the same fashion as the file server, with environments,
          # a top file and sls files. However, pillar data does not need to be in the
          # highstate format, and is generally just key/value pairs.
          #pillar_roots:
          #  base:
          #    - /srv/pillar
          #
          #ext_pillar:
          #  - hiera: /etc/hiera.yaml
          #  - cmd_yaml: cat /etc/salt/yaml

          # A list of paths to be recursively decrypted during pillar compilation.
          # Entries in this list can be formatted either as a simple string, or as a
          # key/value pair, with the key being the pillar location, and the value being
          # the renderer to use for pillar decryption. If the former is used, the
          # renderer specified by decrypt_pillar_default will be used.
          #decrypt_pillar:
          #  - 'foo:bar': gpg
          #  - 'lorem:ipsum:dolor'

          # The delimiter used to distinguish nested data structures in the
          # decrypt_pillar option.
          #decrypt_pillar_delimiter: ':'

          # The default renderer used for decryption, if one is not specified for a given
          # pillar key in decrypt_pillar.
          #decrypt_pillar_default: gpg

          # List of renderers which are permitted to be used for pillar decryption.
          #decrypt_pillar_renderers:
          #  - gpg

          # The ext_pillar_first option allows for external pillar sources to populate
          # before file system pillar. This allows for targeting file system pillar from
          # ext_pillar.
          #ext_pillar_first: False

          # The external pillars permitted to be used on-demand using pillar.ext
          #on_demand_ext_pillar:
          #  - libvirt
          #  - virtkey

          # The pillar_gitfs_ssl_verify option specifies whether to ignore ssl certificate
          # errors when contacting the pillar gitfs backend. You might want to set this to
          # false if you're using a git backend that uses a self-signed certificate but
          # keep in mind that setting this flag to anything other than the default of True
          # is a security concern, you may want to try using the ssh transport.
          #pillar_gitfs_ssl_verify: True

          # The pillar_opts option adds the master configuration file data to a dict in
          # the pillar called "master". This is used to set simple configurations in the
          # master config file that can then be used on minions.
          #pillar_opts: False

          # The pillar_safe_render_error option prevents the master from passing pillar
          # render errors to the minion. This is enabled by default because the error
          # could contain templating data which would give that minion information it
          # shouldn't have, like a password! When set to True, the error message will
          # only show:
          #   Rendering SLS 'my.sls' failed. Please see master log for details.
          #pillar_safe_render_error: True

          # The pillar_source_merging_strategy option allows you to configure the
          # merging strategy between different sources. It accepts five values: none,
          # recurse, aggregate, overwrite, or smart. None will not do any merging at
          # all. Recurse will recursively merge mappings of data. Aggregate instructs
          # aggregation of elements between sources that use the #!yamlex renderer.
          # Overwrite will overwrite elements according to the order in which they are
          # processed. This is the behavior of the 2014.1 branch and earlier. Smart
          # guesses the best strategy based on the "renderer" setting and is the
          # default value.
          #pillar_source_merging_strategy: smart

          # Recursively merge lists by aggregating them instead of replacing them.
          #pillar_merge_lists: False

          # Set this option to True to force the pillarenv to be the same as the effective
          # saltenv when running states. If pillarenv is specified this option will be
          # ignored.
          #pillarenv_from_saltenv: False

          # Set this option to 'True' to force a 'KeyError' to be raised whenever an
          # attempt to retrieve a named value from pillar fails. When this option is set
          # to 'False', the failed attempt returns an empty string. Default is 'False'.
          #pillar_raise_on_missing: False

          # Git External Pillar (git_pillar) Configuration Options
          #
          # Specify the provider to be used for git_pillar. Must be either pygit2 or
          # gitpython. If unset, then both will be tried in that same order, and the
          # first one with a compatible version installed will be the provider that
          # is used.
          #git_pillar_provider: pygit2

          # If the desired branch matches this value, and the environment is omitted
          # from the git_pillar configuration, then the environment for that git_pillar
          # remote will be base.
          #git_pillar_base: master

          # If the branch is omitted from a git_pillar remote, then this branch will
          # be used instead
          #git_pillar_branch: master
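          #
          # The git_pillar remotes themselves are defined under ext_pillar using the
          # "git" key, for example (hypothetical repository URL):
          #
          #   ext_pillar:
          #     - git:
          #       - master https://gitserver/git-pillar.git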

          # Environment to use for git_pillar remotes. This is normally derived from
          # the branch/tag (or from a per-remote env parameter), but if set this will
          # override the process of deriving the env from the branch/tag name.
          #git_pillar_env: ''

          # Path relative to the root of the repository where the git_pillar top file
          # and SLS files are located.
          #git_pillar_root: ''

          # Specifies whether or not to ignore SSL certificate errors when contacting
          # the remote repository.
          #git_pillar_ssl_verify: False

          # When set to False, if there is an update/checkout lock for a git_pillar
          # remote and the pid written to it is not running on the master, the lock
          # file will be automatically cleared and a new lock will be obtained.
          #git_pillar_global_lock: True

          # Git External Pillar Authentication Options
          #
          # Along with git_pillar_password, is used to authenticate to HTTPS remotes.
          #git_pillar_user: ''

          # Along with git_pillar_user, is used to authenticate to HTTPS remotes.
          # This parameter is not required if the repository does not use authentication.
          #git_pillar_password: ''

          # By default, Salt will not authenticate to an HTTP (non-HTTPS) remote.
          # This parameter enables authentication over HTTP.
          #git_pillar_insecure_auth: False

          # Along with git_pillar_privkey (and optionally git_pillar_passphrase),
          # is used to authenticate to SSH remotes.
          #git_pillar_pubkey: ''

          # Along with git_pillar_pubkey (and optionally git_pillar_passphrase),
          # is used to authenticate to SSH remotes.
          #git_pillar_privkey: ''

          # This parameter is optional, required only when the SSH key being used
          # to authenticate is protected by a passphrase.
          #git_pillar_passphrase: ''

          # The refspecs fetched by git_pillar remotes
          #git_pillar_refspecs:
          #  - '+refs/heads/*:refs/remotes/origin/*'
          #  - '+refs/tags/*:refs/tags/*'

          # A master can cache pillars locally to bypass the expense of having to render them
          # for each minion on every request. This feature should only be enabled in cases
          # where pillar rendering time is known to be unsatisfactory and any attendant security
          # concerns about storing pillars in a master cache have been addressed.
          #
          # When enabling this feature, be certain to read through the additional ``pillar_cache_*``
          # configuration options to fully understand the tunable parameters and their implications.
          #
          # Note: setting ``pillar_cache: True`` has no effect on targeting Minions with Pillars.
          # See https://docs.saltstack.com/en/latest/topics/targeting/pillar.html
          #pillar_cache: False

          # If and only if a master has set ``pillar_cache: True``, the cache TTL controls the amount
          # of time, in seconds, before the cache is considered invalid by a master and a fresh
          # pillar is recompiled and stored.
          #pillar_cache_ttl: 3600

          # If and only if a master has set `pillar_cache: True`, one of several storage providers
          # can be utilized.
          #
          # `disk`: The default storage backend. This caches rendered pillars to the master cache.
          #         Rendered pillars are serialized and deserialized as msgpack structures for speed.
          #         Note that pillars are stored UNENCRYPTED. Ensure that the master cache
          #         has permissions set appropriately. (Sane defaults are provided.)
          #
          # memory: [EXPERIMENTAL] An optional backend for pillar caches which uses a pure-Python
          #         in-memory data structure for maximal performance. There are several caveats,
          #         however. First, because each master worker contains its own in-memory cache,
          #         there is no guarantee of cache consistency between minion requests. This
          #         works best in situations where the pillar rarely if ever changes. Secondly,
          #         and perhaps more importantly, this means that unencrypted pillars will
          #         be accessible to any process which can examine the memory of the ``salt-master``!
          #         This may represent a substantial security risk.
          #
          #pillar_cache_backend: disk

          ######        Reactor Settings        #####
          ###########################################
          # Define a salt reactor. See https://docs.saltstack.com/en/latest/topics/reactor/
          #reactor: []

          # Set the TTL for the cache of the reactor configuration.
          #reactor_refresh_interval: 60

          #Configure the number of workers for the runner/wheel in the reactor.
          #reactor_worker_threads: 10

          #Define the queue size for workers in the reactor.
          #reactor_worker_hwm: 10000
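
          # A minimal reactor example (hypothetical SLS path), mapping an event tag
          # to a reactor SLS file that is rendered when a matching event arrives:
          #reactor:
          #  - 'salt/minion/*/start':
          #    - /srv/reactor/start.sls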

          #####          Syndic settings       #####
          ##########################################
          # The Salt syndic is used to pass commands through a master from a higher
          # master. Using the syndic is simple. If this is a master that will have
          # syndic server(s) below it, then set the "order_masters" setting to True.
          #
          # If this is a master that will be running a syndic daemon for passthrough, then
          # the "syndic_master" setting needs to be set to the location of the master server
          # to receive commands from.

          # Set the order_masters setting to True if this master will command lower
          # masters' syndic interfaces.
          #order_masters: False

          # If this master will be running a salt syndic daemon, syndic_master tells
          # this master where to receive commands from.
          #syndic_master: masterofmasters

          # This is the 'ret_port' of the MasterOfMaster:
          #syndic_master_port: 4506

          # PID file of the syndic daemon:
          #syndic_pidfile: /var/run/salt-syndic.pid

          # The log file of the salt-syndic daemon:
          #syndic_log_file: /var/log/salt/syndic

          # The behaviour of the multi-syndic when the connection to a master of
          # masters fails. Can be ``random`` (default) or ``ordered``. If set to
          # ``random``, masters will be iterated in random order. If ``ordered`` is
          # specified, the configured order will be used.
          #syndic_failover: random

          # The number of seconds for the salt client to wait for additional syndics to
          # check in with their lists of expected minions before giving up.
          #syndic_wait: 5
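
          # As a sketch (hypothetical hostname), a two-tier topology would set, on
          # the top-level master:
          #order_masters: True
          #
          # and on each mid-tier master running the salt-syndic daemon:
          #syndic_master: master-of-masters.example.com
          #syndic_master_port: 4506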

          #####      Peer Publish settings     #####
          ##########################################
          # Salt minions can send commands to other minions, but only if the minion is
          # allowed to. By default "Peer Publication" is disabled, and when enabled it
          # is enabled for specific minions and specific commands. This allows secure
          # compartmentalization of commands based on individual minions.

          # The configuration uses regular expressions to match minions and then a list
          # of regular expressions to match functions. The following will allow the
          # minion authenticated as foo.example.com to execute functions from the test
          # and pkg modules.
          #peer:
          #  foo.example.com:
          #    - test.*
          #    - pkg.*
          #
          # This will allow all minions to execute all commands:
          #peer:
          #  .*:
          #    - .*
          #
          # This is not recommended, since it would allow anyone who gets root on any
          # single minion to instantly have root on all of the minions!

          # Minions can also be allowed to execute runners from the salt master.
          # Since executing a runner from the minion could be considered a security risk,
          # it needs to be enabled. This setting functions just like the peer setting
          # except that it opens up runners instead of module functions.
          #
          # All peer runner support is turned off by default and must be enabled before
          # using. This will enable all peer runners for all minions:
          #peer_run:
          #  .*:
          #    - .*
          #
          # To enable just the manage.up runner for the minion foo.example.com:
          #peer_run:
          #  foo.example.com:
          #    - manage.up
          #
          #
          #####         Mine settings     #####
          #####################################
          # Restrict mine.get access from minions. By default any minion has full
          # access to all mine data in the master cache. In the ACL definition below,
          # only PCRE matches are allowed.
          # mine_get:
          #   .*:
          #     - .*
          #
          # The example below allows the minion foo.example.com to get only the
          # 'network.interfaces' mine data, minions matching web.* to get all
          # network.* and disk.* mine data, and all other minions to get no mine data.
          # mine_get:
          #   foo.example.com:
          #     - network.interfaces
          #   web.*:
          #     - network.*
          #     - disk.*

          #####         Logging settings       #####
          ##########################################
          # The location of the master log file
          # The master log can be sent to a regular file, local path name, or network
          # location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
          # ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
          # format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
          #log_file: /var/log/salt/master
          #log_file: file:///dev/log
          #log_file: udp://loghost:10514

          #log_file: /var/log/salt/master
          #key_logfile: /var/log/salt/key

          # The level of messages to send to the console.
          # One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
          #
          # The following log levels are considered INSECURE and may log sensitive data:
          # ['garbage', 'trace', 'debug']
          #
          #log_level: warning

          # The level of messages to send to the log file.
          # One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
          # If using 'log_granular_levels' this must be set to the highest desired level.
          #log_level_logfile: warning

          # The date and time format used in log messages. Allowed date/time formatting
          # can be seen here: http://docs.python.org/library/time.html#time.strftime
          #log_datefmt: '%H:%M:%S'
          #log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

          # The format of the console logging messages. Allowed formatting options can
          # be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
          #
          # Console log colors are specified by these additional formatters:
          #
          # %(colorlevel)s
          # %(colorname)s
          # %(colorprocess)s
          # %(colormsg)s
          #
          # Since it is desirable to include the surrounding brackets, '[' and ']', in
          # the coloring of the messages, these color formatters also include padding as
          # well.  Color LogRecord attributes are only available for console logging.
          #
          #log_fmt_console: '%(colorlevel)s %(colormsg)s'
          #log_fmt_console: '[%(levelname)-8s] %(message)s'
          #
          #log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'

          # This can be used to control logging levels more specifically.  This
          # example sets the main salt library at the 'warning' level, but sets
          # 'salt.modules' to log at the 'debug' level:
          #   log_granular_levels:
          #     'salt': 'warning'
          #     'salt.modules': 'debug'
          #
          #log_granular_levels: {}

          #####         Node Groups           ######
          ##########################################
          # Node groups allow for logical groupings of minion nodes. A group consists of
          # a group name and a compound target. Nodegroups can reference other nodegroups
          # with 'N@' classifier. Ensure that you do not have circular references.
          #
          #nodegroups:
          #  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
          #  group2: 'G@os:Debian and foo.domain.com'
          #  group3: 'G@os:Debian and N@group1'
          #  group4:
          #    - 'G@foo:bar'
          #    - 'or'
          #    - 'G@foo:baz'

          #####     Range Cluster settings     #####
          ##########################################
          # The range server (and optional port) that serves your cluster information
          # https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
          #
          #range_server: range:80

          #####  Windows Software Repo settings #####
          ###########################################
          # Location of the repo on the master:
          #winrepo_dir_ng: '/srv/salt/win/repo-ng'
          #
          # List of git repositories to include with the local repo:
          #winrepo_remotes_ng:
          #  - 'https://github.com/saltstack/salt-winrepo-ng.git'

          #####  Windows Software Repo settings - Pre 2015.8 #####
          ########################################################
          # Legacy repo settings for pre-2015.8 Windows minions.
          #
          # Location of the repo on the master:
          #winrepo_dir: '/srv/salt/win/repo'
          #
          # Location of the master's repo cache file:
          #winrepo_mastercachefile: '/srv/salt/win/repo/winrepo.p'
          #
          # List of git repositories to include with the local repo:
          #winrepo_remotes:
          #  - 'https://github.com/saltstack/salt-winrepo.git'

          # The refspecs fetched by winrepo remotes
          #winrepo_refspecs:
          #  - '+refs/heads/*:refs/remotes/origin/*'
          #  - '+refs/tags/*:refs/tags/*'
          #

          #####      Returner settings          ######
          ############################################
          # Which returner(s) will be used for minion's result:
          #return: mysql

          ######    Miscellaneous  settings     ######
          ############################################
          # Default match type for filtering events tags: startswith, endswith, find, regex, fnmatch
          #event_match_type: startswith

          # Save runner returns to the job cache
          #runner_returns: True

          # Permanently include any available Python 3rd party modules into thin and minimal Salt
          # when they are generated for Salt-SSH or other purposes.
          # The modules should be named as they are imported in Python.
          # The value can be either one module or a comma-separated list of modules.
          #thin_extra_mods: foo,bar
          #min_extra_mods: foo,bar,baz

          ######      Keepalive settings        ######
          ############################################
          # Warning: Failure to set TCP keepalives on the salt-master can result in
          # not detecting the loss of a minion when the connection is lost or when
          # its host has been terminated without first closing the socket.
          # Salt's Presence System depends on this connection status to know if a minion
          # is "present".
          # ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
          # the OS. If connections between the minion and the master pass through
          # a state tracking device such as a firewall or VPN gateway, there is
          # the risk that it could tear down the connection between the master and
          # minion without informing either party that their connection has been
          # taken away. Enabling TCP keepalives prevents this from happening.

          # Overall state of TCP keepalives: enable (1 or True), disable (0 or False),
          # or leave at the OS default (-1), which on Linux is typically disabled.
          # Default is True (enabled).
          #tcp_keepalive: True

          # How long, in seconds, before the first keepalive is sent. Default 300,
          # which sends the first keepalive after 5 minutes. The OS default (-1) is
          # typically 7200 seconds on Linux; see /proc/sys/net/ipv4/tcp_keepalive_time.
          #tcp_keepalive_idle: 300

          # How many lost probes are needed to consider the connection lost. Default -1
          # to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes.
          #tcp_keepalive_cnt: -1

          # How often, in seconds, to send keepalives after the first one. Default -1 to
          # use OS defaults, typically 75 seconds on Linux, see
          # /proc/sys/net/ipv4/tcp_keepalive_intvl.
          #tcp_keepalive_intvl: -1
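
          # As a sketch, the values below (assumed for illustration, not defaults)
          # would probe an idle connection after 5 minutes and declare it dead after
          # 9 failed probes sent 30 seconds apart:
          #tcp_keepalive: True
          #tcp_keepalive_idle: 300
          #tcp_keepalive_cnt: 9
          #tcp_keepalive_intvl: 30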

   Example minion configuration file
          ##### Primary configuration settings #####
          ##########################################
          # This configuration file is used to manage the behavior of the Salt Minion.
          # With the exception of the location of the Salt Master Server, values that are
          # commented out but have an empty line after the comment are defaults that need
          # not be set in the config. If there is no blank line after the comment, the
          # value is presented as an example and is not the default.

          # By default the minion will automatically include all config files
          # from minion.d/*.conf (minion.d is a directory in the same directory
          # as the main minion config file).
          #default_include: minion.d/*.conf

          # Set the location of the salt master server. If the master server cannot be
          # resolved, then the minion will fail to start.
          #master: salt

          # Set http proxy information for the minion when doing requests
          #proxy_host:
          #proxy_port:
          #proxy_username:
          #proxy_password:

          # If multiple masters are specified in the 'master' setting, the default behavior
          # is to always try to connect to them in the order they are listed. If random_master is
          # set to True, the order will be randomized instead. This can be helpful in distributing
          # the load of many minions executing salt-call requests, for example, from a cron job.
          # If only one master is listed, this setting is ignored and a warning will be logged.
          # NOTE: If master_type is set to failover, use master_shuffle instead.
          #random_master: False

          # Use if master_type is set to failover.
          #master_shuffle: False

          # Minions can connect to multiple masters simultaneously (all masters
          # are "hot"), or can be configured to failover if a master becomes
          # unavailable.  Multiple hot masters are configured by setting this
          # value to "str".  Failover masters can be requested by setting
          # to "failover".  MAKE SURE TO SET master_alive_interval if you are
          # using failover.
          # Setting master_type to 'disable' lets you have a running minion (with
          # engines and beacons) without a master connection.
          # master_type: str
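
          # As a sketch (hypothetical hostnames), a failover pair could be
          # configured as:
          #master:
          #  - master1.example.com
          #  - master2.example.com
          #master_type: failover
          #master_alive_interval: 30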

          # Poll interval in seconds for checking if the master is still there.  Only
          # respected if master_type above is "failover". To disable the interval entirely,
          # set the value to -1. (This may be necessary on machines which have high numbers
          # of TCP connections, such as load balancers.)
          # master_alive_interval: 30

          # If the minion is in multi-master mode and the master_type configuration option
          # is set to "failover", this setting can be set to "True" to force the minion
          # to fail back to the first master in the list if the first master is back online.
          #master_failback: False

          # If the minion is in multi-master mode, the "master_type" configuration is set to
          # "failover", and the "master_failback" option is enabled, the master failback
          # interval can be set to ping the top master with this interval, in seconds.
          #master_failback_interval: 0

          # Set whether the minion should connect to the master via IPv6:
          #ipv6: False

          # Set the number of seconds to wait before attempting to resolve
          # the master hostname if name resolution fails. Defaults to 30 seconds.
          # Set to zero if the minion should shutdown and not retry.
          # retry_dns: 30

          # Set the number of times to attempt to resolve
          # the master hostname if name resolution fails. Defaults to None,
          # which will attempt the resolution indefinitely.
          # retry_dns_count: 3

          # Set the port used by the master reply and authentication server.
          #master_port: 4506

          # The user to run salt.
          #user: root

          # The user to run salt remote execution commands as via sudo. If this option
          # is enabled, then sudo will be used to change the active user executing the
          # remote command. If enabled, the user will need to be allowed access via the
          # sudoers file for the user that the salt minion is configured to run as. The
          # most common option would be to use the root user. If this option is set,
          # the user option should also be set to a non-root user. If migrating from a
          # root minion to a non-root minion, the minion cache should be cleared and
          # ownership of the minion pki directory will need to be changed to the new user.
          #sudo_user: root

          # Specify the location of the daemon process ID file.
          #pidfile: /var/run/salt-minion.pid

          # The root directory prepended to these options: pki_dir, cachedir, log_file,
          # sock_dir, pidfile.
          #root_dir: /

          # The path to the minion's configuration file.
          #conf_file: /etc/salt/minion

          # The directory to store the pki information in
          #pki_dir: /etc/salt/pki/minion

          # Explicitly declare the id for this minion to use. If left commented, the
          # id will be the hostname as returned by the python call: socket.getfqdn()
          # Since salt uses detached ids, it is possible to run multiple minions on
          # the same machine but with different ids; this can be useful for salt
          # compute clusters.
          #id:

          # Cache the minion id to a file when the minion's id is not statically defined
          # in the minion config. Defaults to "True". This setting prevents potential
          # problems when automatic minion id resolution changes, which can cause the
          # minion to lose connection with the master. To turn off minion id caching,
          # set this config to ``False``.
          #minion_id_caching: True

          # Append a domain to a hostname in the event that it does not exist.  This is
          # useful for systems where socket.getfqdn() does not actually result in a
          # FQDN (for instance, Solaris).
          #append_domain:

          # Custom static grains for this minion can be specified here and used in SLS
          # files just like all other grains. This example sets 4 custom grains, with
          # the 'roles' grain having two values that can be matched against.
          #grains:
          #  roles:
          #    - webserver
          #    - memcache
          #  deployment: datacenter4
          #  cabinet: 13
          #  cab_u: 14-15
          #
          # Where cache data goes.
          # This data may contain sensitive data and should be protected accordingly.
          #cachedir: /var/cache/salt/minion

          # Append minion_id to these directories.  Helps with
          # multiple proxies and minions running on the same machine.
          # Allowed elements in the list: pki_dir, cachedir, extension_modules
          # Normally not needed unless running several proxies and/or minions on the same machine
          # Defaults to ['cachedir'] for proxies, [] (empty list) for regular minions
          #append_minionid_config_dirs:

          # Verify and set permissions on configuration directories at startup.
          #verify_env: True

          # The minion can locally cache the return data from jobs sent to it; this
          # can be a good way to keep track of jobs the minion has executed
          # (on the minion side). By default this feature is disabled; to enable it,
          # set cache_jobs to True.
          #cache_jobs: False

          # Set the directory used to hold unix sockets.
          #sock_dir: /var/run/salt/minion

          # The minion can take a while to start up when lspci and/or dmidecode is used
          # to populate the grains for the minion. Set this to False if you do not need
          # GPU hardware grains for your minion.
          # enable_gpu_grains: True

          # Set the default outputter used by the salt-call command. The default is
          # "nested".
          #output: nested

          # To set a list of additional directories to search for salt outputters, set the
          # outputter_dirs option.
          #outputter_dirs: []

          # By default output is colored. To disable colored output, set the color value
          # to False.
          #color: True

          # Do not strip off the colored output from nested results and state outputs
          # (true by default).
          # strip_colors: False

          # Backup files that are replaced by file.managed and file.recurse under
          # 'cachedir'/file_backup relative to their original location and appended
          # with a timestamp. The only valid setting is "minion". Disabled by default.
          #
          # Alternatively this can be specified for each file in state files:
          # /etc/ssh/sshd_config:
          #   file.managed:
          #     - source: salt://ssh/sshd_config
          #     - backup: minion
          #
          #backup_mode: minion

          # When waiting for a master to accept the minion's public key, salt will
          # continuously attempt to reconnect until successful. This is the time, in
          # seconds, between those reconnection attempts.
          #acceptance_wait_time: 10

          # If this is nonzero, the time between reconnection attempts will increase by
          # acceptance_wait_time seconds per iteration, up to this maximum. If this is
          # set to zero, the time between reconnection attempts will stay constant.
          #acceptance_wait_time_max: 0
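
          # For example, with acceptance_wait_time: 10 and acceptance_wait_time_max: 60,
          # the delays between reconnection attempts would be 10, 20, 30, 40, 50, 60,
          # 60, ... seconds.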

          # If the master rejects the minion's public key, retry instead of exiting.
          # Rejected keys will be handled the same as waiting on acceptance.
          #rejected_retry: False

          # When the master key changes, the minion will try to re-auth itself to receive
          # the new master key. In larger environments this can cause a SYN flood on the
          # master because all minions try to re-auth immediately. To prevent this and
          # have a minion wait for a random amount of time, use this optional parameter.
          # The wait-time will be a random number of seconds between 0 and the defined value.
          #random_reauth_delay: 60

          # To avoid overloading a master when many minions startup at once, a randomized
          # delay may be set to tell the minions to wait before connecting to the master.
          # This value is the number of seconds to choose from for a random number. For
          # example, setting this value to 60 will choose a random number of seconds to delay
          # on startup between zero seconds and sixty seconds. Setting to '0' will disable
          # this feature.
          #random_startup_delay: 0

          # When waiting for a master to accept the minion's public key, salt will
          # continuously attempt to reconnect until successful. This is the timeout value,
          # in seconds, for each individual attempt. After this timeout expires, the minion
          # will wait for acceptance_wait_time seconds before trying again. Unless your master
          # is under unusually heavy load, this should be left at the default.
          #auth_timeout: 60

          # Number of consecutive SaltReqTimeoutError that are acceptable when trying to
          # authenticate.
          #auth_tries: 7

          # The number of attempts to connect to a master before giving up.
          # Set this to -1 for unlimited attempts. This allows for a master to have
          # downtime and the minion to reconnect to it later when it comes back up.
          # In 'failover' mode, it is the number of attempts for each set of masters.
          # In this mode, it will cycle through the list of masters for each attempt.
          #
          # This is different than auth_tries because auth_tries attempts to
          # retry auth attempts with a single master. auth_tries is under the
          # assumption that you can connect to the master but not gain
          # authorization from it. master_tries will still cycle through all
          # the masters in a given try, so it is appropriate if you expect
          # occasional downtime from the master(s).
          #master_tries: 1

          # If authentication fails due to SaltReqTimeoutError during a ping_interval,
          # cause the sub-minion process to restart.
          #auth_safemode: False

          # Ping Master to ensure connection is alive (minutes).
          #ping_interval: 0

          # To auto recover minions if master changes IP address (DDNS)
          #    auth_tries: 10
          #    auth_safemode: False
          #    ping_interval: 2
          #
          # Minions won't know the master is missing until a ping fails. After the
          # ping fails, the minion will attempt authentication, which will likely fail
          # and cause a restart. When the minion restarts it will resolve the master's
          # IP and attempt to reconnect.

          # If you don't have any problems with syn-floods, don't bother with the
          # three recon_* settings described below, just leave the defaults!
          #
          # The ZeroMQ pull-socket that binds to the master's publishing interface
          # tries to reconnect immediately if the socket is disconnected (for example
          # if the master processes are restarted). In large setups this will have
          # all minions reconnect immediately, which might flood the master (the
          # ZeroMQ default is usually a 100ms delay). To prevent this, these three
          # recon_* settings can be used.
          # recon_default: the interval in milliseconds that the socket should wait before
          #                trying to reconnect to the master (1000ms = 1 second)
          #
          # recon_max: the maximum time a socket should wait. Each interval, the time
          #            to wait is calculated by doubling the previous time. If
          #            recon_max is reached, it starts again at recon_default. Short example:
          #
          #            reconnect 1: the socket will wait 'recon_default' milliseconds
          #            reconnect 2: 'recon_default' * 2
          #            reconnect 3: ('recon_default' * 2) * 2
          #            reconnect 4: value from previous interval * 2
          #            reconnect 5: value from previous interval * 2
          #            reconnect x: if value >= recon_max, it starts again with recon_default
          #
          # recon_randomize: generate a random wait time on minion start. The wait
          #                  time will be a random value between recon_default and
          #                  recon_default + recon_max. Having all minions reconnect
          #                  with the same recon_default and recon_max values kind of
          #                  defeats the purpose of being able to change these
          #                  settings. If all minions have the same values and your
          #                  setup is quite large (several thousand minions), they
          #                  will still flood the master. The desired behavior is to
          #                  have a timeframe within which all minions try to reconnect.
          #
          # An example of how to use these settings. The goal: have all minions
          # reconnect within a 60 second timeframe on a disconnect.
          # recon_default: 1000
          # recon_max: 59000
          # recon_randomize: True
          #
          # Each minion will have a randomized reconnect value between 'recon_default'
          # and 'recon_default + recon_max', which in this example means between 1000ms
          # and 60000ms (or between 1 and 60 seconds). The generated random value will
          # be doubled after each attempt to reconnect. Let's say the generated random
          # value is 11 seconds (or 11000ms).
          # reconnect 1: wait 11 seconds
          # reconnect 2: wait 22 seconds
          # reconnect 3: wait 33 seconds
          # reconnect 4: wait 44 seconds
          # reconnect 5: wait 55 seconds
          # reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
          # reconnect 7: wait 11 seconds
          # reconnect 8: wait 22 seconds
          # reconnect 9: wait 33 seconds
          # reconnect x: etc.
          #
          # In a setup with ~6000 hosts these settings would average the reconnects
          # to about 100 per second, and all hosts would be reconnected within 60 seconds.
          # recon_default: 100
          # recon_max: 5000
          # recon_randomize: False
          #
          #
          # The loop_interval sets how long in seconds the minion will wait between
          # evaluating the scheduler and running cleanup tasks.  This defaults to 1
          # second on the minion scheduler.
          #loop_interval: 1

          # Some installations choose to store all job returns in a cache or a returner
          # and forgo sending the results back to a master. In this workflow, jobs
          # are most often executed with --async from the Salt CLI and then results
          # are evaluated by examining job caches on the minions or any configured returners.
          # WARNING: Setting this to False will **disable** returns back to the master.
          #pub_ret: True

          # The grains can be merged, instead of overridden, using this option.
          # This allows custom grains to define different subvalues of a dictionary
          # grain. By default this feature is disabled; to enable it, set
          # grains_deep_merge to ``True``.
          #grains_deep_merge: False

          # The grains_refresh_every setting allows for a minion to periodically check
          # its grains to see if they have changed and, if so, to inform the master
          # of the new grains. This operation is moderately expensive, therefore
          # care should be taken not to set this value too low.
          #
          # Note: This value is expressed in __minutes__!
          #
          # A value of 10 minutes is a reasonable default.
          #
          # If the value is set to zero, this check is disabled.
          #grains_refresh_every: 1

          # Cache grains on the minion. Default is False.
          #grains_cache: False

          # Cache rendered pillar data on the minion. Default is False.
          # This may cause 'cachedir'/pillar to contain sensitive data that should be
          # protected accordingly.
          #minion_pillar_cache: False

          # Grains cache expiration, in seconds. If the cache file is older than this
          # number of seconds then the grains cache will be dumped and fully re-populated
          # with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache'
          # is not enabled.
          # grains_cache_expiration: 300

          # Determines whether or not the salt minion should run scheduled mine updates.
          # Defaults to "True". Set to "False" to disable the scheduled mine updates
          # (this essentially just does not add the mine update function to the minion's
          # scheduler).
          #mine_enabled: True

          # Determines whether or not scheduled mine updates should be accompanied by a job
          # return for the job cache. Defaults to "False". Set to "True" to include job
          # returns in the job cache for mine updates.
          #mine_return_job: False

          # Example functions that can be run via the mine facility
          # NO mine functions are established by default.
          # Note these can be defined in the minion's pillar as well.
          #mine_functions:
          #  test.ping: []
          #  network.ip_addrs:
          #    interface: eth0
          #    cidr: '10.0.0.0/8'

          # The number of minutes between mine updates.
          #mine_interval: 60

          # Windows platforms lack posix IPC and must rely on slower TCP based inter-
          # process communications. Set ipc_mode to 'tcp' on such systems.
          #ipc_mode: ipc
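
          # For example, on a Windows minion:
          #ipc_mode: tcp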

          # Overwrite the default tcp ports used by the minion when in tcp mode
          #tcp_pub_port: 4510
          #tcp_pull_port: 4511

          # Passing very large events can cause the minion to consume large amounts of
          # memory. This value tunes the maximum size of a message allowed onto the
          # minion event bus. The value is expressed in bytes.
          #max_event_size: 1048576

          # To detect failed master(s) and fire events on connect/disconnect, set
          # master_alive_interval to the number of seconds to poll the masters for
          # connection events.
          #
          #master_alive_interval: 30

          # The minion can include configuration from other files. To enable this,
          # pass a list of paths to this option. The paths can be either relative or
          # absolute; if relative, they are considered to be relative to the directory
          # the main minion configuration file lives in (this file). Paths can make use
          # of shell-style globbing. If no files are matched by a path passed to this
          # option then the minion will log a warning message.
          #
          # Include a config file from some other path:
          # include: /etc/salt/extra_config
          #
          # Include config from several files and directories:
          #include:
          #  - /etc/salt/extra_config
          #  - /etc/roles/webserver

          # The syndic minion can verify that it is talking to the correct master via the
          # key fingerprint of the higher-level master with the "syndic_finger" config.
          #syndic_finger: ''
          #
          #
          #
          #####   Minion module management     #####
          ##########################################
          # Disable specific modules. This allows the admin to limit the level of
          # access the master has to the minion. The default is an empty list;
          # below is an example of how this needs to be formatted in the config file:
          #disable_modules:
          #  - cmdmod
          #  - test
          #disable_returners: []

          # This is the reverse of disable_modules. Like disable_modules, the
          # default is the empty list, but if this option is set to *anything*,
          # then *only* those modules will load. Note that this is a very large
          # hammer and it can be quite difficult to keep the minion working the
          # way you expect, since Salt uses many modules internally itself. At a
          # bare minimum you need the following enabled, or else the minion
          # won't start.
          #whitelist_modules:
          #  - cmdmod
          #  - test
          #  - config

          # Modules can be loaded from arbitrary paths. This enables the easy deployment
          # of third party modules. Modules for returners and minions can be loaded.
          # Specify a list of extra directories to search for minion modules and
          # returners. These paths must be fully qualified!
          #module_dirs: []
          #returner_dirs: []
          #states_dirs: []
          #render_dirs: []
          #utils_dirs: []
          #
          # A module provider can be statically overwritten or extended for the
          # minion via the providers option; in this case the default module
          # will be overwritten by the specified module. In this example the pkg
          # module will be provided by the yumpkg5 module instead of the system
          # default.
          #providers:
          #  pkg: yumpkg5
          #
          # Enable Cython modules searching and loading. (Default: False)
          #cython_enable: False
          #
          # Specify a max size (in bytes) for modules on import. This feature is currently
          # only supported on *nix operating systems and requires psutil.
          # modules_max_memory: -1

          #####    State Management Settings    #####
          ###########################################
          # The state management system executes all of the state templates on the
          # minion to enable more granular control of system state management. The
          # type of template and serialization used for state management needs to
          # be configured on the minion; the default renderer is yaml_jinja. This
          # is a yaml file rendered from a jinja template. The available options
          # are:
          # yaml_jinja
          # yaml_mako
          # yaml_wempy
          # json_jinja
          # json_mako
          # json_wempy
          #
          #renderer: yaml_jinja
          #
          # The failhard option tells the minions to stop immediately after the first
          # failure detected in the state execution. Defaults to False.
          #failhard: False
          #
          # Reload the modules prior to a highstate run.
          #autoload_dynamic_modules: True
          #
          # clean_dynamic_modules keeps the dynamic modules on the minion in sync
          # with the dynamic modules on the master; this means that if a dynamic
          # module is not on the master it will be deleted from the minion. By
          # default this is enabled and can be disabled by changing this value
          # to False.
          #clean_dynamic_modules: True
          #
          # Normally, the minion is not isolated to any single environment on the master
          # when running states, but the environment can be isolated on the minion side
          # by statically setting it. Remember that the recommended way to manage
          # environments is to isolate via the top file.
          #environment: None
          #
          # Isolates the pillar environment on the minion side. This functions the same
          # as the environment setting, but for pillar instead of states.
          #pillarenv: None
          #
          # Set this option to True to force the pillarenv to be the same as the
          # effective saltenv when running states. Note that if pillarenv is specified,
          # this option will be ignored.
          #pillarenv_from_saltenv: False
          #
          # Set this option to 'True' to force a 'KeyError' to be raised whenever an
          # attempt to retrieve a named value from pillar fails. When this option is set
          # to 'False', the failed attempt returns an empty string. Default is 'False'.
          #pillar_raise_on_missing: False
          #
          # If using the local file directory, then the state top file name needs
          # to be defined; by default this is top.sls.
          #state_top: top.sls
          #
          # Run states when the minion daemon starts. To enable, set startup_states to:
          # 'highstate' -- Execute state.highstate
          # 'sls' -- Read in the sls_list option and execute the named sls files
          # 'top' -- Read top_file option and execute based on that file on the Master
          #startup_states: ''
          #
          # List of states to run when the minion starts up if startup_states is 'sls':
          #sls_list:
          #  - edit.vim
          #  - hyper
          #
          # Top file to execute if startup_states is 'top':
          #top_file: ''

          # Automatically aggregate all states that have support for mod_aggregate by
          # setting to True. Or pass a list of state module names to automatically
          # aggregate just those types.
          #
          # state_aggregate:
          #   - pkg
          #
          #state_aggregate: False

          #####     File Directory Settings    #####
          ##########################################
          # The Salt Minion can redirect all file server operations to a local
          # directory; this allows the same state tree that is on the master to
          # be used if copied completely onto the minion. These settings are a
          # literal copy of the ones on the master, but are used to reference a
          # local directory on the minion.

          # Set the file client. The client defaults to looking on the master server for
          # files, but can be directed to look at the local file directory setting
          # defined below by setting it to "local". Setting a local file_client runs the
          # minion in masterless mode.
          #file_client: remote
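
          # For example, to run this minion masterless against the local file
          # roots defined below:
          #file_client: local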

          # The file directory works on environments passed to the minion. Each
          # environment can have multiple root directories, but the
          # subdirectories in the multiple file roots must not match; otherwise
          # the downloaded files cannot be reliably ensured. A base environment
          # is required to house the top file.
          # Example:
          # file_roots:
          #   base:
          #     - /srv/salt/
          #   dev:
          #     - /srv/salt/dev/services
          #     - /srv/salt/dev/states
          #   prod:
          #     - /srv/salt/prod/services
          #     - /srv/salt/prod/states
          #
          #file_roots:
          #  base:
          #    - /srv/salt

          # Uncomment the line below if you do not want the file_server to follow
          # symlinks when walking the filesystem tree. This is set to True
          # by default. Currently this only applies to the default roots
          # fileserver_backend.
          #fileserver_followsymlinks: False
          #
          # Uncomment the line below if you do not want symlinks to be
          # treated as the files they are pointing to. By default this is set to
          # False. If you uncomment the line below, any symlink detected while
          # listing files on the Master will not be returned to the Minion.
          #fileserver_ignoresymlinks: True
          #
          # By default, the Salt fileserver recurses fully into all defined environments
          # to attempt to find files. To limit this behavior so that the fileserver only
          # traverses directories with SLS files and special Salt directories like _modules,
          # enable the option below. This might be useful for installations where a file root
          # has a very large number of files and performance is negatively impacted. Default
          # is False.
          #fileserver_limit_traversal: False

          # The hash_type is the hash to use when discovering the hash of a file on
          # the local fileserver. The default is sha256, but md5, sha1, sha224, sha384
          # and sha512 are also supported.
          #
          # WARNING: While md5 and sha1 are also supported, do not use them; the
          # high chance of collisions makes them a security risk.
          #
          # Warning: Prior to changing this value, the minion should be stopped and all
          # Salt caches should be cleared.
          #hash_type: sha256

          # The Salt pillar is searched for locally if file_client is set to local. If
          # this is the case, and pillar data is defined, then the pillar_roots need to
          # also be configured on the minion:
          #pillar_roots:
          #  base:
          #    - /srv/pillar

          # Set a hard-limit on the size of the files that can be pushed to the master.
          # It will be interpreted as megabytes. Default: 100
          #file_recv_max_size: 100
          #
          #
          ######        Security settings       #####
          ###########################################
          # Enable "open mode". This mode still maintains encryption but turns
          # off authentication. It is only intended for highly secure
          # environments or for the situation where your keys end up in a bad
          # state. If you run in open mode you do so at your own risk!
          #open_mode: False

          # The size of key that should be generated when creating new keys.
          #keysize: 2048

          # Enable permissive access to the salt keys.  This allows you to run the
          # master or minion as root, but have a non-root group be given access to
          # your pki_dir.  To make the access explicit, root must belong to the group
          # you've given access to. This is potentially quite insecure.
          #permissive_pki_access: False

          # The state_verbose and state_output settings can be used to change the
          # way state system data is printed to the display. By default all data
          # is printed. The state_verbose setting can be set to True or False;
          # when set to False, all data that has a result of True and no changes
          # will be suppressed.
          #state_verbose: True

          # The state_output setting controls which results will be output as
          # full multi-line output:
          # full, terse - each state will be full/terse
          # mixed - only states with errors will be full
          # changes - states with changes and errors will be full
          # full_id, mixed_id, changes_id and terse_id are also allowed;
          # when set, the state ID will be used as name in the output
          #state_output: full
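
          # For example, to show only states with changes or errors in full,
          # using the state ID as the name:
          #state_output: changes_id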

          # The state_output_diff setting changes whether or not the output from
          # successful states is returned. Useful when even the terse output of these
          # states is cluttering the logs. Set it to True to ignore them.
          #state_output_diff: False

          # The state_output_profile setting changes whether profile information
          # will be shown for each state run.
          #state_output_profile: True

          # Fingerprint of the master public key to validate the identity of your Salt master
          # before the initial key exchange. The master fingerprint can be found by running
          # "salt-key -f master.pub" on the Salt master.
          #master_finger: ''

          # Use TLS/SSL encrypted connection between master and minion.
          # Can be set to a dictionary containing keyword arguments corresponding to Python's
          # 'ssl.wrap_socket' method.
          # Default is None.
          #ssl:
          #    keyfile: <path_to_keyfile>
          #    certfile: <path_to_certfile>
          #    ssl_version: PROTOCOL_TLSv1_2

          # Grains to be sent to the master on authentication to check if the minion's key
          # will be accepted automatically. Needs to be configured on the master.
          #autosign_grains:
          #  - uuid
          #  - server_id

          ######        Reactor Settings        #####
          ###########################################
          # Define a salt reactor. See https://docs.saltstack.com/en/latest/topics/reactor/
          #reactor: []
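
          # For example (the event tag pattern and reactor SLS path below are
          # hypothetical; adjust them to your environment):
          #reactor:
          #  - 'salt/minion/*/start':
          #    - /srv/reactor/start.sls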

          # Set the TTL for the cache of the reactor configuration.
          #reactor_refresh_interval: 60

          # Configure the number of workers for the runner/wheel in the reactor.
          #reactor_worker_threads: 10

          # Define the queue size for workers in the reactor.
          #reactor_worker_hwm: 10000

          ######         Thread settings        #####
          ###########################################
          # Disable multiprocessing support, by default when a minion receives a
          # publication a new process is spawned and the command is executed therein.
          #
          # WARNING: Disabling multiprocessing may result in substantial slowdowns
          # when processing large pillars. See https://github.com/saltstack/salt/issues/38758
          # for a full explanation.
          #multiprocessing: True

          # Limit the maximum amount of processes or threads created by salt-minion.
          # This is useful to avoid resource exhaustion in case the minion receives more
          # publications than it is able to handle, as it limits the number of spawned
          # processes or threads. -1 is the default and disables the limit.
          #process_count_max: -1

          #####         Logging settings       #####
          ##########################################
          # The location of the minion log file
          # The minion log can be sent to a regular file, local path name, or network
          # location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
          # ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
          # format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
          #log_file: /var/log/salt/minion
          #log_file: file:///dev/log
          #log_file: udp://loghost:10514
          #
          #log_file: /var/log/salt/minion
          #key_logfile: /var/log/salt/key

          # The level of messages to send to the console.
          # One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
          #
          # The following log levels are considered INSECURE and may log sensitive data:
          # ['garbage', 'trace', 'debug']
          #
          # Default: 'warning'
          #log_level: warning

          # The level of messages to send to the log file.
          # One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
          # If using 'log_granular_levels' this must be set to the highest desired level.
          # Default: 'warning'
          #log_level_logfile:
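
          # For example, to capture more detail in the log file than on the
          # console:
          #log_level_logfile: info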

          # The date and time format used in log messages. Allowed date/time formatting
          # can be seen here: http://docs.python.org/library/time.html#time.strftime
          #log_datefmt: '%H:%M:%S'
          #log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

          # The format of the console logging messages. Allowed formatting options can
          # be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
          #
          # Console log colors are specified by these additional formatters:
          #
          # %(colorlevel)s
          # %(colorname)s
          # %(colorprocess)s
          # %(colormsg)s
          #
          # Since it is desirable to include the surrounding brackets, '[' and ']', in
          # the coloring of the messages, these color formatters also include padding as
          # well.  Color LogRecord attributes are only available for console logging.
          #
          #log_fmt_console: '%(colorlevel)s %(colormsg)s'
          #log_fmt_console: '[%(levelname)-8s] %(message)s'
          #
          #log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'

          # This can be used to control logging levels more specifically. This
          # example sets the main salt library at the 'warning' level, but sets
          # 'salt.modules' to log at the 'debug' level:
          #   log_granular_levels:
          #     'salt': 'warning'
          #     'salt.modules': 'debug'
          #
          #log_granular_levels: {}

          # To diagnose issues with minions disconnecting or missing returns, ZeroMQ
          # supports the use of monitor sockets to log connection events. This
          # feature requires ZeroMQ 4.0 or higher.
          #
          # To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a
          # debug level or higher.
          #
          # A sample log event is as follows:
          #
          # [DEBUG   ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
          # 'value': 27, 'description': 'EVENT_DISCONNECTED'}
          #
          # All events logged will include the string 'ZeroMQ event'. A connection event
          # should be logged as the minion starts up and initially connects to the
          # master. If not, check for debug log level and that the necessary version of
          # ZeroMQ is installed.
          #
          #zmq_monitor: False

          # Number of times to try to authenticate with the salt master when reconnecting
          # to the master
          #tcp_authentication_retries: 5

          ######      Module configuration      #####
          ###########################################
          # Salt allows for modules to be passed arbitrary configuration data.
          # Any data passed here in valid yaml format will be passed on to the
          # salt minion modules for use. It is STRONGLY recommended that a naming
          # convention be used in which the module name is followed by a . and
          # then the value. Also, all top level data must be applied via the yaml
          # dict construct. Some examples:
          #
          # You can specify that all modules should run in test mode:
          #test: True
          #
          # A simple value for the test module:
          #test.foo: foo
          #
          # A list for the test module:
          #test.bar: [baz,quo]
          #
          # A dict for the test module:
          #test.baz: {spam: sausage, cheese: bread}
          #
          #
          ######      Update settings          ######
          ###########################################
          # Using the features in Esky, a salt minion can both run as a frozen app and
          # be updated on the fly. These options control how the update process
          # (saltutil.update()) behaves.
          #
          # The url for finding and downloading updates. Disabled by default.
          #update_url: False
          #
          # The list of services to restart after a successful update. Empty by default.
          #update_restart_services: []

          ######      Keepalive settings        ######
          ############################################
          # ZeroMQ now includes support for configuring SO_KEEPALIVE if supported
          # by the OS. If connections between the minion and the master pass
          # through a state tracking device such as a firewall or VPN gateway,
          # there is the risk that it could tear down the connection between the
          # master and minion without informing either party that their
          # connection has been taken away. Enabling TCP Keepalives prevents this
          # from happening.

          # Overall state of TCP Keepalives: enable (1 or True), disable (0 or
          # False), or leave at the OS default (-1), which on Linux is typically
          # disabled. Default True, enabled.
          #tcp_keepalive: True

          # How long, in seconds, before the first keepalive should be sent.
          # Default 300, to send the first keepalive after 5 minutes. The OS
          # default (-1) is typically 7200 seconds on Linux; see
          # /proc/sys/net/ipv4/tcp_keepalive_time.
          #tcp_keepalive_idle: 300

          # How many lost probes are needed to consider the connection lost. Default -1
          # to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes.
          #tcp_keepalive_cnt: -1

          # How often, in seconds, to send keepalives after the first one. Default -1 to
          # use OS defaults, typically 75 seconds on Linux, see
          # /proc/sys/net/ipv4/tcp_keepalive_intvl.
          #tcp_keepalive_intvl: -1

          ######   Windows Software settings    ######
          ############################################
          # Location of the repository cache file on the master:
          #win_repo_cachefile: 'salt://win/repo/winrepo.p'

          ######      Returner  settings        ######
          ############################################
          # Default Minion returners. Can be a comma delimited string or a list:
          #
          #return: mysql
          #
          #return: mysql,slack,redis
          #
          #return:
          #  - mysql
          #  - hipchat
          #  - slack

          ######    Miscellaneous  settings     ######
          ############################################
          # Default match type for filtering events tags: startswith, endswith, find, regex, fnmatch
          #event_match_type: startswith
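
          # For example, to filter event tags with regular expressions:
          #event_match_type: regex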

   Example proxy minion configuration file
          ##### Primary configuration settings #####
          ##########################################
          # This configuration file is used to manage the behavior of all Salt Proxy
          # Minions on this host.
          # With the exception of the location of the Salt Master Server, values that are
          # commented out but have an empty line after the comment are defaults that need
          # not be set in the config. If there is no blank line after the comment, the
          # value is presented as an example and is not the default.

          # By default the minion will automatically include all config files
          # from minion.d/*.conf (minion.d is a directory in the same directory
          # as the main minion config file).
          #default_include: minion.d/*.conf

          # Backwards compatibility option for proxymodules created before 2015.8.2
          # This setting will default to 'False' in the 2016.3.0 release
          # Setting this to True adds proxymodules to the __opts__ dictionary.
          # This breaks several Salt features (basically anything that serializes
          # __opts__ over the wire) but retains backwards compatibility.
          #add_proxymodule_to_opts: True

          # Set the location of the salt master server. If the master server cannot be
          # resolved, then the minion will fail to start.
          #master: salt

          # If a proxymodule has a function called 'grains', then call it during
          # regular grains loading and merge the results with the proxy's grains
          # dictionary.  Otherwise it is assumed that the module calls the grains
          # function in a custom way and returns the data elsewhere.
          #
          # Default to False for 2016.3 and 2016.11. Switch to True for 2017.7.0.
          # proxy_merge_grains_in_module: True

          # If a proxymodule has a function called 'alive' returning a boolean
          # flag reflecting the state of the connection with the remote device,
          # when this option is set as True, a scheduled job on the proxy will
          # try restarting the connection. The polling frequency depends on the
          # next option, 'proxy_keep_alive_interval'. Added in 2017.7.0.
          # proxy_keep_alive: True

          # The polling interval (in minutes) to check if the underlying connection
          # with the remote device is still alive. This option requires
          # 'proxy_keep_alive' to be configured as True and the proxymodule to
          # implement the 'alive' function. Added in 2017.7.0.
          # proxy_keep_alive_interval: 1

          # By default, any proxy opens the connection with the remote device
          # when initialized. Some proxymodules allow, through this option,
          # opening and closing the session per command. This requires the
          # proxymodule to have this capability. Please consult the
          # documentation to see if the proxy type used can be that flexible.
          # Added in 2017.7.0.
          # proxy_always_alive: True

          # If multiple masters are specified in the 'master' setting, the default behavior
          # is to always try to connect to them in the order they are listed. If random_master is
          # set to True, the order will be randomized instead. This can be helpful in distributing
          # the load of many minions executing salt-call requests, for example, from a cron job.
          # If only one master is listed, this setting is ignored and a warning will be logged.
          #random_master: False

          # Minions can connect to multiple masters simultaneously (all masters
          # are "hot"), or can be configured to failover if a master becomes
          # unavailable.  Multiple hot masters are configured by setting this
          # value to "str".  Failover masters can be requested by setting
          # to "failover".  MAKE SURE TO SET master_alive_interval if you are
          # using failover.
          # master_type: str
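
          # For example, to fail over between the configured masters (be sure
          # master_alive_interval below is also set):
          # master_type: failover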

          # Poll interval in seconds for checking if the master is still there.  Only
          # respected if master_type above is "failover".
          # master_alive_interval: 30

          # Set whether the minion should connect to the master via IPv6:
          #ipv6: False

          # Set the number of seconds to wait before attempting to resolve
          # the master hostname if name resolution fails. Defaults to 30 seconds.
          # Set to zero if the minion should shut down and not retry.
          # retry_dns: 30

          # Set the port used by the master reply and authentication server.
          #master_port: 4506

          # The user to run salt.
          #user: root

          # Setting sudo_user will cause salt to run all execution modules under
          # sudo as the user given in sudo_user. The user under which the salt
          # minion process itself runs will still be that provided in the user
          # config above, but all execution modules run by the minion will be
          # rerouted through sudo.
          #sudo_user: saltdev

          # Specify the location of the daemon process ID file.
          #pidfile: /var/run/salt-minion.pid

          # The root directory prepended to these options: pki_dir, cachedir, log_file,
          # sock_dir, pidfile.
          #root_dir: /

          # The directory to store the pki information in
          #pki_dir: /etc/salt/pki/minion

          # Where cache data goes.
          # This data may contain sensitive data and should be protected accordingly.
          #cachedir: /var/cache/salt/minion

          # Append minion_id to these directories.  Helps with
          # multiple proxies and minions running on the same machine.
          # Allowed elements in the list: pki_dir, cachedir, extension_modules
          # Normally not needed unless running several proxies and/or minions on the same machine
          # Defaults to ['cachedir'] for proxies, [] (empty list) for regular minions
          # append_minionid_config_dirs:
          #   - cachedir

          # Verify and set permissions on configuration directories at startup.
          #verify_env: True

          # The minion can locally cache the return data from jobs sent to it;
          # this can be a good way to keep track of jobs the minion has executed
          # (on the minion side). By default this feature is disabled; to enable,
          # set cache_jobs to True.
          #cache_jobs: False

          # Set the directory used to hold unix sockets.
          #sock_dir: /var/run/salt/minion

          # Set the default outputter used by the salt-call command. The default is
          # "nested".
          #output: nested
          #
          # By default output is colored. To disable colored output, set the color value
          # to False.
          #color: True

          # Do not strip off the colored output from nested results and state outputs
          # (true by default).
          # strip_colors: False

          # Backup files that are replaced by file.managed and file.recurse under
          # 'cachedir'/file_backup relative to their original location and appended
          # with a timestamp. The only valid setting is "minion". Disabled by default.
          #
          # Alternatively this can be specified for each file in state files:
          # /etc/ssh/sshd_config:
          #   file.managed:
          #     - source: salt://ssh/sshd_config
          #     - backup: minion
          #
          #backup_mode: minion

          # When waiting for a master to accept the minion's public key, salt will
          # continuously attempt to reconnect until successful. This is the time, in
          # seconds, between those reconnection attempts.
          #acceptance_wait_time: 10

          # If this is nonzero, the time between reconnection attempts will increase by
          # acceptance_wait_time seconds per iteration, up to this maximum. If this is
          # set to zero, the time between reconnection attempts will stay constant.
          #acceptance_wait_time_max: 0

          # If the master rejects the minion's public key, retry instead of exiting.
          # Rejected keys will be handled the same as waiting on acceptance.
          #rejected_retry: False

          # When the master key changes, the minion will try to re-auth itself to receive
          # the new master key. In larger environments this can cause a SYN flood on the
          # master because all minions try to re-auth immediately. To prevent this and
          # have a minion wait for a random amount of time, use this optional parameter.
          # The wait-time will be a random number of seconds between 0 and the defined value.
          #random_reauth_delay: 60

          # When waiting for a master to accept the minion's public key, salt will
          # continuously attempt to reconnect until successful. This is the timeout value,
          # in seconds, for each individual attempt. After this timeout expires, the minion
          # will wait for acceptance_wait_time seconds before trying again. Unless your master
          # is under unusually heavy load, this should be left at the default.
          #auth_timeout: 60

          # Number of consecutive SaltReqTimeoutError failures that are acceptable when
          # trying to authenticate.
          #auth_tries: 7

          # If authentication fails due to SaltReqTimeoutError during a ping_interval,
          # restart the sub-minion process.
          #auth_safemode: False

          # Ping Master to ensure connection is alive (minutes).
          #ping_interval: 0

          # To auto-recover minions if the master changes IP address (DDNS):
          #    auth_tries: 10
          #    auth_safemode: False
          #    ping_interval: 90
          #
          # Minions won't know the master is missing until a ping fails. After the ping
          # fails, the minion will attempt to authenticate, likely fail out, and restart.
          # When the minion restarts it will re-resolve the master's IP and reconnect.

          # If you don't have any problems with syn-floods, don't bother with the
          # three recon_* settings described below, just leave the defaults!
          #
          # The ZeroMQ pull-socket that connects to the master's publishing interface
          # tries to reconnect immediately if the socket is disconnected (for example,
          # if the master processes are restarted). In large setups this will have all
          # minions reconnect immediately, which might flood the master (the ZeroMQ
          # default is usually a 100ms delay). To prevent this, the three recon_*
          # settings below can be used.
          # recon_default: the interval in milliseconds that the socket should wait before
          #                trying to reconnect to the master (1000ms = 1 second)
          #
          # recon_max: the maximum time a socket should wait. Each interval, the time
          #            to wait is calculated by doubling the previous time. Once
          #            recon_max is reached, the wait starts again at recon_default.
          #            Short example:
          #
          #            reconnect 1: the socket will wait 'recon_default' milliseconds
          #            reconnect 2: 'recon_default' * 2
          #            reconnect 3: ('recon_default' * 2) * 2
          #            reconnect 4: value from previous interval * 2
          #            reconnect 5: value from previous interval * 2
          #            reconnect x: if value >= recon_max, it starts again with recon_default
          #
          # recon_randomize: generate a random wait time on minion start. The wait time
          #                  will be a random value between recon_default and
          #                  recon_default + recon_max. Having all minions reconnect
          #                  with the same recon_default and recon_max values defeats
          #                  the purpose of being able to change these settings: if all
          #                  minions have the same values and the setup is quite large
          #                  (several thousand minions), they will still flood the
          #                  master. The desired behavior is to have a timeframe within
          #                  which all minions try to reconnect.
          #
          # An example of how to use these settings. The goal: have all minions
          # reconnect within a 60-second timeframe after a disconnect.
          # recon_default: 1000
          # recon_max: 59000
          # recon_randomize: True
          #
          # Each minion will have a randomized reconnect value between 'recon_default'
          # and 'recon_default + recon_max', which in this example means between 1000ms
          # and 60000ms (or between 1 and 60 seconds). The generated random value will
          # be doubled after each attempt to reconnect. Let's say the generated random
          # value is 11 seconds (or 11000ms).
          # reconnect 1: wait 11 seconds
          # reconnect 2: wait 22 seconds
          # reconnect 3: wait 33 seconds
          # reconnect 4: wait 44 seconds
          # reconnect 5: wait 55 seconds
          # reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
          # reconnect 7: wait 11 seconds
          # reconnect 8: wait 22 seconds
          # reconnect 9: wait 33 seconds
          # reconnect x: etc.
          #
          # In a setup with ~6000 hosts these settings would average the reconnects
          # to about 100 per second, and all hosts would be reconnected within 60 seconds.
          # recon_default: 100
          # recon_max: 5000
          # recon_randomize: False
          #
          #
          # The loop_interval sets how long in seconds the minion will wait between
          # evaluating the scheduler and running cleanup tasks. This defaults to a
          # sane 60 seconds, but if the minion scheduler needs to be evaluated more
          # often, lower this value.
          #loop_interval: 60

          # The grains_refresh_every setting allows for a minion to periodically check
          # its grains to see if they have changed and, if so, to inform the master
          # of the new grains. This operation is moderately expensive, therefore
          # care should be taken not to set this value too low.
          #
          # Note: This value is expressed in __minutes__!
          #
          # A value of 10 minutes is a reasonable default.
          #
          # If the value is set to zero, this check is disabled.
          #grains_refresh_every: 1

          # Cache grains on the minion. Default is False.
          #grains_cache: False

          # Grains cache expiration, in seconds. If the cache file is older than this
          # number of seconds then the grains cache will be dumped and fully re-populated
          # with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache'
          # is not enabled.
          # grains_cache_expiration: 300

          # Windows platforms lack POSIX IPC and must rely on slower TCP-based inter-
          # process communications. Set ipc_mode to 'tcp' on such systems.
          #ipc_mode: ipc

          # Overwrite the default tcp ports used by the minion when in tcp mode
          #tcp_pub_port: 4510
          #tcp_pull_port: 4511

          # Passing very large events can cause the minion to consume large amounts of
          # memory. This value tunes the maximum size of a message allowed onto the
          # minion event bus. The value is expressed in bytes.
          #max_event_size: 1048576

          # To detect failed master(s) and fire events on connect/disconnect, set
          # master_alive_interval to the number of seconds to poll the masters for
          # connection events.
          #
          #master_alive_interval: 30

          # The minion can include configuration from other files. To enable this,
          # pass a list of paths to this option. The paths can be either relative or
          # absolute; if relative, they are considered to be relative to the directory
          # the main minion configuration file lives in (this file). Paths can make use
          # of shell-style globbing. If no files are matched by a path passed to this
          # option then the minion will log a warning message.
          #
          # Include a config file from some other path:
          # include: /etc/salt/extra_config
          #
          # Include config from several files and directories:
          #include:
          #  - /etc/salt/extra_config
          #  - /etc/roles/webserver
          #
          #
          #
          #####   Minion module management     #####
          ##########################################
          # Disable specific modules. This allows the admin to limit the level of
          # access the master has to the minion.
          #disable_modules: [cmd,test]
          #disable_returners: []
          #
          # Modules can be loaded from arbitrary paths. This enables the easy deployment
          # of third party modules. Modules for returners and minions can be loaded.
          # Specify a list of extra directories to search for minion modules and
          # returners. These paths must be fully qualified!
          #module_dirs: []
          #returner_dirs: []
          #states_dirs: []
          #render_dirs: []
          #utils_dirs: []
          #
          # A module provider can be statically overwritten or extended for the minion
          # via the providers option. In this case the default module will be
          # overwritten by the specified module. In this example the pkg module will
          # be provided by the yumpkg5 module instead of the system default.
          #providers:
          #  pkg: yumpkg5
          #
          # Enable Cython modules searching and loading. (Default: False)
          #cython_enable: False
          #
          # Specify a max size (in bytes) for modules on import. This feature is currently
          # only supported on *nix operating systems and requires psutil.
          # modules_max_memory: -1

          #####    State Management Settings    #####
          ###########################################
          # The state management system executes all of the state templates on the minion
          # to enable more granular control of system state management. The type of
          # template and serialization used for state management needs to be configured
          # on the minion. The default renderer is yaml_jinja: a YAML file
          # rendered from a Jinja template. The available options are:
          # yaml_jinja
          # yaml_mako
          # yaml_wempy
          # json_jinja
          # json_mako
          # json_wempy
          #
          #renderer: yaml_jinja
          #
          # The failhard option tells the minions to stop immediately after the first
          # failure detected in the state execution. Defaults to False.
          #failhard: False
          #
          # Reload the modules prior to a highstate run.
          #autoload_dynamic_modules: True
          #
          # clean_dynamic_modules keeps the dynamic modules on the minion in sync with
          # the dynamic modules on the master. This means that if a dynamic module is
          # not on the master it will be deleted from the minion. By default this is
          # enabled; it can be disabled by changing this value to False.
          #clean_dynamic_modules: True
          #
          # Normally, the minion is not isolated to any single environment on the master
          # when running states, but the environment can be isolated on the minion side
          # by statically setting it. Remember that the recommended way to manage
          # environments is to isolate via the top file.
          #environment: None
          #
          # If using the local file directory, the state top file name needs to be
          # defined. By default this is top.sls.
          #state_top: top.sls
          #
          # Run states when the minion daemon starts. To enable, set startup_states to:
          # 'highstate' -- Execute state.highstate
          # 'sls' -- Read in the sls_list option and execute the named sls files
          # 'top' -- Read top_file option and execute based on that file on the Master
          #startup_states: ''
          #
          # List of states to run when the minion starts up if startup_states is 'sls':
          #sls_list:
          #  - edit.vim
          #  - hyper
          #
          # Top file to execute if startup_states is 'top':
          #top_file: ''

          # Automatically aggregate all states that have support for mod_aggregate by
          # setting to True. Or pass a list of state module names to automatically
          # aggregate just those types.
          #
          # state_aggregate:
          #   - pkg
          #
          #state_aggregate: False

          #####     File Directory Settings    #####
          ##########################################
          # The Salt Minion can redirect all file server operations to a local
          # directory. This allows the same state tree that is on the master to be
          # used if it is copied completely onto the minion. This is a literal copy of
          # the settings on the master, but used to reference a local directory on
          # the minion.

          # Set the file client. The client defaults to looking on the master server for
          # files, but can be directed to look at the local file directory setting
          # defined below by setting it to "local". Setting a local file_client runs the
          # minion in masterless mode.
          #file_client: remote

          # The file directory works on environments passed to the minion. Each
          # environment can have multiple root directories, but the subdirectories in
          # the multiple file roots must not match; otherwise the downloaded files
          # cannot be reliably ensured. A base environment is required to house the
          # top file.
          # Example:
          # file_roots:
          #   base:
          #     - /srv/salt/
          #   dev:
          #     - /srv/salt/dev/services
          #     - /srv/salt/dev/states
          #   prod:
          #     - /srv/salt/prod/services
          #     - /srv/salt/prod/states
          #
          #file_roots:
          #  base:
          #    - /srv/salt

          # By default, the Salt fileserver recurses fully into all defined environments
          # to attempt to find files. To limit this behavior so that the fileserver only
          # traverses directories with SLS files and special Salt directories like _modules,
          # enable the option below. This might be useful for installations where a file root
          # has a very large number of files and performance is negatively impacted. Default
          # is False.
          #fileserver_limit_traversal: False

          # The hash_type is the hash to use when discovering the hash of a file in
          # the local fileserver. The default is sha256 but sha224, sha384 and sha512
          # are also supported.
          #
          # WARNING: While md5 and sha1 are also supported, do not use them due to the
          # high chance of possible collisions and thus security breach.
          #
          # Warning: Prior to changing this value, the minion should be stopped and all
          # Salt caches should be cleared.
          #hash_type: sha256

          # The Salt pillar is searched for locally if file_client is set to local. If
          # this is the case, and pillar data is defined, then the pillar_roots need to
          # also be configured on the minion:
          #pillar_roots:
          #  base:
          #    - /srv/pillar
          #
          #
          ######        Security settings       #####
          ###########################################
          # Enable "open mode". This mode still maintains encryption but turns off
          # authentication. It is only intended for highly secure environments or for
          # the situation where your keys end up in a bad state. If you run in open
          # mode you do so at your own risk!
          #open_mode: False

          # Enable permissive access to the salt keys.  This allows you to run the
          # master or minion as root, but have a non-root group be given access to
          # your pki_dir.  To make the access explicit, root must belong to the group
          # you've given access to. This is potentially quite insecure.
          #permissive_pki_access: False

          # The state_verbose and state_output settings can be used to change the way
          # state system data is printed to the display. By default all data is printed.
          # The state_verbose setting can be set to True or False; when set to False,
          # all data that has a result of True and no changes will be suppressed.
          #state_verbose: True

          # The state_output setting controls how the results of each state are shown:
          # full, terse - each state will be full/terse
          # mixed - only states with errors will be full
          # changes - states with changes and errors will be full
          # full_id, mixed_id, changes_id and terse_id are also allowed;
          # when set, the state ID will be used as the name in the output
          #state_output: full

          # The state_output_diff setting changes whether or not the output from
          # successful states is returned. Useful when even the terse output of these
          # states is cluttering the logs. Set it to True to ignore them.
          #state_output_diff: False

          # The state_output_profile setting changes whether profile information
          # will be shown for each state run.
          #state_output_profile: True

          # Fingerprint of the master public key to validate the identity of your Salt master
          # before the initial key exchange. The master fingerprint can be found by running
          # "salt-key -F master" on the Salt master.
          #master_finger: ''

          ######         Thread settings        #####
          ###########################################
          # Disable multiprocessing support. By default, when a minion receives a
          # publication a new process is spawned and the command is executed therein.
          #multiprocessing: True

          #####         Logging settings       #####
          ##########################################
          # The location of the minion log file
          # The minion log can be sent to a regular file, local path name, or network
          # location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
          # ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
          # format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
          #log_file: /var/log/salt/minion
          #log_file: file:///dev/log
          #log_file: udp://loghost:10514
          #
          #log_file: /var/log/salt/minion
          #key_logfile: /var/log/salt/key

          # The level of messages to send to the console.
          # One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
          #
          # The following log levels are considered INSECURE and may log sensitive data:
          # ['garbage', 'trace', 'debug']
          #
          # Default: 'warning'
          #log_level: warning

          # The level of messages to send to the log file.
          # One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
          # If using 'log_granular_levels' this must be set to the highest desired level.
          # Default: 'warning'
          #log_level_logfile:

          # The date and time format used in log messages. Allowed date/time formatting
          # can be seen here: http://docs.python.org/library/time.html#time.strftime
          #log_datefmt: '%H:%M:%S'
          #log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

          # The format of the console logging messages. Allowed formatting options can
          # be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
          #
          # Console log colors are specified by these additional formatters:
          #
          # %(colorlevel)s
          # %(colorname)s
          # %(colorprocess)s
          # %(colormsg)s
          #
          # Since it is desirable to include the surrounding brackets, '[' and ']', in
          # the coloring of the messages, these color formatters also include padding as
          # well.  Color LogRecord attributes are only available for console logging.
          #
          #log_fmt_console: '%(colorlevel)s %(colormsg)s'
          #log_fmt_console: '[%(levelname)-8s] %(message)s'
          #
          #log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'

          # This can be used to control logging levels more specifically.  This
          # example sets the main salt library at the 'warning' level, but sets
          # 'salt.modules' to log at the 'debug' level:
          #   log_granular_levels:
          #     'salt': 'warning'
          #     'salt.modules': 'debug'
          #
          #log_granular_levels: {}

          # To diagnose issues with minions disconnecting or missing returns, ZeroMQ
          # supports the use of monitor sockets to log connection events. This
          # feature requires ZeroMQ 4.0 or higher.
          #
          # To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a
          # debug level or higher.
          #
          # A sample log event is as follows:
          #
          # [DEBUG   ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
          # 'value': 27, 'description': 'EVENT_DISCONNECTED'}
          #
          # All events logged will include the string 'ZeroMQ event'. A connection
          # event should be logged as the minion starts up and initially connects to
          # the master. If not, check that the log level is debug and that the
          # necessary version of ZeroMQ is installed.
          #
          #zmq_monitor: False

          ######      Module configuration      #####
          ###########################################
          # Salt allows for modules to be passed arbitrary configuration data. Any data
          # passed here in valid YAML format will be passed on to the salt minion
          # modules for use. It is STRONGLY recommended that a naming convention be
          # used in which the module name is followed by a . and then the value. Also,
          # all top-level data must be applied via the YAML dict construct. Some
          # examples:
          #
          # You can specify that all modules should run in test mode:
          #test: True
          #
          # A simple value for the test module:
          #test.foo: foo
          #
          # A list for the test module:
          #test.bar: [baz,quo]
          #
          # A dict for the test module:
          #test.baz: {spam: sausage, cheese: bread}
          #
          #
          ######      Update settings          ######
          ###########################################
          # Using the features in Esky, a salt minion can both run as a frozen app and
          # be updated on the fly. These options control how the update process
          # (saltutil.update()) behaves.
          #
          # The url for finding and downloading updates. Disabled by default.
          #update_url: False
          #
          # The list of services to restart after a successful update. Empty by default.
          #update_restart_services: []

          ######      Keepalive settings        ######
          ############################################
          # ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
          # the OS. If connections between the minion and the master pass through
          # a state tracking device such as a firewall or VPN gateway, there is
          # the risk that the device could tear down the connection between the master
          # and minion without informing either party that their connection has been
          # taken away.
          # Enabling TCP Keepalives prevents this from happening.

          # Overall state of TCP Keepalives: enable (1 or True), disable (0 or False),
          # or leave at the OS default (-1), which on Linux is typically disabled.
          # Default True (enabled).
          #tcp_keepalive: True

          # How long before the first keepalive should be sent, in seconds. Default
          # 300, i.e. send the first keepalive after 5 minutes. The OS default (-1) is
          # typically 7200 seconds on Linux; see /proc/sys/net/ipv4/tcp_keepalive_time.
          #tcp_keepalive_idle: 300

          # How many lost probes are needed to consider the connection lost. Default -1
          # to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes.
          #tcp_keepalive_cnt: -1

          # How often, in seconds, to send keepalives after the first one. Default -1 to
          # use OS defaults, typically 75 seconds on Linux, see
          # /proc/sys/net/ipv4/tcp_keepalive_intvl.
          #tcp_keepalive_intvl: -1

          ######   Windows Software settings    ######
          ############################################
          # Location of the repository cache file on the master:
          #win_repo_cachefile: 'salt://win/repo/winrepo.p'

          ######      Returner  settings        ######
          ############################################
          # Which returner(s) will be used for minion's result:
          #return: mysql

   Minion Blackout Configuration
       New in version 2016.3.0.

       Salt  supports  minion  blackouts. When a minion is in blackout mode, all remote execution
       commands are disabled. This allows production minions to be put “on hold”, eliminating the
       risk of an untimely configuration change.

       Minion blackouts are configured via a special pillar key, minion_blackout. If this key
       is set to True, then the minion will reject all incoming commands, except for
       saltutil.refresh_pillar. (The exception is important so that minions can be brought
       out of blackout mode.)
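
       As an illustration, blackout could be enabled from a pillar SLS file along these
       lines (the file path and the idea of assigning it via the pillar top file are
       assumptions for this example, not defaults):

```yaml
# /srv/pillar/blackout.sls -- illustrative path, assigned via the pillar top file
minion_blackout: True
```

       Removing the key (or setting it to False) and letting minions run
       saltutil.refresh_pillar brings them back out of blackout mode.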

       Salt also supports an explicit whitelist of additional  functions  that  will  be  allowed
       during blackout. This is configured with the special pillar key minion_blackout_whitelist,
       which is formed as a list:

          minion_blackout_whitelist:
            - test.ping
            - pillar.get

   Access Control System
       New in version 0.10.4.

       Salt maintains a standard system used to grant non-administrative users granular
       control to execute Salt commands. This access control system has been applied to
       all systems used to configure access to non-administrative control interfaces in
       Salt.

       These interfaces include the peer system, the external auth system and the
       publisher ACL system.

       The access control system mandates a standard configuration syntax used in all
       three of the aforementioned systems. While this adds functionality to the
       configuration in 0.10.4, it does not negate the old configuration.

       Now  specific  functions  can  be opened up to specific minions from specific users in the
       case of external auth and publisher ACLs, and for specific minions in the case of the peer
       system.

   Publisher ACL system
       The salt publisher ACL system allows system users other than root to execute
       select salt commands on minions from the master.

       The publisher ACL system is configured in the master configuration file via the
       publisher_acl configuration option. Under the publisher_acl configuration option,
       the users allowed to send commands are specified, along with a list of the minion
       functions that will be made available to each specified user. Both users and
       functions can be specified by exact match, shell glob, or regular expression. This
       configuration is much like the external_auth configuration:

          publisher_acl:
            # Allow thatch to execute anything.
            thatch:
              - .*
            # Allow fred to use test and pkg, but only on "web*" minions.
            fred:
              - web*:
                - test.*
                - pkg.*
            # Allow admin and managers to use saltutil module functions
            admin|manager_.*:
              - saltutil.*
            # Allow users to use only my_mod functions on "web*" minions with specific arguments.
            user_.*:
              - web*:
                - 'my_mod.*':
                    args:
                      - 'a.*'
                      - 'b.*'
                    kwargs:
                      'kwa': 'kwa.*'
                      'kwb': 'kwb'

   Permission Issues
       Directories  required  for  publisher_acl  must  be  modified  to be readable by the users
       specified:

          chmod 755 /var/cache/salt /var/cache/salt/master /var/cache/salt/master/jobs /var/run/salt /var/run/salt/master

       NOTE:
          In addition to the changes above you will  also  need  to  modify  the  permissions  of
          /var/log/salt  and  the  existing  log file to be writable by the user(s) which will be
          running the commands. If you do not wish to do this then you must  disable  logging  or
          Salt will generate errors as it cannot write to the logs as the system users.

       If  you are upgrading from earlier versions of salt you must also remove any existing user
       keys and re-start the Salt master:

          rm /var/cache/salt/.*key
          service salt-master restart

   Whitelist and Blacklist
       Salt’s authentication systems can be configured by specifying  what  is  allowed  using  a
       whitelist,  or  by  specifying  what  is  disallowed  using  a blacklist. If you specify a
       whitelist, only specified  operations  are  allowed.  If  you  specify  a  blacklist,  all
       operations are allowed except those that are blacklisted.

       See publisher_acl and publisher_acl_blacklist.
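
       As a sketch of the blacklist form, the master configuration accepts a
       publisher_acl_blacklist option using the same match syntax (exact name, shell
       glob, or regular expression); the users and modules listed here are illustrative:

```yaml
publisher_acl_blacklist:
  users:
    - root
    - '^(?!sudo_).*$'   # illustrative regex: every user not prefixed with sudo_
  modules:
    - cmd               # deny the cmd module to the listed users
```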

   External Authentication System
       Salt’s  External  Authentication  System  (eAuth)  allows for Salt to pass through command
       authorization to any external authentication system, such as PAM or LDAP.

       NOTE:
          eAuth using the PAM external auth system requires salt-master to be run as root as this
          system needs root access to check authentication.

   External Authentication System Configuration
       The  external  authentication  system  allows  for  specific users to be granted access to
       execute specific functions on  specific  minions.  Access  is  configured  in  the  master
       configuration file and uses the access control system:

          external_auth:
            pam:
              thatch:
                - 'web*':
                  - test.*
                  - network.*
              steve|admin.*:
                - .*

       The above configuration allows the user thatch to execute functions in the test and
       network modules on the minions that match the web* target. The user steve, and any user
       whose login starts with admin, is granted unrestricted access to minion commands.

       Salt  respects  the  current  PAM  configuration in place, and uses the ‘login’ service to
       authenticate.

       NOTE:
          The PAM module does not allow authenticating as root.

       NOTE:
          state.sls and state.highstate will return “Failed to authenticate!” if the request
          timeout is reached. Use the -t flag to increase the timeout.

       To allow access to wheel modules or runner modules the following @ syntax must be used:

          external_auth:
            pam:
              thatch:
                - '@wheel'   # to allow access to all wheel modules
                - '@runner'  # to allow access to all runner modules
                - '@jobs'    # to allow access to the jobs runner and/or wheel module

       NOTE:
          The runner/wheel markup is different, since there are no minions to scope the acl to.

       NOTE:
          Globs  will  not match wheel or runners! They must be explicitly allowed with @wheel or
          @runner.

       WARNING:
          All users that have external authentication privileges are allowed to run
          saltutil.find_job. Be aware that this could inadvertently expose some data, such as
          minion IDs.

   Matching syntax
       The structure of the external_auth dictionary can take  the  following  shapes.  User  and
       function  matches  are  exact  matches, shell glob patterns or regular expressions; minion
       matches are compound targets.

       By user:

          external_auth:
            <eauth backend>:
              <user or group%>:
                - <regex to match function>

       By user, by minion:

          external_auth:
            <eauth backend>:
              <user or group%>:
                <minion compound target>:
                  - <regex to match function>

       By user, by runner/wheel:

          external_auth:
            <eauth backend>:
              <user or group%>:
                <@runner or @wheel>:
                  - <regex to match function>

       By user, by runner+wheel module:

          external_auth:
            <eauth backend>:
              <user or group%>:
                <@module_name>:
                  - <regex to match function without module_name>

   Groups
       To apply permissions to a group of users in an external authentication system, append a  %
       to the ID:

          external_auth:
            pam:
              admins%:
                - '*':
                  - 'pkg.*'

   Limiting by function arguments
       Positional arguments or keyword arguments to functions can also be whitelisted.

       New in version 2016.3.0.

          external_auth:
            pam:
              my_user:
                - '*':
                  - 'my_mod.*':
                      args:
                        - 'a.*'
                        - 'b.*'
                      kwargs:
                        'kwa': 'kwa.*'
                        'kwb': 'kwb'
                - '@runner':
                  - 'runner_mod.*':
                      args:
                      - 'a.*'
                      - 'b.*'
                      kwargs:
                        'kwa': 'kwa.*'
                        'kwb': 'kwb'

       The rules:

       1. Argument values are matched as regular expressions.

       2. If argument restrictions are specified, only matching values are allowed.

       3. If an argument is not restricted, any value is allowed.

       4. To skip an argument, use the match-everything regex .*. For example, if arg0 and
          arg2 should be limited but arg1 and any later arguments may have any value, use:

             args:
               - 'value0'
               - '.*'
               - 'value2'

   Usage
       The external authentication system can then be used from the command-line by any  user  on
       the same system as the master with the -a option:

          $ salt -a pam web\* test.ping

       The system will ask the user for the credentials required by the authentication system and
       then publish the command.

   Tokens
       With external authentication alone, the authentication credentials will be  required  with
       every call to Salt. This can be alleviated with Salt tokens.

       Tokens  are short term authorizations and can be easily created by just adding a -T option
       when authenticating:

          $ salt -T -a pam web\* test.ping

       Now a token will be created that has an expiration of 12 hours (by default).   This  token
       is stored in a file named salt_token in the active user’s home directory.

       Once  the  token  is  created,  it  is  sent  with  all  subsequent  communications.  User
       authentication does not need to be entered again until the token expires.

       Token expiration time can be set in the Salt master config file.
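
       For example, to change the token expiration from the default of 12 hours to one hour,
       set token_expire (in seconds) in the master config:

          token_expire: 3600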

   LDAP and Active Directory
       NOTE:
          LDAP usage requires that you have installed python-ldap.

       Salt supports both user and group authentication for LDAP (and for Active Directory
       accessed via its LDAP interface).

   OpenLDAP and similar systems
       LDAP configuration happens in the Salt master configuration file.

       Server configuration values and their defaults:

          # Server to auth against
          auth.ldap.server: localhost

          # Port to connect via
          auth.ldap.port: 389

          # Use TLS when connecting
          auth.ldap.tls: False

          # LDAP scope level, almost always 2
          auth.ldap.scope: 2

          # Server specified in URI format
          auth.ldap.uri: ''    # Overrides .ldap.server, .ldap.port, .ldap.tls above

          # If True, skip verification of the server's TLS certificate
          auth.ldap.no_verify: False

          # Bind to LDAP anonymously to determine group membership
          # Active Directory does not allow anonymous binds without special configuration
          # In addition, if auth.ldap.anonymous is False, empty bind passwords are not permitted.
          auth.ldap.anonymous: False

          # FOR TESTING ONLY, this is a VERY insecure setting.
          # If this is True, the LDAP bind password will be ignored and
          # access will be determined by group membership alone with
          # the group memberships being retrieved via anonymous bind
          auth.ldap.auth_by_group_membership_only: False

          # Require authenticating user to be part of this Organizational Unit
          # This can be blank if your LDAP schema does not use this kind of OU
          auth.ldap.groupou: 'Groups'

          # Object Class for groups.  An LDAP search will be done to find all groups of this
          # class to which the authenticating user belongs.
          auth.ldap.groupclass: 'posixGroup'

          # Unique ID attribute name for the user
          auth.ldap.accountattributename: 'memberUid'

          # These are only for Active Directory
          auth.ldap.activedirectory: False
          auth.ldap.persontype: 'person'

          auth.ldap.minion_stripdomains: []

          # Redhat Identity Policy Audit
          auth.ldap.freeipa: False

   Authenticating to the LDAP Server
       There are two phases to LDAP authentication. First, Salt authenticates to search for a
       user’s Distinguished Name and group membership. The user it authenticates as in this
       phase is often a special LDAP system user with read-only access to the LDAP directory.
       After Salt searches the directory to  determine  the  actual  user’s  DN  and  groups,  it
       re-authenticates as the user running the Salt commands.

       If  you  are already aware of the structure of your DNs and permissions in your LDAP store
       are set such that users can look up their own group memberships, then the first and second
       users  can  be  the  same.   To  tell  Salt  this  is  the case, omit the auth.ldap.bindpw
       parameter.  Note this is not the same thing as using an anonymous bind.  Most LDAP servers
       will  not  permit  anonymous bind, and as mentioned above, if auth.ldap.anonymous is False
       you cannot use an empty password.

       You can template the binddn like this:

          auth.ldap.basedn: dc=saltstack,dc=com
          auth.ldap.binddn: uid={{ username }},cn=users,cn=accounts,dc=saltstack,dc=com

       Salt will use the password entered on the salt command line in place of the bindpw.

       To use two separate users, specify the LDAP lookup user in the binddn directive, and
       include a bindpw, like so:

          auth.ldap.binddn: uid=ldaplookup,cn=sysaccounts,cn=etc,dc=saltstack,dc=com
          auth.ldap.bindpw: mypassword

       As mentioned before, Salt uses a filter to find the DN associated with a user. Salt
       substitutes the {{ username }} value for the username when querying LDAP:

          auth.ldap.filter: uid={{ username }}

   Determining Group Memberships (OpenLDAP / non-Active Directory)
       For OpenLDAP, to determine group membership, one can specify an  OU  that  contains  group
       data.  This  is  prepended  to  the  basedn to create a search path.  Then the results are
       filtered against  auth.ldap.groupclass,  default  posixGroup,  and  the  account’s  ‘name’
       attribute, memberUid by default.

          auth.ldap.groupou: Groups

       Note that as of 2017.7, auth.ldap.groupclass can refer to either a groupclass or an
       objectClass. For some LDAP servers (notably OpenLDAP without the memberOf overlay
       enabled), determining group membership requires knowing both the objectClass and the
       memberUid attributes. Usually for these servers you will want an auth.ldap.groupclass
       of posixGroup and an auth.ldap.groupattribute of memberUid.
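
       For example, for OpenLDAP without the memberOf overlay:

          auth.ldap.groupclass: posixGroup
          auth.ldap.groupattribute: memberUid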

       LDAP  servers with the memberOf overlay will have entries similar to auth.ldap.groupclass:
       person and auth.ldap.groupattribute: memberOf.

       When using the ldap('DC=domain,DC=com') eauth operator,  sometimes  the  records  returned
       from LDAP or Active Directory have fully-qualified domain names attached, while minion IDs
       instead are simple hostnames.  The parameter below allows the administrator to strip off a
       certain  set of domain names so the hostnames looked up in the directory service can match
       the minion IDs.

          auth.ldap.minion_stripdomains: ['.external.bigcorp.com', '.internal.bigcorp.com']

   Determining Group Memberships (Active Directory)
       Active Directory handles group membership differently, and does not  utilize  the  groupou
       configuration variable.  AD needs the following options in the master config:

          auth.ldap.activedirectory: True
          auth.ldap.filter: sAMAccountName={{username}}
          auth.ldap.accountattributename: sAMAccountName
          auth.ldap.groupclass: group
          auth.ldap.persontype: person

       To determine group membership in AD, the username and password entered when LDAP is
       requested as the eAuth mechanism on the command line are used to bind to AD’s LDAP
       interface. If this bind fails, the user is denied access regardless of group
       membership. Next, the distinguishedName of the user is looked up with the following
       LDAP search:

          (&(<value of auth.ldap.accountattributename>={{username}})
            (objectClass=<value of auth.ldap.persontype>)
          )

       This  should  return  a  distinguishedName that we can use to filter for group membership.
       Then the following LDAP query is executed:

          (&(member=<distinguishedName from search above>)
            (objectClass=<value of auth.ldap.groupclass>)
          )

       To configure an individual LDAP user:

          external_auth:
            ldap:
              test_ldap_user:
                - '*':
                  - test.ping

       To configure an LDAP group, append a % to the ID:

          external_auth:
            ldap:
              test_ldap_group%:
                - '*':
                  - test.echo

       In addition, if there is a set of computers in the directory service that should be
       part of the eAuth definition, they can be specified like this:

          external_auth:
            ldap:
              test_ldap_group%:
                - ldap('DC=corp,DC=example,DC=com'):
                  - test.echo

       The  string  inside  ldap() above is any valid LDAP/AD tree limiter.  OU= in particular is
       permitted as long as it would return a list of computer objects.

   Peer Communication
       Salt 0.9.0 introduced the capability for Salt minions to publish commands. The intent
       of this feature is not for Salt minions to act as independent brokers with one another,
       but to allow Salt minions to pass commands to each other.

       In Salt 0.10.0 the ability to execute runners from the master was added. This  allows  for
       the  master  to  return  collective  data  from  runners  back to the minions via the peer
       interface.

       The peer interface is configured through two options in the master configuration file. For
       minions  to  send  commands  from  the master the peer configuration is used. To allow for
       minions to execute runners from the master the peer_run configuration is used.

       Since this presents a potential security risk by allowing minions access to the master
       publisher, the capability is turned off by default. Minions can be allowed access to
       the master publisher on a per-minion basis using regular expressions. Minions with
       specific IDs can be allowed access to certain Salt modules and functions.

   Peer Configuration
       The configuration is done under the peer setting in the Salt master configuration
       file; here are a number of configuration possibilities.

       The simplest approach is to enable all communication for all minions; this is only
       recommended for very secure environments.

          peer:
            .*:
              - .*

       This  configuration  will allow minions with IDs ending in example.com access to the test,
       ps, and pkg module functions.

          peer:
            .*example.com:
              - test.*
              - ps.*
              - pkg.*

       The configuration logic is simple: a regular expression is passed for matching minion
       IDs, and then a list of expressions matching minion functions is associated with the
       named minion. For instance, this configuration will also allow minions ending with
       foo.org access to the publisher.

          peer:
            .*example.com:
              - test.*
              - ps.*
              - pkg.*
            .*foo.org:
              - test.*
              - ps.*
              - pkg.*

       NOTE:
          Functions are matched using regular expressions.

   Peer Runner Communication
       Configuration to allow minions to execute runners from the master is done via the peer_run
       option on the master. The peer_run configuration  follows  the  same  logic  as  the  peer
       option. The only difference is that access is granted to runner modules.

       To open up access to all minions to all runners:

          peer_run:
            .*:
              - .*

       This  configuration will allow minions with IDs ending in example.com access to the manage
       and jobs runner functions.

          peer_run:
            .*example.com:
              - manage.*
              - jobs.*

       NOTE:
          Functions are matched using regular expressions.

   Using Peer Communication
       The publish module was created to manage peer communication. The publish module comes with
       a number of functions to execute peer communication in different ways. Currently there are
       three functions in the publish module. These examples will  show  how  to  test  the  peer
       system via the salt-call command.

       To execute test.ping on all minions:

          # salt-call publish.publish \* test.ping

       To execute the manage.up runner:

          # salt-call publish.runner manage.up

       To match minions using other matchers, use tgt_type:

          # salt-call publish.publish 'webserv* and not G@os:Ubuntu' test.ping tgt_type='compound'

       NOTE:
          In pre-2017.7.0 releases, use expr_form instead of tgt_type.

   When to Use Each Authentication System
       publisher_acl  is  useful  for  allowing  local  system users to run Salt commands without
       giving them root access. If you can log into the Salt master directly, then  publisher_acl
       allows  you  to  use  Salt  without  root privileges. If the local system is configured to
       authenticate against a remote system, like LDAP or Active  Directory,  then  publisher_acl
       will interact with the remote system transparently.

       external_auth is useful for salt-api or for making your own scripts that use Salt’s Python
       API. It can be used at the CLI (with the -a flag) but it is more cumbersome as  there  are
       more  steps  involved.   The only time it is useful at the CLI is when the local system is
       not configured to authenticate against an external service but  you  still  want  Salt  to
       authenticate against an external service.

   Examples
       The access controls are manifested using matchers in these configurations:

          publisher_acl:
            fred:
              - web\*:
                - pkg.list_pkgs
                - test.*
                - apache.*

       In  the  above  example,  fred  is  able  to send commands only to minions which match the
       specified glob target. This can be expanded to include other functions for  other  minions
       based on standard targets (all matchers are supported except the compound one).

          external_auth:
            pam:
              dave:
                - test.ping
                - mongo\*:
                  - network.*
                - log\*:
                  - network.*
                  - pkg.*
                - 'G@os:RedHat':
                  - kmod.*
              steve:
                - .*

       The  above allows for all minions to be hit by test.ping by dave, and adds a few functions
       that dave can execute on other minions. It also allows steve unrestricted access  to  salt
       commands.

       NOTE:
          Functions are matched using regular expressions.

   Job Management
       New in version 0.9.7.

       Since Salt executes jobs on many systems, it needs to be able to manage the jobs
       running on those systems.

   The Minion proc System
       Salt Minions maintain a proc directory in the Salt cachedir. The proc directory  maintains
       files  named  after  the  executed  job  ID. These files contain the information about the
       current running jobs on the minion and allow for jobs to be looked up. This is located  in
       the  proc  directory  under  the  cachedir,  with  a  default  configuration  it  is under
       /var/cache/salt/proc.

   Functions in the saltutil Module
       Salt 0.9.7 introduced a few new functions to the saltutil module for managing jobs.  These
       functions are:

       1. running Returns the data of all running jobs that are found in the proc directory.

       2. find_job Returns specific data about a certain job based on job id.

       3. signal_job Allows for a given jid to be sent a signal.

       4. term_job  Sends  a  termination  signal  (SIGTERM,  15)  to the process controlling the
          specified job.

       5. kill_job Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.

       These functions make up the core of the back end used to manage jobs at the minion level.
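
       These functions can also be called directly with the salt command, for example to list
       running jobs on all minions and then inspect a particular job ID:

          # salt '*' saltutil.running
          # salt '*' saltutil.find_job <jid>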

   The jobs Runner
       A convenience runner front end and reporting system has been  added  as  well.   The  jobs
       runner contains functions to make viewing data easier and cleaner.

       The jobs runner contains a number of functions…

   active
       The active function runs saltutil.running on all minions and formats the return data about
       all running jobs in a much more usable and compact format.  The active function will  also
       compare  jobs  that have returned and jobs that are still running, making it easier to see
       what systems have completed a job and what systems are still being waited on.

          # salt-run jobs.active

   lookup_jid
       When jobs are executed the return data is sent back to the master and cached.  By  default
       it  is  cached  for  24  hours, but this can be configured via the keep_jobs option in the
       master configuration.  Using the lookup_jid runner will display the same return data  that
       the initial job invocation with the salt command would display.

          # salt-run jobs.lookup_jid <job id number>

   list_jobs
       Before finding a historic job, it may be required to find the job ID. list_jobs will
       parse the cached execution data and display all of the job data for jobs that have
       already returned, fully or partially.

          # salt-run jobs.list_jobs

   Scheduling Jobs
       Salt’s  scheduling  system  allows  incremental  executions  on minions or the master. The
       schedule system exposes the execution of any execution function on minions or  any  runner
       on the master.

       Scheduling can be enabled by multiple methods:

       · schedule  option  in either the master or minion config files.  These require the master
         or minion application to be restarted in order for the schedule to be implemented.

       · Minion pillar data.  Schedule is implemented by refreshing the minion’s pillar data, for
         example by using saltutil.refresh_pillar.

       · The schedule state or schedule module

       NOTE:
          The  scheduler  executes different functions on the master and minions. When running on
          the master the functions reference runner functions, when running  on  the  minion  the
          functions specify execution functions.

       A  scheduled  run  has  no  output on the minion unless the config is set to info level or
       higher. Refer to minion-logging-settings.

       States are executed on the minion, as all states are. You can  pass  positional  arguments
       and provide a YAML dict of named arguments.

          schedule:
            job1:
              function: state.sls
              seconds: 3600
              args:
                - httpd
              kwargs:
                test: True

       This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour).

          schedule:
            job1:
              function: state.sls
              seconds: 3600
              args:
                - httpd
              kwargs:
                test: True
              splay: 15

       This  will schedule the command: state.sls httpd test=True every 3600 seconds (every hour)
       splaying the time between 0 and 15 seconds.

          schedule:
            job1:
              function: state.sls
              seconds: 3600
              args:
                - httpd
              kwargs:
                test: True
              splay:
                start: 10
                end: 15

       This will schedule the command: state.sls httpd test=True every 3600 seconds (every  hour)
       splaying the time between 10 and 15 seconds.

   Schedule by Date and Time
       New in version 2014.7.0.

       Frequency of jobs can also be specified using date strings supported by the Python
       dateutil library, which must be installed.

          schedule:
            job1:
              function: state.sls
              args:
                - httpd
              kwargs:
                test: True
              when: 5:00pm

       This will schedule the command: state.sls httpd test=True at 5:00 PM minion localtime.

          schedule:
            job1:
              function: state.sls
              args:
                - httpd
              kwargs:
                test: True
              when:
                - Monday 5:00pm
                - Tuesday 3:00pm
                - Wednesday 5:00pm
                - Thursday 3:00pm
                - Friday 5:00pm

       This will schedule the command: state.sls httpd test=True at 5:00 PM on Monday,  Wednesday
       and Friday, and 3:00 PM on Tuesday and Thursday.

          schedule:
            job1:
              function: state.sls
              args:
                - httpd
              kwargs:
                test: True
              when:
                - 'tea time'

          whens:
            tea time: 1:40pm
            deployment time: Friday 5:00pm

       The  Salt  scheduler  also allows custom phrases to be used for the when parameter.  These
       whens can be stored as either pillar values or grain values.

          schedule:
            job1:
              function: state.sls
              seconds: 3600
              args:
                - httpd
              kwargs:
                test: True
              range:
                start: 8:00am
                end: 5:00pm

       This will schedule the command: state.sls httpd test=True every 3600 seconds (every  hour)
       between  the  hours  of 8:00 AM and 5:00 PM. The range parameter must be a dictionary with
       the date strings using the dateutil format.

          schedule:
            job1:
              function: state.sls
              seconds: 3600
              args:
                - httpd
              kwargs:
                test: True
              range:
                invert: True
                start: 8:00am
                end: 5:00pm

       Using the invert option  for  range,  this  will  schedule  the  command  state.sls  httpd
       test=True  every  3600 seconds (every hour) until the current time is between the hours of
       8:00 AM and 5:00 PM. The range parameter must be a dictionary with the date strings  using
       the dateutil format.

          schedule:
            job1:
              function: pkg.install
              kwargs:
                pkgs: [{'bar': '>1.2.3'}]
                refresh: true
              once: '2016-01-07T14:30:00'

       This will schedule the function pkg.install to be executed once at the specified time. The
       schedule  entry  job1  will  not  be  removed  after  the  job  completes,  therefore  use
       schedule.delete to manually remove it afterwards.

       The  default date format is ISO 8601 but can be overridden by also specifying the once_fmt
       option, like this:

          schedule:
            job1:
              function: test.ping
              once: 2015-04-22T20:21:00
              once_fmt: '%Y-%m-%dT%H:%M:%S'

   Maximum Parallel Jobs Running
       New in version 2014.7.0.

       The scheduler also supports ensuring that there are no more than N copies of a  particular
       routine  running.  Use this for jobs that may be long-running and could step on each other
       or pile up in case of infrastructure outage.

       The default for maxrunning is 1.

          schedule:
            long_running_job:
              function: big_file_transfer
              jid_include: True
              maxrunning: 1

   Cron-like Schedule
       New in version 2014.7.0.

          schedule:
            job1:
              function: state.sls
              cron: '*/15 * * * *'
              args:
                - httpd
              kwargs:
                test: True

       The scheduler also supports scheduling jobs using a cron like format.  This  requires  the
       Python croniter library.

   Job Data Return
       New in version 2015.5.0.

       By default, data about job runs from the Salt scheduler is returned to the master.
       Setting the return_job parameter to False will prevent the data from being sent back
       to the Salt master.

          schedule:
            job1:
              function: scheduled_job_function
              return_job: False

   Job Metadata
       New in version 2015.5.0.

       It  can  be  useful to include specific data to differentiate a job from other jobs. Using
       the metadata parameter special values can be associated with a scheduled job. These values
       are  not  used  in  the  execution of the job, but can be used to search for specific jobs
       later if combined with the return_job parameter. The metadata parameter must be
       specified as a dictionary, otherwise it will be ignored.

          schedule:
            job1:
              function: scheduled_job_function
              metadata:
                foo: bar

   Run on Start
       New in version 2015.5.0.

       By  default,  any  job  scheduled  based  on  the  startup time of the minion will run the
       scheduled job when the minion starts up. Sometimes this  is  not  the  desired  situation.
       Using  the run_on_start parameter set to False will cause the scheduler to skip this first
       run and wait until the next scheduled run:

          schedule:
            job1:
              function: state.sls
              seconds: 3600
              run_on_start: False
              args:
                - httpd
              kwargs:
                test: True

   Until and After
       New in version 2015.8.0.

          schedule:
            job1:
              function: state.sls
              seconds: 15
              until: '12/31/2015 11:59pm'
              args:
                - httpd
              kwargs:
                test: True

       Using the until argument, the Salt scheduler allows you to  specify  an  end  time  for  a
       scheduled  job.  If  this argument is specified, jobs will not run once the specified time
       has passed. Time should be specified in a format supported by the dateutil library.   This
       requires the Python dateutil library to be installed.

       New in version 2015.8.0.

          schedule:
            job1:
              function: state.sls
              seconds: 15
              after: '12/31/2015 11:59pm'
              args:
                - httpd
              kwargs:
                test: True

       Using the after argument, the Salt scheduler allows you to specify a start time for a
       scheduled job. If this argument is specified, jobs will not run until the specified
       time has passed. Time should be specified in a format supported by the dateutil
       library. This requires the Python dateutil library to be installed.

   Scheduling States
          schedule:
            log-loadavg:
              function: cmd.run
              seconds: 3660
              args:
                - 'logger -t salt < /proc/loadavg'
              kwargs:
                stateful: False
                shell: /bin/sh

   Scheduling Highstates
       To set up a highstate to run on a minion every 60 minutes set this in the minion config or
       pillar:

          schedule:
            highstate:
              function: state.highstate
              minutes: 60

       Time intervals can be specified as seconds, minutes, hours, or days.
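
       For example, the same highstate job could be scheduled once a day by using the days
       interval instead:

          schedule:
            highstate:
              function: state.highstate
              days: 1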

   Scheduling Runners
       Runner  executions  can  also  be  specified on the master within the master configuration
       file:

          schedule:
            run_my_orch:
              function: state.orchestrate
              hours: 6
              splay: 600
              args:
                - orchestration.my_orch

       The above configuration is analogous to running salt-run state.orch  orchestration.my_orch
       every 6 hours.

   Scheduler With Returner
       The scheduler is also useful for tasks like gathering monitoring data about a minion.
       This schedule option will gather status data and send it to a MySQL returner database:

          schedule:
            uptime:
              function: status.uptime
              seconds: 60
              returner: mysql
            meminfo:
              function: status.meminfo
              minutes: 5
              returner: mysql

       Since specifying the returner repeatedly can be tiresome, the schedule_returner option  is
       available  to  specify  one  or  a list of global returners to be used by the minions when
       scheduling.
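
       For example, to send every scheduled job’s return to the MySQL returner without
       repeating the returner option per job (a sketch; the mysql returner is assumed to
       already be configured), set this in the minion config or pillar:

          schedule_returner: mysql

          schedule:
            uptime:
              function: status.uptime
              seconds: 60
            meminfo:
              function: status.meminfo
              minutes: 5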

   Managing the Job Cache
       The Salt Master maintains a job cache of all job executions which can be queried  via  the
       jobs runner. This job cache is called the Default Job Cache.

   Default Job Cache
       A  number  of  options  are  available when configuring the job cache. The default caching
       system uses local storage on the Salt Master and can be found in the job  cache  directory
       (on  Linux  systems  this  is  typically /var/cache/salt/master/jobs). The default caching
       system is suitable for most deployments as it  does  not  typically  require  any  further
       configuration or management.

       The default job cache is a temporary cache and jobs will be stored for 24 hours. If the
       default cache needs to store jobs for a different period, the retention time can be
       adjusted by changing the keep_jobs parameter in the Salt Master configuration file. The
       value is specified in hours:

          keep_jobs: 24

   Reducing the Size of the Default Job Cache
       The Default Job Cache can sometimes be a burden on larger deployments (over 5000 minions).
       Disabling the job cache makes previously executed jobs unavailable to the jobs system
       and is not generally recommended. Normally it is wiser to ensure the master has access
       to a fast I/O system, or to mount a tmpfs on the jobs directory.

       However,  you  can  disable  the  job_cache  by  setting  it  to  False in the Salt Master
       configuration file. Setting this value to False means that the Salt Master will no  longer
       cache minion returns, but a JID directory and jid file for each job will still be created.
       This JID directory is necessary for checking for and preventing JID collisions.

       The default location for the job cache is in the /var/cache/salt/master/jobs/ directory.

       Setting the job_cache to False in addition to setting the keep_jobs option  to  a  smaller
       value,  such  as  1,  in  the  Salt  Master configuration file will reduce the size of the
       Default Job Cache, and thus the burden on the Salt Master.
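
       A minimal sketch of such a configuration in the Salt Master configuration file:

          job_cache: False
          keep_jobs: 1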

       NOTE:
          Changing the keep_jobs option sets the number of hours to keep old job information  and
          defaults  to 24 hours. Do not set this value to 0 when trying to make the cache cleaner
          run more frequently, as this means the cache cleaner will never run.

   Additional Job Cache Options
       Many deployments may wish to use an external database to maintain a long term register  of
       executed  jobs.  Salt  comes with two main mechanisms to do this, the master job cache and
       the external job cache.

       See Storing Job Results in an External System.

   Storing Job Results in an External System
       After a job executes, job results are returned to the Salt Master  by  each  Salt  Minion.
       These results are stored in the Default Job Cache.

       In  addition to the Default Job Cache, Salt provides two additional mechanisms to send job
       results to other systems (databases, local syslog, and others):

       · External Job Cache

       · Master Job Cache

       The major difference between these two mechanisms is where results are returned from
       (the Salt Master or the Salt Minion). Configuring either of these options will also
       cause the Jobs Runner functions to automatically query the remote stores for
       information.

   External Job Cache - Minion-Side Returner
       When an External Job Cache is configured, data is returned to the Default Job Cache on the
       Salt  Master  like  usual, and then results are also sent to an External Job Cache using a
       Salt returner module running on the Salt Minion.  [image]

       · Advantages: Data is stored without placing additional load on the Salt Master.

       · Disadvantages: Each Salt Minion connects to the external job cache, which can result  in
         a  large  number of connections.  Also requires additional configuration to get returner
         module settings on all Salt Minions.

   Master Job Cache - Master-Side Returner
       New in version 2014.7.0.

       Instead of configuring an External Job Cache on each Salt Minion, you  can  configure  the
       Master  Job Cache to send job results from the Salt Master instead. In this configuration,
       Salt Minions send data to the Default Job Cache as usual, and then the Salt  Master  sends
       the  data  to the external system using a Salt returner module running on the Salt Master.
       [image]

       · Advantages: A single connection is required to the external system.  This  is  preferred
         for databases and similar systems.

       · Disadvantages: Places additional load on your Salt Master.

   Configure an External or Master Job Cache
   Step 1: Understand Salt Returners
       Before  you  configure  a  job  cache, it is essential to understand Salt returner modules
       (“returners”). Returners are pluggable Salt Modules that take the data returned  by  jobs,
       and  then perform any necessary steps to send the data to an external system. For example,
       a returner might establish a connection, authenticate, and then format and transfer data.

       The Salt Returner system provides the core functionality used by the External  and  Master
       Job Cache systems, and the same returners are used by both systems.

       Salt currently provides many different returners that let you connect to a wide variety of
       systems. A complete list is available at all Salt returners.  Each returner is  configured
       differently, so make sure you read and follow the instructions linked from that page.

       For example, the MySQL returner requires:

       · A database created using provided schema (structure is available at MySQL returner)

       · A user created with privileges to the database

       · Optional SSL configuration

       A simpler returner, such as Slack or HipChat, requires:

       · An API key/version

       · The target channel/room

       · The username that should be used to send the message

   Step 2: Configure the Returner
       After you understand the configuration and have the external system ready, the
       returner’s configuration settings must be declared.

   External Job Cache
       The returner configuration settings can be declared in the Salt Minion configuration file,
       the Minion’s pillar data, or the Minion’s grains.

       If  external_job_cache  configuration  settings  are specified in more than one place, the
       options are retrieved in the following order. The first  configuration  location  that  is
       found is the one that will be used.

       · Minion configuration file

       · Minion’s grains

       · Minion’s pillar data

   Master Job Cache
       The  returner  configuration  settings  for the Master Job Cache should be declared in the
       Salt Master’s configuration file.

   Configuration File Examples
       MySQL requires:

          mysql.host: 'salt'
          mysql.user: 'salt'
          mysql.pass: 'salt'
          mysql.db: 'salt'
          mysql.port: 3306

       Slack requires:

          slack.channel: 'channel'
          slack.api_key: 'key'
          slack.from_name: 'name'

       After you have configured the returner and added settings to the configuration  file,  you
       can enable the External or Master Job Cache.

   Step 3: Enable the External or Master Job Cache
       Configuration  is  a  single  line that specifies an already-configured returner to use to
       send all job data to an external system.

   External Job Cache
       To enable a returner as the External Job Cache (Minion-side), add the  following  line  to
       the Salt Master configuration file:

          ext_job_cache: <returner>

       For example:

          ext_job_cache: mysql

       NOTE:
          When  configuring  an External Job Cache (Minion-side), the returner settings are added
          to the Minion configuration file, but the External Job Cache setting is  configured  in
          the Master configuration file.

   Master Job Cache
       To  enable  a  returner as a Master Job Cache (Master-side), add the following line to the
       Salt Master configuration file:

          master_job_cache: <returner>

       For example:

          master_job_cache: mysql

       Verify that the returner configuration settings are in the Master configuration file,  and
       be  sure to restart the salt-master service after you make configuration changes. (service
       salt-master restart).

   Logging
       The salt project tries to make logging work for you and to help us solve any issues you
       might find along the way.

       If you want more information on the nitty-gritty of salt’s logging system, please head
       over to the logging development document. If all you’re after is salt’s logging
       configuration, please continue reading.

   Log Levels
       The log levels are ordered numerically such that setting the log level to a specific level
       will record all log statements at that level and higher.  For example, setting  log_level:
       error will log statements at error, critical, and quiet levels, although nothing should be
       logged at quiet level.

       Most of the logging levels are defined by default in Python’s logging library and  can  be
       found in the official Python documentation.  Salt uses some more levels in addition to the
       standard levels.  All levels available in salt are shown in the table below.

       NOTE:
          Python dependencies used by salt may define and use  additional  logging  levels.   For
          example,  the  Python 2 version of the multiprocessing standard Python library uses the
          levels subwarning, 25 and subdebug, 5.

                         ┌─────────┬───────────────┬──────────────────────────┐
                         │Level    │ Numeric value │ Description              │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │quiet    │ 1000          │ Nothing should be logged │
                         │         │               │ at this level            │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │critical │ 50            │ Critical errors          │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │error    │ 40            │ Errors                   │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │warning  │ 30            │ Warnings                 │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │info     │ 20            │ Normal log information   │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │profile  │ 15            │ Profiling information on │
                         │         │               │ salt performance         │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │debug    │ 10            │ Information  useful  for │
                         │         │               │ debugging    both   salt │
                         │         │               │ implementations and salt │
                         │         │               │ code                     │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │trace    │ 5             │ More    detailed    code │
                         │         │               │ debugging information    │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │garbage  │ 1             │ Even   more    debugging │
                         │         │               │ information              │
                         ├─────────┼───────────────┼──────────────────────────┤
                         │all      │ 0             │ Everything               │
                         └─────────┴───────────────┴──────────────────────────┘
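
       The numeric ordering above is what drives filtering: a log record is emitted only when
       its level is greater than or equal to the configured threshold. The following sketch
       illustrates this with Python’s standard logging module, on which salt’s logging is
       built (the extra level registrations here are illustrative only; salt registers its
       own internally):

```python
import logging

# Register salt-style extra levels (illustrative; salt does this itself)
logging.addLevelName(15, 'PROFILE')
logging.addLevelName(5, 'TRACE')
logging.addLevelName(1, 'GARBAGE')

logger = logging.getLogger('demo')
logger.setLevel(logging.ERROR)  # analogous to log_level: error

# warning (30) is below error (40), so it would be dropped
assert not logger.isEnabledFor(logging.WARNING)
# critical (50) is at or above error (40), so it would be logged
assert logger.isEnabledFor(logging.CRITICAL)
# trace (5) is far below error (40), so it would be dropped
assert not logger.isEnabledFor(5)
```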

   Available Configuration Settings
   log_file
       The log records can be sent to a regular file, a local path name, or a network
       location.  Remote logging works best when configured to use rsyslogd(8) (e.g.
       file:///dev/log), with rsyslogd(8) configured for network logging.  The format for
       remote addresses is:

          <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>

       Where  log-facility  is  the  symbolic  name  of  a  syslog  facility  as  defined  in the
       SysLogHandler documentation. It defaults to LOG_USER.

       Default: Dependent on the binary being executed; for example, for salt-master,
       /var/log/salt/master.

       Examples:

          log_file: /var/log/salt/master

          log_file: /var/log/salt/minion

          log_file: file:///dev/log

          log_file: file:///dev/log/LOG_DAEMON

          log_file: udp://loghost:10514

   log_level
       Default: warning

       The  level  of  log  record  messages  to send to the console. One of all, garbage, trace,
       debug, profile, info, warning, error, critical, quiet.

          log_level: warning

       NOTE:
          Add log_level: quiet to the salt configuration file to completely disable logging.
          When running salt on the command line, use --log-level=quiet instead.

   log_level_logfile
       Default: info

       The level of messages to send to the log file. One of all, garbage, trace, debug, profile,
       info, warning, error, critical, quiet.

          log_level_logfile: warning

   log_datefmt
       Default: %H:%M:%S

       The date and time format used  in  console  log  messages.  Allowed  date/time  formatting
       matches those used in time.strftime().

          log_datefmt: '%H:%M:%S'

   log_datefmt_logfile
       Default: %Y-%m-%d %H:%M:%S

       The  date  and time format used in log file messages. Allowed date/time formatting matches
       those used in time.strftime().

          log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
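
       The tokens in both date format options are those accepted by Python’s time.strftime().
       For example, the default log file date format renders the Unix epoch (in UTC) as
       follows:

```python
import time

# Render the default log_datefmt_logfile format for the Unix epoch (UTC)
stamp = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(0))
print(stamp)  # 1970-01-01 00:00:00
```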

   log_fmt_console
       Default: [%(levelname)-8s] %(message)s

       The format of  the  console  logging  messages.  All  standard  python  logging  LogRecord
       attributes  can  be used. Salt also provides these custom LogRecord attributes to colorize
       console log output:

          '%(colorlevel)s'   # log level name colorized by level
          '%(colorname)s'    # colorized module name
          '%(colorprocess)s' # colorized process number
          '%(colormsg)s'     # log message colorized by level

       NOTE:
          The %(colorlevel)s, %(colorname)s, and %(colorprocess)s LogRecord attributes also
          include padding and enclosing brackets, [ and ], to match the default values of
          their corresponding non-colorized LogRecord attributes.

          log_fmt_console: '[%(levelname)-8s] %(message)s'

   log_fmt_logfile
       Default: %(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s

       The format of the log  file  logging  messages.  All  standard  python  logging  LogRecord
       attributes can be used.  Salt also provides these custom LogRecord attributes that include
       padding and enclosing brackets [ and ]:

          '%(bracketlevel)s'   # equivalent to [%(levelname)-8s]
          '%(bracketname)s'    # equivalent to [%(name)-17s]
          '%(bracketprocess)s' # equivalent to [%(process)5s]

          log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'

   log_granular_levels
       Default: {}

       This can be used to control logging levels more specifically, based on log call name.  The
       example sets the main salt library at the ‘warning’ level, sets salt.modules to log at the
       debug level, and sets a custom module to the all level:

          log_granular_levels:
            'salt': 'warning'
            'salt.modules': 'debug'
            'salt.loader.saltmaster.ext.module.custom_module': 'all'

   External Logging Handlers
       Besides the internal logging handlers used by salt, there are some external handlers
       which can be used; see the external logging handlers document.

   External Logging Handlers
                              ┌──────────────┬───────────────────────────┐
                              │fluent_mod    │ Fluent Logging Handler    │
                              ├──────────────┼───────────────────────────┤
                              │log4mongo_mod │ Log4Mongo Logging Handler │
                              ├──────────────┼───────────────────────────┤
                              │logstash_mod  │ Logstash Logging Handler  │
                              ├──────────────┼───────────────────────────┤
                              │sentry_mod    │ Sentry Logging Handler    │
                              └──────────────┴───────────────────────────┘

   salt.log.handlers.fluent_mod
   Fluent Logging Handler
       New in version 2015.8.0.

       This module provides some fluentd logging handlers.

   Fluent Logging Handler
       In the fluent configuration file:

          <source>
            type forward
            bind localhost
            port 24224
          </source>

       Then,  to  send  logs via fluent in Logstash format, add the following to the salt (master
       and/or minion) configuration file:

          fluent_handler:
            host: localhost
            port: 24224

       To send logs via fluent in the Graylog raw json format, add  the  following  to  the  salt
       (master and/or minion) configuration file:

          fluent_handler:
            host: localhost
            port: 24224
            payload_type: graylog
            tags:
            - salt_master.SALT

       The  above  also  illustrates  the  tags  option,  which allows one to set descriptive (or
       useful) tags on records being sent.  If not provided, this defaults  to  the  single  tag:
       ‘salt’.   Also note that, via Graylog “magic”, the ‘facility’ of the logged message is set
       to ‘SALT’ (the portion of the tag after the first period), while the tag  itself  will  be
       set to simply ‘salt_master’.  This is a feature, not a bug :)

       Note:  There  is  a  third emitter, for the GELF format, but it is largely untested, and I
       don’t currently have a setup supporting this config, so while it runs cleanly and  outputs
       what  LOOKS  to be valid GELF, any real-world feedback on its usefulness, and correctness,
       will be appreciated.

   Log Level
       The fluent_handler configuration section accepts an additional setting log_level.  If  not
       set,  the  logging  level  used  will  be  the  one  defined  for  log_level in the global
       configuration file section.

          Inspiration

                  This work was inspired by fluent-logger-python

   salt.log.handlers.log4mongo_mod
   Log4Mongo Logging Handler
       This module provides a logging handler for sending salt logs to MongoDB

   Configuration
       In the salt configuration file (e.g. /etc/salt/{master,minion}):

          log4mongo_handler:
            host: mongodb_host
            port: 27017
            database_name: logs
            collection: salt_logs
            username: logging
            password: reindeerflotilla
            write_concern: 0
            log_level: warning

   Log Level
       If not set, the log_level will be set to the level defined  in  the  global  configuration
       file setting.

          Inspiration

                 This  work was inspired by the Salt logging handlers for LogStash and Sentry and
                 by the log4mongo Python implementation.

   salt.log.handlers.logstash_mod
   Logstash Logging Handler
       New in version 0.17.0.

       This module provides some Logstash logging handlers.

   UDP Logging Handler
       For versions of Logstash before 1.2.0:

       In the salt configuration file:

          logstash_udp_handler:
            host: 127.0.0.1
            port: 9999
            version: 0
            msg_type: logstash

       In the Logstash configuration file:

          input {
            udp {
              type => "udp-type"
              format => "json_event"
            }
          }

       For version 1.2.0 of Logstash and newer:

       In the salt configuration file:

          logstash_udp_handler:
            host: 127.0.0.1
            port: 9999
            version: 1
            msg_type: logstash

       In the Logstash configuration file:

          input {
            udp {
              port => 9999
              codec => json
            }
          }

       Please read the UDP input configuration page for additional information.

   ZeroMQ Logging Handler
       For versions of Logstash before 1.2.0:

       In the salt configuration file:

          logstash_zmq_handler:
            address: tcp://127.0.0.1:2021
            version: 0

       In the Logstash configuration file:

          input {
            zeromq {
              type => "zeromq-type"
              mode => "server"
              topology => "pubsub"
              address => "tcp://0.0.0.0:2021"
              charset => "UTF-8"
              format => "json_event"
            }
          }

       For version 1.2.0 of Logstash and newer:

       In the salt configuration file:

          logstash_zmq_handler:
            address: tcp://127.0.0.1:2021
            version: 1

       In the Logstash configuration file:

          input {
            zeromq {
              topology => "pubsub"
              address => "tcp://0.0.0.0:2021"
              codec => json
            }
          }

       Please read the ZeroMQ input configuration page for additional information.

          Important Logstash Setting

                  One of the most important settings in your Logstash configuration file for
                  these logging handlers is format.  Both the UDP and ZeroMQ inputs need
                  format set to json_event, which is what salt sends over the wire.

   Log Level
       Both  the  logstash_udp_handler and the logstash_zmq_handler configuration sections accept
       an additional setting log_level. If not set, the  logging  level  used  will  be  the  one
       defined for log_level in the global configuration file section.

   HWM
       The   high   water   mark   for   the   ZMQ   socket  setting.  Only  applicable  for  the
       logstash_zmq_handler.

          Inspiration

                  This work was inspired by pylogstash, python-logstash, canary and the PyZMQ
                  logging handler.

   salt.log.handlers.sentry_mod
   Sentry Logging Handler
       New in version 0.17.0.

       This  module  provides  a  Sentry logging handler. Sentry is an open source error tracking
       platform that provides deep context about exceptions that happen  in  production.  Details
       about stack traces along with the context variables available at the time of the exception
       are easily browsable and filterable from the online interface. For more details please see
       Sentry.

          Note

                 The  Raven  library needs to be installed on the system for this logging handler
                 to be available.

       Configuring the python Sentry client, Raven,  should  be  done  under  the  sentry_handler
       configuration key. Additional context may be provided for corresponding grain item(s).  At
       the bare minimum, you need to define the DSN. As an example:

          sentry_handler:
            dsn: https://pub-key:secret-key@app.getsentry.com/app-id

       More complex configurations can be achieved, for example:

          sentry_handler:
            servers:
              - https://sentry.example.com
              - http://192.168.1.1
            project: app-id
            public_key: deadbeefdeadbeefdeadbeefdeadbeef
            secret_key: beefdeadbeefdeadbeefdeadbeefdead
            context:
              - os
              - master
              - saltversion
              - cpuarch
              - ec2.tags.environment

          Note

                 The public_key and secret_key variables are not supported with Sentry > 3.0. The
                 DSN key should be used instead.

       All   the   client   configuration  keys  are  supported,  please  see  the  Raven  client
       documentation.

       The default logging level for the sentry handler  is  ERROR.  If  you  wish  to  define  a
       different one, define log_level under the sentry_handler configuration key:

          sentry_handler:
            dsn: https://pub-key:secret-key@app.getsentry.com/app-id
            log_level: warning

       The   available  log  levels  are  those  also  available  for  the  salt  cli  tools  and
       configuration; salt --help should give you the required information.

   Threaded Transports
       Raven’s documentation rightly suggests using its threaded transport for critical
       applications. However, if you start having trouble with Salt after enabling the
       threaded transport, try switching to a non-threaded transport to see if that fixes
       your problem.

   Salt File Server
       Salt  comes with a simple file server suitable for distributing files to the Salt minions.
       The file server is a stateless ZeroMQ server that is built into the Salt master.

       The main intent of the Salt file server is to present files for  use  in  the  Salt  state
       system.  With  this  said,  the Salt file server can be used for any general file transfer
       from the master to the minions.

   File Server Backends
       In Salt 0.12.0, the modular fileserver was introduced. This feature added the ability  for
       the  Salt  Master  to integrate different file server backends. File server backends allow
       the Salt file server to act as a transparent bridge to external resources. A good  example
       of  this is the git backend, which allows Salt to serve files sourced from one or more git
       repositories, but there are several others as well. Click here for a full list  of  Salt’s
       fileserver backends.

   Enabling a Fileserver Backend
       Fileserver backends can be enabled with the fileserver_backend option.

          fileserver_backend:
            - git

       See   the   documentation   for  each  backend  to  find  the  correct  value  to  add  to
       fileserver_backend in order to enable them.

   Using Multiple Backends
       If fileserver_backend is not defined in the Master config file, Salt will  use  the  roots
       backend,  but the fileserver_backend option supports multiple backends. When more than one
       backend is in use, the files from the enabled backends are merged into  a  single  virtual
       filesystem.  When  a  file  is  requested, the backends will be searched in order for that
       file, and the first backend to match will be the one which returns the file.

          fileserver_backend:
            - roots
            - git

       With this configuration, the environments and files defined in  the  file_roots  parameter
       will  be searched first, and if the file is not found then the git repositories defined in
       gitfs_remotes will be searched.

   Defining Environments
       Just as the order of the values in fileserver_backend matters, so too does  the  order  in
       which  different  sources  are defined within a fileserver environment. For example, given
       the    below    file_roots    configuration,    if    both    /srv/salt/dev/foo.txt    and
       /srv/salt/prod/foo.txt   exist   on   the  Master,  then  salt://foo.txt  would  point  to
       /srv/salt/dev/foo.txt in the dev environment, but it would point to /srv/salt/prod/foo.txt
       in the base environment.

          file_roots:
            base:
              - /srv/salt/prod
            qa:
              - /srv/salt/qa
              - /srv/salt/prod
            dev:
              - /srv/salt/dev
              - /srv/salt/qa
              - /srv/salt/prod

       Similarly,  when using the git backend, if both repositories defined below have a hotfix23
       branch/tag, and both of them also contain the file bar.txt in the root of  the  repository
       at  that  branch/tag, then salt://bar.txt in the hotfix23 environment would be served from
       the first repository.

          gitfs_remotes:
            - https://mydomain.tld/repos/first.git
            - https://mydomain.tld/repos/second.git

       NOTE:
           Environments map differently based on the fileserver  backend.  For  instance,  the
           mappings are explicitly defined in the roots backend, while in the VCS backends (git,
           hg, svn) the environments are created from branches/tags/bookmarks/etc. For the
           minionfs backend, the files are all in a single environment, which is specified by
           the minionfs_env option.

          See the documentation  for  each  backend  for  a  more  detailed  explanation  of  how
          environments are mapped.

   Requesting Files from Specific Environments
       The Salt fileserver supports multiple environments, allowing for SLS files and other files
       to be isolated for better organization.

       For the default backend (called roots), environments are defined using the  roots  option.
       Other  backends  (such  as  gitfs)  define  environments  in their own ways. For a list of
       available fileserver backends, see here.

   Querystring Syntax
       Any salt:// file URL can specify its fileserver environment using  a  querystring  syntax,
       like so:

          salt://path/to/file?saltenv=foo

       In  Reactor  configurations,  this  method  must be used to pull files from an environment
       other than base.

   In States
       Minions can be instructed which environment to use both globally, and for a single  state,
       and multiple methods for each are available:

   Globally
       A minion can be pinned to an environment using the environment option in the minion config
       file.
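
       For example, to pin a minion to the dev environment (a sketch; a dev environment is
       assumed to be defined on the master), set this in the minion config file:

          environment: dev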

       Additionally, the environment can be set for a single call to the following functions:

       · state.apply

       · state.highstate

       · state.sls

       · state.top

       NOTE:
          When the saltenv parameter is used to trigger a highstate using either  state.apply  or
          state.highstate, only states from that environment will be applied.
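       For example, to trigger a highstate using only states from a hypothetical dev
       environment:

```
# salt '*' state.apply saltenv=dev
```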

   On a Per-State Basis
       Within  an  individual state, there are two ways of specifying the environment.  The first
       is to add a saltenv argument to the state. This example will pull the file from the config
       environment:

          /etc/foo/bar.conf:
            file.managed:
              - source: salt://foo/bar.conf
              - user: foo
              - mode: 600
              - saltenv: config

       Another way of doing the same thing is to use the querystring syntax described above:

          /etc/foo/bar.conf:
            file.managed:
              - source: salt://foo/bar.conf?saltenv=config
              - user: foo
              - mode: 600

       NOTE:
          Specifying the environment using either of the above methods is only necessary in cases
          where a state from one environment needs to access files from another  environment.  If
          the SLS file containing this state was in the config environment, then it would look in
          that environment by default.

   File Server Configuration
       The Salt file server is a high performance file server written in ZeroMQ. It manages large
       files quickly and with little overhead, and has been optimized to handle small files in an
       extremely efficient manner.

       The Salt file server is an environment aware file server. This means  that  files  can  be
       allocated  within  many root directories and accessed by specifying both the file path and
       the environment to search. The individual environments can span across multiple  directory
       roots to create overlays and to allow for files to be organized in many flexible ways.

   Environments
       The  Salt file server defaults to the mandatory base environment. This environment MUST be
       defined and is used to download files when no environment is specified.

       Environments allow for files and sls data to be logically separated, but environments  are
       not  isolated  from  each  other. This allows for logical isolation of environments by the
       engineer using Salt, but also allows for information to be used in multiple environments.

   Directory Overlay
       Each environment setting is a list of directories to publish files  from.   These
       directories are searched in order to find the specified file, and the first file
       found is returned.

       This means that directory data is prioritized based on the order in which the
       directories are listed.
       In the case of this file_roots configuration:

          file_roots:
            base:
              - /srv/salt/base
              - /srv/salt/failover

       If a file’s URI is salt://httpd/httpd.conf, Salt will first search for the  file  at
       /srv/salt/base/httpd/httpd.conf. If the file is found there, it is returned. If it
       is not found there, then /srv/salt/failover/httpd/httpd.conf will be used for the
       source.

       This  allows  for  directories  to be overlaid and prioritized based on the order they are
       defined in the configuration.

       It is also possible to define a file_roots configuration which  supports  multiple
       environments:

          file_roots:
            base:
              - /srv/salt/base
            dev:
              - /srv/salt/dev
              - /srv/salt/base
            prod:
              - /srv/salt/prod
              - /srv/salt/base

       This example ensures that each environment will check the associated environment directory
       for  files  first.  If  a  file is not found in the appropriate directory, the system will
       default to using the base directory.
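       The first-match search order described above can be sketched in a few lines of
       Python (an illustration of the lookup behavior only, not Salt's actual fileserver
       code; the directory layout is a throwaway example):

```python
import os
import tempfile

def find_file(path, roots):
    """Return the first match for ``path`` across the listed roots.

    Illustrative sketch of the first-match search order described
    above; the real fileserver adds caching, hashing, and
    environment handling.
    """
    for root in roots:
        candidate = os.path.join(root, path)
        if os.path.isfile(candidate):
            return candidate
    return None

# Build a throwaway layout mirroring the example configuration:
# the file exists only in the failover root.
tmp = tempfile.mkdtemp()
roots = [os.path.join(tmp, "base"), os.path.join(tmp, "failover")]
os.makedirs(os.path.join(roots[1], "httpd"))
with open(os.path.join(roots[1], "httpd", "httpd.conf"), "w") as fh:
    fh.write("# failover copy\n")

# base/ lacks the file, so the failover copy is served.
print(find_file("httpd/httpd.conf", roots))
```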

   Local File Server
       New in version 0.9.8.

       The file server can be rerouted to run from  the  minion.  This  is  primarily  to  enable
       running  Salt  states  without a Salt master. To use the local file server interface, copy
       the file server data to the minion and set the file_roots option on the minion to point to
       the  directories  copied from the master.  Once the minion file_roots option has been set,
       change the file_client option to local to make sure that the local file  server  interface
       is used.
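       For example, after copying the fileserver data to the minion, the minion config
       might contain the following (the path is illustrative):

```yaml
file_client: local
file_roots:
  base:
    - /srv/salt
```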

   The cp Module
       The cp module is the home of minion-side file server operations. It is used by  the
       Salt state system and salt-cp, and can be used to distribute files presented by the
       Salt file server.

   Escaping Special Characters
       The  salt://  URL  format  can  potentially  contain  a  query  string, for example
       salt://dir/file.txt?saltenv=base.  You  can   prevent   the   fileclient/fileserver
       from interpreting ? as the initial token of a query string by referencing the file
       with salt://| rather than salt://.

          /etc/marathon/conf/?checkpoint:
            file.managed:
              - source: salt://|hw/config/?checkpoint
              - makedirs: True
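       The combined querystring and escaping rules can be sketched as follows (an
       illustration of the rules described above only, not Salt's actual fileclient code):

```python
def split_salt_url(url, default_env="base"):
    """Split a salt:// URL into (path, saltenv).

    Illustrative sketch of the saltenv querystring and salt://|
    escaping rules described above; not Salt's implementation.
    """
    prefix = "salt://"
    if not url.startswith(prefix):
        raise ValueError("not a salt:// URL")
    rest = url[len(prefix):]
    if rest.startswith("|"):
        # Escaped: everything after the pipe is a literal path,
        # even if it contains a question mark.
        return rest[1:], default_env
    path, sep, query = rest.partition("?")
    if sep and query.startswith("saltenv="):
        return path, query[len("saltenv="):]
    return path, default_env

print(split_salt_url("salt://path/to/file?saltenv=foo"))  # ('path/to/file', 'foo')
print(split_salt_url("salt://|hw/config/?checkpoint"))    # ('hw/config/?checkpoint', 'base')
```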

   Environments
       Since the file server is  made  to  work  with  the  Salt  state  system,  it
       supports environments. The environments are defined in the master config file, and
       when an environment is referenced, the specified file is resolved relative to that
       environment’s root directory.

   get_file
       The cp.get_file function can be used on the minion to download a file from the master, the
       syntax looks like this:

          # salt '*' cp.get_file salt://vimrc /etc/vimrc

       This will instruct all Salt minions to download the vimrc file and copy it to
       /etc/vimrc.

       Template rendering can be enabled on both the source and destination file names like so:

          # salt '*' cp.get_file "salt://{{grains.os}}/vimrc" /etc/vimrc template=jinja

       This example would instruct all Salt minions to download the vimrc from a directory
       with the same name as their OS grain and copy it to /etc/vimrc.

       For larger files, the cp.get_file module also supports gzip compression.  Because gzip  is
       CPU-intensive,  this  should only be used in scenarios where the compression ratio is very
       high (e.g. pretty-printed JSON or YAML files).

       To use compression, use the gzip named argument. Valid values are integers from  1  to  9,
       where  1  is the lightest compression and 9 the heaviest. In other words, 1 uses the least
       CPU on the master (and minion), while 9 uses the most.

          # salt '*' cp.get_file salt://vimrc /etc/vimrc gzip=5
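       The level tradeoff can be seen with Python's standard zlib module, which uses the
       same 1-9 compression levels as gzip (the sample data is illustrative):

```python
import zlib

# Highly compressible sample, similar in spirit to a
# pretty-printed YAML file.
data = ("key: value\n" * 5000).encode()

light = zlib.compress(data, 1)  # least CPU, weakest compression
heavy = zlib.compress(data, 9)  # most CPU, strongest compression

print(len(data), len(light), len(heavy))
```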

       Finally, note that by default cp.get_file does not create new destination  directories  if
       they do not exist.  To change this, use the makedirs argument:

          # salt '*' cp.get_file salt://vimrc /etc/vim/vimrc makedirs=True

       In this example, /etc/vim/ would be created if it didn’t already exist.

   get_dir
       The cp.get_dir function can be used on the minion to download an entire directory from the
       master.  The syntax is very similar to get_file:

          # salt '*' cp.get_dir salt://etc/apache2 /etc

       cp.get_dir supports template rendering and gzip compression arguments just like get_file:

          # salt '*' cp.get_dir salt://etc/{{pillar.webserver}} /etc gzip=5 template=jinja

   File Server Client Instance
       A client instance is available, allowing modules and applications to be written that
       make use of the Salt file server.

       The  file  server uses the same authentication and encryption used by the rest of the Salt
       system for network communication.

   fileclient Module
       The salt/fileclient.py module is used to set up the communication from the minion  to  the
       master.  When  creating  a  client  instance  using  the  fileclient  module,  the  minion
       configuration needs to be passed in. When using the fileclient module from within a minion
       module the built in __opts__ data can be passed:

           import salt.fileclient

          def get_file(path, dest, saltenv='base'):
              '''
              Used to get a single file from the Salt master

              CLI Example:
              salt '*' cp.get_file salt://vimrc /etc/vimrc
              '''
              # Get the fileclient object
              client = salt.fileclient.get_file_client(__opts__)
              # Call get_file
              return client.get_file(path, dest, False, saltenv)

       When creating a fileclient instance outside of a minion module, the __opts__ data is
       not available and must be generated from the minion config:

          import salt.fileclient
          import salt.config

          def get_file(path, dest, saltenv='base'):
              '''
              Used to get a single file from the Salt master
              '''
              # Get the configuration data
              opts = salt.config.minion_config('/etc/salt/minion')
              # Get the fileclient object
              client = salt.fileclient.get_file_client(opts)
              # Call get_file
              return client.get_file(path, dest, False, saltenv)

   Git Fileserver Backend Walkthrough
       NOTE:
          This walkthrough assumes basic knowledge of Salt. To get up to  speed,  check  out  the
          Salt Walkthrough.

       The  gitfs  backend allows Salt to serve files from git repositories. It can be enabled by
       adding git to the fileserver_backend list, and configuring one  or  more  repositories  in
       gitfs_remotes.

       Branches and tags become Salt fileserver environments.

       NOTE:
          Branching  and  tagging  can  result in a lot of potentially-conflicting top files, for
          this reason it may be useful to set top_file_merging_strategy to same in  the  minions’
          config files if the top files are being managed in a GitFS repo.
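       For example, in each minion's config file:

```yaml
top_file_merging_strategy: same
```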

   Installing Dependencies
       Both  pygit2  and GitPython are supported Python interfaces to git. If compatible versions
       of both are installed, pygit2 will be preferred. In these cases, GitPython can  be  forced
       using the gitfs_provider parameter in the master config file.
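       For example, to force GitPython even when a compatible pygit2 is installed, set the
       following in the master config file:

```yaml
gitfs_provider: gitpython
```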

       NOTE:
          It is recommended to always run the most recent version of any of the  below
          dependencies.  Certain features of GitFS may not be available without the most
          recent version of the chosen library.

   pygit2
       The minimum supported version of pygit2 is 0.20.3. Availability for this version of pygit2
       is still limited, though  the  SaltStack  team  is  working  to  get  compatible  versions
       available for as many platforms as possible.

       For  the  Fedora/EPEL  versions  which  have  a new enough version packaged, the following
       command would be used to install pygit2:

          # yum install python-pygit2

       Provided a valid version is packaged for Debian/Ubuntu (which is not currently the  case),
       the package name would be the same, and the following command would be used to install it:

          # apt-get install python-pygit2

       If  pygit2  is  not  packaged  for the platform on which the Master is running, the pygit2
       website has installation instructions here. Keep in  mind  however  that  following  these
       instructions  will  install libgit2 and pygit2 without system packages. Additionally, keep
       in mind that SSH authentication  in  pygit2  requires  libssh2  (not  libssh)  development
       libraries  to  be present before libgit2 is built. On some Debian-based distros pkg-config
       is also required to link libgit2 with libssh2.

       NOTE:
          If you are receiving the error “Unsupported URL Protocol” in the Salt Master  log  when
          making a connection using SSH, review the libssh2 details listed above.

       Additionally, version 0.21.0 of pygit2 introduced a dependency on python-cffi,
       which in turn depends on newer releases of libffi. Upgrading libffi is not advisable
       as several other applications depend on it, so on older LTS Linux releases pygit2
       0.20.3 and libgit2 0.20.0 are the recommended combination.

       WARNING:
          pygit2 is actively developed and frequently makes non-backwards-compatible API changes,
          even  in  minor releases. It is not uncommon for pygit2 upgrades to result in errors in
          Salt. Please take care when upgrading pygit2, and pay close attention to the changelog,
          keeping  an  eye  out  for  API  changes. Errors can be reported on the SaltStack issue
          tracker.

   RedHat Pygit2 Issues
       The release of RedHat/CentOS 7.3 upgraded both python-cffi and http-parser, both of  which
       are  dependencies for pygit2/libgit2. Both pygit2 and libgit2 packages (which are from the
       EPEL repository) should be upgraded to the most recent versions, at least to 0.24.2.

       The below errors will show up in the master log if an incompatible  python-pygit2  package
       is installed:

          2017-02-10 09:07:34,892 [salt.utils.gitfs ][ERROR ][11211] Import pygit2 failed: CompileError: command 'gcc' failed with exit status 1
          2017-02-10 09:07:34,907 [salt.utils.gitfs ][ERROR ][11211] gitfs is configured but could not be loaded, are pygit2 and libgit2 installed?
          2017-02-10 09:07:34,907 [salt.utils.gitfs ][CRITICAL][11211] No suitable gitfs provider module is installed.
          2017-02-10 09:07:34,912 [salt.master ][CRITICAL][11211] Master failed pre flight checks, exiting

       The  below  errors  will  show  up in the master log if an incompatible libgit2 package is
       installed:

          2017-02-15 18:04:45,211 [salt.utils.gitfs ][ERROR   ][6211] Error occurred fetching gitfs remote 'https://foo.com/bar.git': No Content-Type header in response

       A restart of the salt-master daemon and gitfs cache directory clean up may be required  to
       allow http(s) repositories to continue to be fetched.

   GitPython
       GitPython  0.3.0  or  newer  is  required to use GitPython for gitfs. For RHEL-based Linux
       distros, a compatible version is available in EPEL, and can be  easily  installed  on  the
       master using yum:

          # yum install GitPython

       Ubuntu 14.04 LTS and Debian Wheezy (7.x) also have a compatible version packaged:

          # apt-get install python-git

       GitPython  requires  the git CLI utility to work. If installed from a system package, then
       git should already be installed, but if installed via pip then it may still  be  necessary
       to  install  git  separately.  For  MacOS  users, GitPython comes bundled in with the Salt
       installer, but git must still be installed for it to work properly. Git can  be  installed
       in several ways, including by installing XCode.

       WARNING:
          Keep  in  mind  that if GitPython has been previously installed on the master using pip
          (even if it was subsequently uninstalled), then it may still exist in the  build  cache
          (typically   /tmp/pip-build-root/GitPython)   if   the   cache  is  not  cleared  after
          installation. The package in the build cache will override any requirement  specifiers,
          so   if   you   try   upgrading   to   version   0.3.2.RC1   by   running  pip  install
          'GitPython==0.3.2.RC1' then it will ignore this and simply install the version from the
          cache directory.  Therefore, it may be necessary to delete the GitPython directory from
          the build cache in order to ensure that the specified version is installed.

       WARNING:
          GitPython 2.0.9 and newer is not compatible with Python 2.6.  If  installing  GitPython
          using  pip on a machine running Python 2.6, make sure that a version earlier than 2.0.9
          is installed. This can be done on the CLI by running pip install 'GitPython<2.0.9',  or
          in a pip.installed state using the following SLS:

              GitPython:
                pip.installed:
                  - name: 'GitPython < 2.0.9'

   Simple Configuration
       To use the gitfs backend, only two configuration changes are required on the master:

       1. Include gitfs in the fileserver_backend list in the master config file:

             fileserver_backend:
               - gitfs

          NOTE:
             git also works here. Prior to the 2018.3.0 release, only git would work.

       2. Specify  one  or  more  git://,  https://,  file://, or ssh:// URLs in gitfs_remotes to
          configure which repositories to cache and search for requested files:

             gitfs_remotes:
               - https://github.com/saltstack-formulas/salt-formula.git

          SSH remotes can also be configured using scp-like syntax:

             gitfs_remotes:
               - git@github.com:user/repo.git
               - ssh://user@domain.tld/path/to/repo.git

          Information on how to authenticate to SSH remotes can be found here.

       3. Restart the master to load the new configuration.

       NOTE:
          In a master/minion setup, files from a gitfs remote are cached once by the  master,  so
          minions do not need direct access to the git repository.

   Multiple Remotes
       The  gitfs_remotes  option  accepts an ordered list of git remotes to cache and search, in
       listed order, for requested files.

       A simple scenario illustrates this cascading lookup behavior:

       If the gitfs_remotes option specifies three remotes:

          gitfs_remotes:
            - git://github.com/example/first.git
            - https://github.com/example/second.git
            - file:///root/third

       And each repository contains some files:

          first.git:
              top.sls
              edit/vim.sls
              edit/vimrc
              nginx/init.sls

          second.git:
              edit/dev_vimrc
              haproxy/init.sls

          third:
              haproxy/haproxy.conf
              edit/dev_vimrc

       Salt will attempt to look up the requested file from each gitfs remote repository
       in the order in which they are defined in the configuration. The
       git://github.com/example/first.git remote will be searched first.  If the  requested
       file is found, then it is served and no further searching is performed. For example:

       · A   request   for   the   file   salt://haproxy/init.sls   will   be   served  from  the
         https://github.com/example/second.git git repo.

       · A  request  for  the  file  salt://haproxy/haproxy.conf  will   be   served   from   the
         file:///root/third repo.

       NOTE:
          This example is purposefully contrived to illustrate the behavior of the gitfs backend.
          This example should not be read as a recommended way to lay out files and git repos.

          The file:// prefix denotes a git repository in a local  directory.   However,  it  will
          still  use  the  given file:// URL as a remote, rather than copying the git repo to the
          salt cache.  This means that any refs you want accessible must exist as local  refs  in
          the specified repo.

       WARNING:
          Salt  versions  prior  to 2014.1.0 are not tolerant of changing the order of remotes or
          modifying the URI of existing remotes. In those versions, when modifying remotes it  is
          a  good  idea to remove the gitfs cache directory (/var/cache/salt/master/gitfs) before
          restarting the salt-master service.

   Per-remote Configuration Parameters
       New in version 2014.7.0.

       The following master config parameters are global (that is, they apply to  all  configured
       gitfs remotes):

       · gitfs_base

       · gitfs_root

       · gitfs_ssl_verify

       · gitfs_mountpoint (new in 2014.7.0)

       · gitfs_user (pygit2 only, new in 2014.7.0)

       · gitfs_password (pygit2 only, new in 2014.7.0)

       · gitfs_insecure_auth (pygit2 only, new in 2014.7.0)

       · gitfs_pubkey (pygit2 only, new in 2014.7.0)

       · gitfs_privkey (pygit2 only, new in 2014.7.0)

       · gitfs_passphrase (pygit2 only, new in 2014.7.0)

       · gitfs_refspecs (new in 2017.7.0)

       · gitfs_disable_saltenv_mapping (new in 2018.3.0)

       · gitfs_ref_types (new in 2018.3.0)

       · gitfs_update_interval (new in 2018.3.0)

       NOTE:
          pygit2 only supports disabling SSL verification in versions 0.23.2 and newer.

       These parameters can now be overridden on a per-remote basis. This allows for a tremendous
       amount of customization. Here’s some example usage:

          gitfs_provider: pygit2
          gitfs_base: develop

          gitfs_remotes:
            - https://foo.com/foo.git
            - https://foo.com/bar.git:
              - root: salt
              - mountpoint: salt://bar
              - base: salt-base
              - ssl_verify: False
              - update_interval: 120
            - https://foo.com/bar.git:
              - name: second_bar_repo
              - root: other/salt
              - mountpoint: salt://other/bar
              - base: salt-base
              - ref_types:
                - branch
            - http://foo.com/baz.git:
              - root: salt/states
              - user: joe
              - password: mysupersecretpassword
              - insecure_auth: True
              - disable_saltenv_mapping: True
              - saltenv:
                - foo:
                  - ref: foo
            - http://foo.com/quux.git:
              - all_saltenvs: master

       IMPORTANT:
          There  are  two  important  distinctions  which  should   be   noted   for   per-remote
          configuration:

          1. The  URL  of  a  remote  which  has per-remote configuration must be suffixed with a
             colon.

           2. Per-remote configuration parameters are named like the  global  versions,
              with the gitfs_ removed from the beginning. The exceptions are the name,
              saltenv, and all_saltenvs parameters, which are only available to per-remote
              configurations.

          The all_saltenvs parameter is new in the 2018.3.0 release.

       In the example configuration above, the following is true:

       1. The first and fourth gitfs  remotes  will  use  the  develop  branch/tag  as  the  base
          environment,  while  the second and third will use the salt-base branch/tag as the base
          environment.

        2. The first remote will serve all files in the repository. The second  remote
           will only serve files from the salt directory (and its subdirectories). The
           third remote will only serve files from the other/salt directory (and its
           subdirectories), while the fourth remote will only serve files from the
           salt/states directory (and its subdirectories).

       3. The third remote will only serve files from branches, and not from tags or SHAs.

       4. The fourth remote will only have two saltenvs available: base (pointed at develop), and
          foo (pointed at foo).

       5. The  first  and  fourth  remotes  will  have  files  located under the root of the Salt
          fileserver namespace (salt://). The files from the second remote will be located  under
          salt://bar,   while   the   files   from   the  third  remote  will  be  located  under
          salt://other/bar.

        6. The second and third remotes reference the same repository, so a unique name
           must be declared for duplicate gitfs remotes (here, second_bar_repo).

       7. The  fourth  remote  overrides  the  default behavior of not authenticating to insecure
          (non-HTTPS) remotes.

       8. Because all_saltenvs is configured for the fifth  remote,  files  from  the  branch/tag
          master will appear in every fileserver environment.

          NOTE:
             The   use   of  http://  (instead  of  https://)  is  permitted  here  only  because
             authentication is not being used. Otherwise, the  insecure_auth  parameter  must  be
             used (as in the fourth remote) to force Salt to authenticate to an http:// remote.

        9. The second remote will wait 120 seconds between updates instead of the default
           60.

   Per-Saltenv Configuration Parameters
       New in version 2016.11.0.

       For  more  granular  control,  Salt allows the following three things to be overridden for
       individual saltenvs within a given repo:

       · The mountpoint

       · The root

       · The branch/tag to be used for a given saltenv

       Here is an example:

          gitfs_root: salt

          gitfs_saltenv:
            - dev:
              - mountpoint: salt://gitfs-dev
              - ref: develop

          gitfs_remotes:
            - https://foo.com/bar.git:
              - saltenv:
                - staging:
                  - ref: qa
                  - mountpoint: salt://bar-staging
                - dev:
                  - ref: development
            - https://foo.com/baz.git:
              - saltenv:
                - staging:
                  - mountpoint: salt://baz-staging

       Given the above configuration, the following is true:

       1. For  all  gitfs  remotes,  files  for  the  dev   saltenv   will   be   located   under
          salt://gitfs-dev.

       2. For  the  dev saltenv, files from the first remote will be sourced from the development
          branch, while files from the second remote will be sourced from the develop branch.

       3. For  the  staging  saltenv,  files  from  the  first  remote  will  be  located   under
          salt://bar-staging,   while  files  from  the  second  remote  will  be  located  under
          salt://baz-staging.

       4. For all gitfs remotes, and in  all  saltenvs,  files  will  be  served  from  the  salt
          directory (and its subdirectories).

   Custom Refspecs
       New in version 2017.7.0.

       GitFS  will by default fetch remote branches and tags. However, sometimes it can be useful
       to fetch custom refs (such as those created for  GitHub  pull  requests).  To  change  the
       refspecs GitFS fetches, use the gitfs_refspecs config option:

          gitfs_refspecs:
            - '+refs/heads/*:refs/remotes/origin/*'
            - '+refs/tags/*:refs/tags/*'
            - '+refs/pull/*/head:refs/remotes/origin/pr/*'
            - '+refs/pull/*/merge:refs/remotes/origin/merge/*'

       In  the  above  example, in addition to fetching remote branches and tags, GitHub’s custom
       refs for pull requests and merged pull requests will also be fetched. These  special  head
       refs represent the head of the branch which is requesting to be merged, and the merge refs
       represent the result of the base branch after the merge.

       IMPORTANT:
          When using custom  refspecs,  the  destination  of  the  fetched  refs  must  be  under
          refs/remotes/origin/,  preferably  in  a  subdirectory like in the example above. These
          custom refspecs will map as environment names  using  their  relative  path  underneath
          refs/remotes/origin/.  For  example,  assuming the configuration above, the head branch
          for pull request 12345 would map to fileserver environment pr/12345 (slash included).
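       That naming rule can be sketched as follows (an illustration only; the master to
       base mapping assumes the default gitfs_base of master):

```python
def env_from_ref(ref):
    """Map a fetched ref to a fileserver environment name.

    Illustrative sketch of the rule described above: refs fetched
    under refs/remotes/origin/ become environments named after
    their relative path, with the gitfs_base branch (master by
    default) mapping to the base environment.
    """
    prefix = "refs/remotes/origin/"
    if not ref.startswith(prefix):
        return None
    name = ref[len(prefix):]
    return "base" if name == "master" else name

print(env_from_ref("refs/remotes/origin/pr/12345"))  # pr/12345
```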

       Refspecs can be configured on a per-remote basis. For  example,  the  below  configuration
       would  only alter the default refspecs for the second GitFS remote. The first remote would
       only fetch branches and tags (the default).

          gitfs_remotes:
            - https://domain.tld/foo.git
            - https://domain.tld/bar.git:
              - refspecs:
                - '+refs/heads/*:refs/remotes/origin/*'
                - '+refs/tags/*:refs/tags/*'
                - '+refs/pull/*/head:refs/remotes/origin/pr/*'
                - '+refs/pull/*/merge:refs/remotes/origin/merge/*'

   Global Remotes
       New in version 2018.3.0.

       The all_saltenvs per-remote configuration parameter overrides the logic Salt uses  to  map
       branches/tags  to fileserver environments (i.e. saltenvs). This allows a single branch/tag
       to appear in all saltenvs.

       This is particularly useful when working with Salt formulas. Prior to the  addition
       of this feature, it was necessary to push a branch/tag to the remote repo for each
       saltenv in which that formula was to be used. If the formula needed to  be  updated,
       this update would need to be reflected in all of the other branches/tags. This is
       both inconvenient and unscalable.

       With all_saltenvs, it is now possible to define your formula once, in a single branch.

          gitfs_remotes:
            - http://foo.com/quux.git:
              - all_saltenvs: anything

   Update Intervals
       Prior to the 2018.3.0 release, GitFS would update its fileserver backends  as  part  of  a
       dedicated   “maintenance”  process,  in  which  various  routine  maintenance  tasks  were
       performed. This tied the update interval to the  loop_interval  config  option,  and  also
       forced all fileservers to update at the same interval.

       Now it is possible to make GitFS update at its own interval, using gitfs_update_interval:

          gitfs_update_interval: 180

          gitfs_remotes:
            - https://foo.com/foo.git
            - https://foo.com/bar.git:
              - update_interval: 120

       Using  the  above  configuration, the first remote would update every three minutes, while
       the second remote would update every two minutes.

   Configuration Order of Precedence
       The order of precedence for GitFS configuration is as follows (each  level  overrides  all
       levels below it):

       1. Per-saltenv configuration (defined under a per-remote saltenv param)

             gitfs_remotes:
               - https://foo.com/bar.git:
                 - saltenv:
                   - dev:
                     - mountpoint: salt://bar

       2. Global per-saltenv configuration (defined in gitfs_saltenv)

             gitfs_saltenv:
               - dev:
                 - mountpoint: salt://bar

       3. Per-remote configuration parameter

             gitfs_remotes:
               - https://foo.com/bar.git:
                 - mountpoint: salt://bar

       4. Global configuration parameter

             gitfs_mountpoint: salt://bar

       NOTE:
          The  one  exception to the above is when all_saltenvs is used. This value overrides all
          logic for mapping branches/tags to fileserver environments. So, even  if  gitfs_saltenv
          is  used  to globally override the mapping for a given saltenv, all_saltenvs would take
          precedence for any remote which uses it.

          It’s important to note however that  any  root  and  mountpoint  values  configured  in
          gitfs_saltenv (or per-saltenv configuration) would be unaffected by this.
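       The four-level precedence order can be sketched as follows (an illustration only;
       the dictionary structure is simplified and is not Salt's internal representation):

```python
def resolve(param, saltenv, remote_cfg, global_cfg):
    """Resolve a gitfs setting using the precedence order above.

    Checks, in order: per-remote per-saltenv, global per-saltenv,
    per-remote, then the global gitfs_<param> option.
    """
    levels = (
        remote_cfg.get("saltenv", {}).get(saltenv, {}),        # 1. per-remote saltenv
        global_cfg.get("gitfs_saltenv", {}).get(saltenv, {}),  # 2. gitfs_saltenv
        remote_cfg,                                            # 3. per-remote
        global_cfg,                                            # 4. global
    )
    for i, level in enumerate(levels):
        key = "gitfs_" + param if i == 3 else param
        if key in level:
            return level[key]
    return None

remote = {"mountpoint": "salt://bar"}
conf = {"gitfs_mountpoint": "salt://global"}
print(resolve("mountpoint", "dev", remote, conf))  # salt://bar
```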

   Serving from a Subdirectory
       The  gitfs_root  parameter  allows  files  to  be  served  from  a subdirectory within the
       repository. This allows for  only  part  of  a  repository  to  be  exposed  to  the  Salt
       fileserver.

       Assume the below layout:

          .gitignore
          README.txt
          foo/
          foo/bar/
          foo/bar/one.txt
          foo/bar/two.txt
          foo/bar/three.txt
          foo/baz/
          foo/baz/top.sls
          foo/baz/edit/vim.sls
          foo/baz/edit/vimrc
          foo/baz/nginx/init.sls

       The below configuration would serve only the files under foo/baz, ignoring the other files
       in the repository:

          gitfs_remotes:
            - git://mydomain.com/stuff.git

          gitfs_root: foo/baz

       The root can also be configured on a per-remote basis.

   Mountpoints
       New in version 2014.7.0.

       The gitfs_mountpoint parameter will prepend the specified path to the  files  served  from
       gitfs.  This allows an existing repository to be used, rather than needing to reorganize a
       repository or design it around the layout of the Salt fileserver.

       Before the addition of this feature, if a file being served up via gitfs was deeply
       nested within the root directory (for example, salt://webapps/foo/files/foo.conf), it
       would be necessary to ensure that the file was properly located in the remote
       repository, and that all of the parent directories were present (for example, the
       directories webapps/foo/files/ would need to exist at the root of the repository).

       The below example would allow for a file foo.conf at the root  of  the  repository  to  be
       served up from the Salt fileserver path salt://webapps/foo/files/foo.conf.

          gitfs_remotes:
            - https://mydomain.com/stuff.git

          gitfs_mountpoint: salt://webapps/foo/files

       Mountpoints can also be configured on a per-remote basis.

   Using gitfs in Masterless Mode
       Since 2014.7.0, gitfs can be used in masterless mode. To do so, simply add the gitfs
       configuration parameters (and set fileserver_backend) in the minion config file instead
       of the master config file.
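       For example, a minimal masterless sketch in the minion config file (the repository URL
       is hypothetical) might look like:

          fileserver_backend:
            - git

          gitfs_remotes:
            - https://foo.com/bar.git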

   Using gitfs Alongside Other Backends
       Sometimes  it  may  make  sense  to  use multiple backends; for instance, if sls files are
       stored in git but larger files are stored directly on the master.

       The cascading lookup logic used for multiple remotes is also used with multiple  backends.
       If the fileserver_backend option contains multiple backends:

          fileserver_backend:
            - roots
            - git

       Then  the roots backend (the default backend of files in /srv/salt) will be searched first
       for the requested file; then, if it is not found on the master, each configured git remote
       will be searched.
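       The cascading lookup can be sketched in Python (a simplified model for illustration,
       not Salt's actual fileserver code; the backend contents are hypothetical):

```python
# Simplified model of the fileserver's cascading lookup: backends are
# consulted in the order listed in fileserver_backend, and the first
# backend that can serve the requested path wins.
def find_file(path, backend_order, backend_contents):
    for backend in backend_order:
        if path in backend_contents.get(backend, set()):
            return backend
    return None

backends = ["roots", "git"]
contents = {
    "roots": {"large/blob.tar.gz"},           # files under /srv/salt
    "git": {"top.sls", "large/blob.tar.gz"},  # files in git remotes
}

# roots is searched first, so it serves the file both backends contain:
print(find_file("large/blob.tar.gz", backends, contents))  # roots
print(find_file("top.sls", backends, contents))            # git
```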

   Branches, Environments, and Top Files
       When  using the GitFS backend, branches, and tags will be mapped to environments using the
       branch/tag name as an identifier.

       There is one exception to this rule: the master branch is implicitly mapped  to  the  base
       environment.

       So, for a typical base, qa, dev setup, the following branches could be used:

          master
          qa
          dev

       top.sls files from different branches will be merged into one at runtime.  Since this
       can lead to overly complex configurations, the recommended setup is to have a separate
       repository containing only the top.sls file, with a single master branch.

       To map a branch other than master as the base environment, use the gitfs_base parameter.

          gitfs_base: salt-base

       The base can also be configured on a per-remote basis.
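       Per-remote, the base is set like any other per-remote parameter (the repository URL
       below is hypothetical):

          gitfs_remotes:
            - https://foo.com/bar.git:
              - base: salt-base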

   Environment Whitelist/Blacklist
       New in version 2014.7.0.

       The  gitfs_saltenv_whitelist  and  gitfs_saltenv_blacklist  parameters  allow  for greater
       control over which branches/tags are exposed as fileserver  environments.  Exact  matches,
       globs, and regular expressions are supported, and are evaluated in that order.  If using a
       regular expression, ^ and $ must be omitted, and the  expression  must  match  the  entire
       branch/tag.

          gitfs_saltenv_whitelist:
            - base
            - v1.*
            - 'mybranch\d+'

       NOTE:
          v1.*,  in  this  example, will match as both a glob and a regular expression (though it
          will  have  been  matched  as  a  glob,  since  globs  are  evaluated  before   regular
          expressions).

       The  behavior of the blacklist/whitelist will differ depending on which combination of the
       two options is used:

       · If only gitfs_saltenv_whitelist  is  used,  then  only  branches/tags  which  match  the
         whitelist will be available as environments

       · If  only  gitfs_saltenv_blacklist  is  used,  then  the  branches/tags  which  match the
         blacklist will not be available as environments

       · If both are used, then the branches/tags which match the whitelist, but do not match the
         blacklist, will be available as environments.
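       The matching rules above can be sketched in Python (a simplified model of the
       documented behavior, not Salt's implementation; fnmatch stands in for glob matching):

```python
import fnmatch
import re

def env_matches(env, pattern):
    """Exact match, then glob, then full-string regular expression,
    mirroring the documented evaluation order."""
    if env == pattern:
        return True
    if fnmatch.fnmatch(env, pattern):
        return True
    return re.fullmatch(pattern, env) is not None

def env_available(env, whitelist=None, blacklist=None):
    """An environment is exposed if it matches the whitelist (when one
    is set) and does not match the blacklist (when one is set)."""
    if whitelist and not any(env_matches(env, p) for p in whitelist):
        return False
    if blacklist and any(env_matches(env, p) for p in blacklist):
        return False
    return True

wl = ["base", "v1.*", r"mybranch\d+"]
print(env_available("v1.2", whitelist=wl))        # True (glob match)
print(env_available("mybranch7", whitelist=wl))   # True (regex match)
print(env_available("feature-x", whitelist=wl))   # False (no match)
```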

   Authentication
   pygit2
       New in version 2014.7.0.

       Both  HTTPS  and  SSH  authentication  are  supported  as  of version 0.20.3, which is the
       earliest version of pygit2 supported by Salt for gitfs.

       NOTE:
          The examples below make use of per-remote configuration parameters, a  feature  new  to
          Salt 2014.7.0. More information on these can be found here.

   HTTPS
       For  HTTPS  repositories  which  require  authentication, the username and password can be
       provided like so:

          gitfs_remotes:
            - https://domain.tld/myrepo.git:
              - user: git
              - password: mypassword

       If the repository is served over HTTP instead of HTTPS, then Salt will by  default  refuse
       to  authenticate  to  it.  This  behavior  can  be  overridden  by adding an insecure_auth
       parameter:

          gitfs_remotes:
            - http://domain.tld/insecure_repo.git:
              - user: git
              - password: mypassword
              - insecure_auth: True

   SSH
       SSH repositories can be  configured  using  the  ssh://  protocol  designation,  or  using
       scp-like syntax. So, the following two configurations are equivalent:

       · ssh://git@github.com/user/repo.git

       · git@github.com:user/repo.git

       Both  gitfs_pubkey and gitfs_privkey (or their per-remote counterparts) must be configured
       in order to authenticate to SSH-based repos. If  the  private  key  is  protected  with  a
       passphrase,  it  can  be  configured using gitfs_passphrase (or simply passphrase if being
       configured per-remote). For example:

          gitfs_remotes:
            - git@github.com:user/repo.git:
              - pubkey: /root/.ssh/id_rsa.pub
              - privkey: /root/.ssh/id_rsa
              - passphrase: myawesomepassphrase

       Finally, the SSH host key must be added to the known_hosts file.

       NOTE:
          There is a known issue with public-key SSH authentication to Microsoft Visual Studio
          Team Services (VSTS) with pygit2. This is due to a bug in, or lack of support for
          VSTS in, older libssh2 releases. Known working releases include libssh2 1.7.0 and
          later, and known incompatible releases include 1.5.0 and older. At the time of this
          writing, 1.6.0 has not been tested.

          Since upgrading libssh2 would require rebuilding  many  other  packages  (curl,  etc.),
          followed  by  a  rebuild of libgit2 and a reinstall of pygit2, an easier workaround for
          systems with  older  libssh2  is  to  use  GitPython  with  a  passphraseless  key  for
          authentication.

   GitPython
   HTTPS
       For  HTTPS  repositories  which  require  authentication, the username and password can be
       configured in one of two ways. The first way is to include  them  in  the  URL  using  the
       format https://<user>:<password>@<url>, like so:

          gitfs_remotes:
            - https://git:mypassword@domain.tld/myrepo.git

       The other way would be to configure the authentication in ~/.netrc:

          machine domain.tld
          login git
          password mypassword

       If  the  repository is served over HTTP instead of HTTPS, then Salt will by default refuse
       to authenticate to it.  This  behavior  can  be  overridden  by  adding  an  insecure_auth
       parameter:

          gitfs_remotes:
            - http://git:mypassword@domain.tld/insecure_repo.git:
              - insecure_auth: True

   SSH
       Only  passphrase-less SSH public key authentication is supported using GitPython. The auth
       parameters (pubkey, privkey, etc.) shown in the pygit2 authentication  examples  above  do
       not work with GitPython.

          gitfs_remotes:
            - ssh://git@github.com/example/salt-states.git

       Since GitPython wraps the git CLI, the private key must be located in ~/.ssh/id_rsa for
       the user under which the Master is running, and should have permissions of 0600. Also,
       in the absence of a user in the repo URL, GitPython will (just as SSH does) attempt to
       log in as the current user (in other words, the user under which the Master is running,
       usually root).

       If  a  key  needs to be used, then ~/.ssh/config can be configured to use the desired key.
       Information on how to do this can be found by viewing the manpage for  ssh_config.  Here’s
       an  example  entry  which  can  be  added to the ~/.ssh/config to use an alternate key for
       gitfs:

          Host github.com
              IdentityFile /root/.ssh/id_rsa_gitfs

       The Host parameter should be a hostname (or hostname glob) that matches the domain name of
       the git repository.

       It is also necessary to add the SSH host key to the known_hosts file. The exception to
       this would be if strict host key checking is disabled, which can be done by adding
       StrictHostKeyChecking no to the entry in ~/.ssh/config:

          Host github.com
              IdentityFile /root/.ssh/id_rsa_gitfs
              StrictHostKeyChecking no

       However, this is generally regarded as insecure, and is not recommended.

   Adding the SSH Host Key to the known_hosts File
       To use SSH authentication, it is necessary to have the remote repository’s SSH host key in
       the ~/.ssh/known_hosts file. If the master is also a minion, this can be  done  using  the
       ssh.set_known_host function:

          # salt mymaster ssh.set_known_host user=root hostname=github.com
          mymaster:
              ----------
              new:
                  ----------
                  enc:
                      ssh-rsa
                  fingerprint:
                      16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
                  hostname:
                      |1|OiefWWqOD4kwO3BhoIGa0loR5AA=|BIXVtmcTbPER+68HvXmceodDcfI=
                  key:
                      AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
              old:
                  None
              status:
                  updated

       If  not,  then  the  easiest  way to add the key is to su to the user (usually root) under
       which the salt-master runs and attempt to login to the server via SSH:

          $ su -
          Password:
          # ssh github.com
          The authenticity of host 'github.com (192.30.252.128)' can't be established.
          RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
          Are you sure you want to continue connecting (yes/no)? yes
          Warning: Permanently added 'github.com,192.30.252.128' (RSA) to the list of known hosts.
          Permission denied (publickey).

       It doesn’t matter if the login was successful, as answering yes will write the fingerprint
       to the known_hosts file.

   Verifying the Fingerprint
       To verify that the correct fingerprint was added, it is a good idea to look it up. One way
       to do this is to use nmap:

          $ nmap -p 22 github.com --script ssh-hostkey

          Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-18 17:47 CDT
          Nmap scan report for github.com (192.30.252.129)
          Host is up (0.17s latency).
          Not shown: 996 filtered ports
          PORT     STATE SERVICE
          22/tcp   open  ssh
          | ssh-hostkey: 1024 ad:1c:08:a4:40:e3:6f:9c:f5:66:26:5d:4b:33:5d:8c (DSA)
          |_2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 (RSA)
          80/tcp   open  http
          443/tcp  open  https
          9418/tcp open  git

          Nmap done: 1 IP address (1 host up) scanned in 28.78 seconds

       Another way is to query the server directly with ssh-keyscan, using this one-liner:

          $ ssh-keygen -l -f /dev/stdin <<<`ssh-keyscan github.com 2>/dev/null` | awk '{print $2}'
          16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48

       WARNING:
          AWS tracks usage of nmap and may flag it as abuse. On AWS hosts, the ssh-keygen  method
          is recommended for host key verification.

       NOTE:
          As  of OpenSSH 6.8 the SSH fingerprint is now shown as a base64-encoded SHA256 checksum
          of   the   host   key.    So,    instead    of    the    fingerprint    looking    like
          16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48,      it      would      look      like
          SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
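       The relationship between the two fingerprint formats can be illustrated with Python's
       standard library, using the base64-encoded host key from the ssh.set_known_host
       example earlier in this section:

```python
import base64
import hashlib

# GitHub's RSA host key, copied from the known_hosts example above.
key_b64 = "AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ=="
raw = base64.b64decode(key_b64)

# Pre-6.8 style: colon-separated hex MD5 digest of the raw key bytes.
md5_hex = hashlib.md5(raw).hexdigest()
md5_fp = ":".join(md5_hex[i:i + 2] for i in range(0, len(md5_hex), 2))

# OpenSSH 6.8+ style: base64-encoded SHA256 digest, '=' padding stripped.
sha256_fp = "SHA256:" + base64.b64encode(hashlib.sha256(raw).digest()).decode().rstrip("=")

print(md5_fp)     # 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
print(sha256_fp)  # SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8
```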

   Refreshing gitfs Upon Push
       By default, Salt updates the remote fileserver backends every 60 seconds.  However, if
       it is desirable to refresh more quickly than that, the Reactor System can be used to
       signal the master to update the fileserver on each push, provided that the git server
       is also a Salt minion. The process involves the following steps:

       1. On  the  master,  create  a file /srv/reactor/update_fileserver.sls, with the following
          contents:

             update_fileserver:
               runner.fileserver.update

       2. Add the following reactor configuration to the master config file:

             reactor:
               - 'salt/fileserver/gitfs/update':
                 - /srv/reactor/update_fileserver.sls

       3. On the git server, add a post-receive hook

          a. If the user executing git push is the same as the minion  user,  use  the  following
             hook:

                 #!/usr/bin/env sh
                 salt-call event.fire_master update salt/fileserver/gitfs/update

          b. To enable other git users to run the hook after a push, use sudo in the hook script:

                 #!/usr/bin/env sh
                 sudo -u root salt-call event.fire_master update salt/fileserver/gitfs/update

       4. If  using  sudo in the git hook (above), the policy must be changed to permit all users
          to fire the event.  Add the following policy to the sudoers file on the git server.

             Cmnd_Alias SALT_GIT_HOOK = /bin/salt-call event.fire_master update salt/fileserver/gitfs/update
             Defaults!SALT_GIT_HOOK !requiretty
             ALL ALL=(root) NOPASSWD: SALT_GIT_HOOK

       The update argument right after event.fire_master in this example can really be  anything,
       as  it  represents  the  data being passed in the event, and the passed data is ignored by
       this reactor.

       Similarly, the tag name salt/fileserver/gitfs/update can be replaced by anything, so  long
       as the usage is consistent.

       The  root user name in the hook script and sudo policy should be changed to match the user
       under which the minion is running.

   Using Git as an External Pillar Source
       The git external pillar (a.k.a. git_pillar) has been rewritten for the  2015.8.0  release.
       This  rewrite  brings  with  it  pygit2  support  (allowing  for  access  to authenticated
       repositories), as well  as  more  granular  support  for  per-remote  configuration.  This
       configuration schema is detailed here.

   Why aren’t my custom modules/states/etc. syncing to my Minions?
       In  versions  0.16.3 and older, when using the git fileserver backend, certain versions of
       GitPython may generate errors when fetching, which Salt fails to catch. While not fatal to
       the  fetch  process,  these interrupt the fileserver update that takes place before custom
       types are synced, and thus interrupt the sync itself. Try  disabling  the  git  fileserver
       backend in the master config, restarting the master, and attempting the sync again.

       This issue is worked around in Salt 0.16.4 and newer.

   MinionFS Backend Walkthrough
       New in version 2014.1.0.

       NOTE:
          This walkthrough assumes basic knowledge of Salt and cp.push. To get up to speed, check
          out the Salt Walkthrough.

       Sometimes it is desirable to deploy a file located on one minion  to  one  or  more  other
       minions. This is supported in Salt, and can be accomplished in two parts:

       1. Minion support for pushing files to the master (using cp.push)

       2. The minionfs fileserver backend

       This walkthrough will show how to use both of these features.

   Enabling File Push
       To  set the master to accept files pushed from minions, the file_recv option in the master
       config file must be set to True (the default is False).

          file_recv: True

       NOTE:
          This change requires a restart of the salt-master service.

   Pushing Files
       Once this has been done, files can be pushed to the master using the cp.push function:

          salt 'minion-id' cp.push /path/to/the/file

       This command will store the file in  a  subdirectory  named  minions  under  the  master’s
       cachedir.  On  most masters, this path will be /var/cache/salt/master/minions. Within this
       directory will be one directory for each minion which has pushed a file to the master, and
       underneath that the full path to the file on the minion. So, for example, if a minion with
       an ID of dev1 pushed a file /var/log/myapp.log  to  the  master,  it  would  be  saved  to
       /var/cache/salt/master/minions/dev1/var/log/myapp.log.
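       The destination path is derived mechanically from the cachedir, the minion ID, and the
       file's path on the minion; a minimal Python sketch (not Salt's actual code):

```python
# Sketch of how the cached path is built: the minion-side absolute path
# is re-rooted under <cachedir>/minions/<minion_id>.
def pushed_file_dest(cachedir, minion_id, path):
    return "/".join([cachedir, "minions", minion_id]) + path

dest = pushed_file_dest("/var/cache/salt/master", "dev1", "/var/log/myapp.log")
print(dest)  # /var/cache/salt/master/minions/dev1/var/log/myapp.log
```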

   Serving Pushed Files Using MinionFS
       While  it  is  certainly  possible  to  add /var/cache/salt/master/minions to the master’s
       file_roots and serve these files, it may only be desirable to  expose  files  pushed  from
       certain  minions.  Adding  /var/cache/salt/master/minions/<minion-id> for each minion that
       needs to be exposed can be cumbersome and prone to errors.

       Enter minionfs. This fileserver backend will make files pushed using cp.push available  to
       the  Salt  fileserver,  and  provides  an easy mechanism to restrict which minions’ pushed
       files are made available.

   Simple Configuration
       To  use  the  minionfs  backend,  add  minionfs  to  the   list   of   backends   in   the
       fileserver_backend configuration option on the master:

          file_recv: True

          fileserver_backend:
            - roots
            - minionfs

       NOTE:
          The backend name minion also works here; prior to the 2018.3.0 release, only minion
          (and not minionfs) would work.

          Also,  as  described earlier, file_recv: True is needed to enable the master to receive
          files pushed from minions. As always, changes to the  master  configuration  require  a
          restart of the salt-master service.

       Files     made     available     via     minionfs    are    by    default    located    at
       salt://<minion-id>/path/to/file. Think back to the earlier example, in which dev1 pushed a
       file  /var/log/myapp.log  to  the  master.  With  minionfs  enabled,  this  file  would be
       addressable in Salt at salt://dev1/var/log/myapp.log.

       If many minions have pushed to the master, this will result in  many  directories  in  the
       root   of   the   Salt  fileserver.  For  this  reason,  it  is  recommended  to  use  the
       minionfs_mountpoint config option to organize these files underneath a subdirectory:

          minionfs_mountpoint: salt://minionfs

       Using  the  above  mountpoint,  the  file   in   the   example   would   be   located   at
       salt://minionfs/dev1/var/log/myapp.log.

   Restricting Certain Minions’ Files from Being Available Via MinionFS
       A  whitelist  and  blacklist  can  be  used to restrict the minions whose pushed files are
       available via minionfs. These lists  can  be  managed  using  the  minionfs_whitelist  and
       minionfs_blacklist  config  options.  Click  the  links  for  both  of them for a detailed
       explanation of how to use them.

       A more complex configuration example, which uses both a whitelist and  blacklist,  can  be
       found below:

          file_recv: True

          fileserver_backend:
            - roots
            - minionfs

          minionfs_mountpoint: salt://minionfs

          minionfs_whitelist:
            - host04
            - web*
            - 'mail\d+\.domain\.tld'

          minionfs_blacklist:
            - web21

   Potential Concerns
       · There  is  no  access  control  in  place to restrict which minions have access to files
         served up by minionfs. All minions will have access to these files.

       · Unless the minionfs_whitelist and/or minionfs_blacklist config  options  are  used,  all
         minions  which  push  files  to  the  master  will  have  their files made available via
         minionfs.

   Salt Package Manager
       The Salt Package Manager, or SPM, enables Salt formulas to be packaged to simplify
       distribution to Salt masters. The design of SPM was influenced by other existing
       packaging systems including RPM, Yum, and Pacman.

       [image]

       NOTE:
          The previous diagram shows each SPM component as a different system, but  this  is  not
          required. You can build packages and host the SPM repo on a single Salt master if you’d
          like.

       Packaging System

       The packaging system is used to package the state, pillar, file templates, and other files
       used  by your formula into a single file. After a formula package is created, it is copied
       to the Repository System where it is made available to Salt masters.

       See Building SPM Packages

       Repo System

       The Repo system stores the SPM package and metadata files and makes them available to Salt
       masters via http(s), ftp, or file URLs. SPM repositories can be hosted on a Salt Master, a
       Salt Minion, or on another system.

       See Distributing SPM Packages

       Salt Master

       SPM provides Salt master settings that let you configure the URL of one or more SPM repos.
       You  can  then  quickly install packages that contain entire formulas to your Salt masters
       using SPM.

       See Installing SPM Packages

       Contents

   Building SPM Packages
       The first step when using Salt Package Manager is to build packages for each of the
       formulas that you want to distribute. Packages can be built on any system where you
       can install Salt.

   Package Build Overview
       To build a package, all state, pillar, jinja, and file templates used by your formula  are
       assembled  into  a  folder  on  the  build  system.  These  files can be cloned from a Git
       repository, such as those found at  the  saltstack-formulas  organization  on  GitHub,  or
       copied directly to the folder.

       The following diagram demonstrates a typical formula layout on the build system: [image]

       In  this  example,  all  formula  files are placed in a myapp-formula folder.  This is the
       folder that is targeted by the spm build command when this package is built.

       Within this folder, pillar data is placed in a pillar.example file at the  root,  and  all
       state,  jinja,  and  template  files are placed within a subfolder that is named after the
       application being packaged. State  files  are  typically  contained  within  a  subfolder,
       similar  to  how state files are organized in the state tree. Any non-pillar files in your
       package that are not contained in a subfolder are placed at the  root  of  the  spm  state
       tree.

       Additionally,  a  FORMULA  file is created and placed in the root of the folder. This file
       contains package metadata that is used by SPM.

   Package Installation Overview
       When building packages, it is useful to know where files are installed on the Salt
       master.  During installation, all files except pillar.example and FORMULA are copied
       directly to the spm state tree on the Salt master (located at /srv/spm/salt).

       If a pillar.example file is present in the root, it is renamed to <formula  name>.sls.orig
       and placed in the pillar_path.  [image]

       NOTE:
          Even  though  the  pillar  data  file  is  copied to the pillar root, you still need to
          manually assign this pillar data to systems using the pillar top file.  This  file  can
          also  be duplicated and renamed so the .orig version is left intact in case you need to
          restore it later.

   Building an SPM Formula Package
       1. Assemble formula files in a folder on the build system.

       2. Create a FORMULA file and place it in the root of the package folder.

       3. Run spm build <folder name>. The package is built  and  placed  in  the  /srv/spm_build
          folder.

             spm build /path/to/salt-packages-source/myapp-formula

       4. Copy the .spm file to a folder on the repository system.

   Types of Packages
       SPM  supports  different types of packages. The function of each package is denoted by its
       name. For instance, packages which end in -formula are considered to be Salt  States  (the
       most  common  type of formula). Packages which end in -conf contain configuration which is
       to be placed in the /etc/salt/ directory. Packages which do not contain one of these names
       are treated as if they have a -formula name.
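       This naming rule can be sketched as follows (a simplified illustration, not SPM's
       actual code; the reactor suffix corresponds to the reactor package type described
       below):

```python
# Simplified model of how a package's type is inferred from its name:
# a recognized suffix selects the type, and anything else is treated
# as a formula.
def package_type(name):
    for suffix in ("-formula", "-conf", "-reactor"):
        if name.endswith(suffix):
            return suffix.lstrip("-")
    return "formula"

print(package_type("apache-formula"))  # formula
print(package_type("myteam-conf"))     # conf
print(package_type("apache"))          # formula
```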

   formula
       By default, most files from this type of package live in the /srv/spm/salt/ directory. The
       exception is the pillar.example file, which will  be  renamed  to  <package_name>.sls  and
       placed in the pillar directory (/srv/spm/pillar/ by default).

   reactor
       By default, files from this type of package live in the /srv/spm/reactor/ directory.

   conf
       The files in this type of package are configuration files for Salt, which normally live in
       the /etc/salt/ directory. Configuration files for packages other than Salt can and  should
       be handled with a Salt State (using a formula type of package).

   Technical Information
       Packages  are  built  using  BZ2-compressed  tarballs. By default, the package database is
       stored using the sqlite3 driver (see Loader Modules below).

       Support for both is built into Python, and so no external dependencies are needed.

       All other files belonging to SPM use YAML, for portability, ease of use, and
       maintainability.
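       Because an .spm file is a BZ2-compressed tarball, its contents can be inspected with
       Python's standard tarfile module. The sketch below builds a tiny package-like archive
       in memory (the member name is hypothetical) and lists it back:

```python
import io
import tarfile

# Build a small tarball in memory the same way SPM packages are stored:
# a bz2-compressed tar archive.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:bz2") as tar:
    data = b"name: myapp-formula\n"
    info = tarfile.TarInfo("myapp-formula/FORMULA")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Reading it back is how one would list an .spm file's contents.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:bz2") as tar:
    names = tar.getnames()
print(names)  # ['myapp-formula/FORMULA']
```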

   SPM-Specific Loader Modules
       SPM  was  designed  to  behave like traditional package managers, which apply files to the
       filesystem and store package  metadata  in  a  local  database.  However,  because  modern
       infrastructures often extend beyond those use cases, certain parts of SPM have been broken
       out into their own set of modules.

   Package Database
       By default, the package database is stored using  the  sqlite3  module.  This  module  was
       chosen because support for SQLite3 is built into Python itself.

       Please  see  the SPM Development Guide for information on creating new modules for package
       database management.

   Package Files
       By default, package files are installed using the local module. This module applies  files
       to the local filesystem, on the machine that the package is installed on.

       Please  see  the SPM Development Guide for information on creating new modules for package
       file management.

   Distributing SPM Packages
       SPM packages can be distributed to Salt masters over HTTP(S), FTP, or through the file
       system. The SPM repo can be hosted on any system where you can install Salt; Salt is
       needed only to run the spm create_repo command when you update or add a package to the
       repo. SPM repos do not require the salt-master, salt-minion, or any other process
       running on the system.

       NOTE:
          If you are hosting the SPM repo on a system where you can not or do not want to install
          Salt,  you  can  run  the spm create_repo command on the build system and then copy the
          packages and the generated SPM-METADATA file to the repo.  You  can  also  install  SPM
          files directly on a Salt master, bypassing the repository completely.

   Setting up a Package Repository
       After packages are built, the generated SPM files are placed in the /srv/spm_build
       folder.

       Where  you  place the built SPM files on your repository server depends on how you plan to
       make them available to your Salt masters.

       You can share the /srv/spm_build folder on the network, or copy the files to your FTP
       or Web server.

   Adding a Package to the repository
       New packages are added by copying the SPM file to the repo folder and then regenerating
       the repo metadata.

   Generate Repo Metadata
       Each time you update or add an SPM package to your repository, issue  an  spm  create_repo
       command:

          spm create_repo /srv/spm_build

       SPM generates the repository metadata for all of the packages in that directory and places
       it in an SPM-METADATA file at the folder root. This command is  used  even  if  repository
       metadata already exists in that directory.

   Installing SPM Packages
       SPM  packages  are installed to your Salt master, where they are available to Salt minions
       using all of Salt’s package management functions.

   Configuring Remote Repositories
       Before SPM can use a repository, two things need to happen. First, the Salt  master  needs
       to  know  where  the  repository is through a configuration process. Then it needs to pull
       down the repository metadata.

   Repository Configuration Files
       Repositories are configured by adding each of them to  the  /etc/salt/spm.repos.d/spm.repo
       file  on  each Salt master. This file contains the name of the repository, and the link to
       the repository:

          my_repo:
            url: https://spm.example.com/

       For HTTP/HTTPS Basic authentication you can define credentials:

          my_repo:
            url: https://spm.example.com/
            username: user
            password: pass

       Because this file can contain credentials, guard against unauthorized access by
       restricting it to permissions of 0640 or stricter.
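       As a sketch of tightening those permissions (demonstrated here on a temporary file; on
       a real master the target would be the repo file under /etc/salt/spm.repos.d/):

```shell
# Demonstrate 0640 permissions on a scratch file; substitute the real
# spm.repo path on an actual master.
cfg=$(mktemp)
chmod 0640 "$cfg"
stat -c '%a' "$cfg"   # prints 640
rm -f "$cfg"
```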

       The URL can use http, https, ftp, or file.

          my_repo:
            url: file:///srv/spm_build

   Updating Local Repository Metadata
       After  the  repository is configured on the Salt master, repository metadata is downloaded
       using the spm update_repo command:

          spm update_repo

       NOTE:
          A file for each repo is placed in /var/cache/salt/spm on the Salt master after you  run
          the update_repo command. If you add a repository and it does not seem to be showing up,
          check this path to verify that the repository was found.

   Update File Roots
       SPM packages are installed to the /srv/spm/salt folder on your Salt master.  This path
       needs to be added to the file roots on your Salt master manually.

          file_roots:
            base:
              - /srv/salt
              - /srv/spm/salt

       Restart the salt-master service after updating the file_roots setting.

   Installing Packages
       To install a package, use the spm install command:

          spm install apache

       WARNING:
          Currently,  SPM  does  not check to see if files are already in place before installing
          them. That means that existing files will be overwritten without warning.

   Installing directly from an SPM file
       You can also install a package directly from a local  SPM  file,  using  the  spm  local
       install command:

          spm local install /srv/spm/apache-201506-1.spm

       An SPM repository is not required when using spm local install.

   Pillars
       If  an  installed  package includes Pillar data, be sure to target the installed pillar to
       the necessary systems using the pillar Top file.

   Removing Packages
       Packages may be removed after they are installed using the spm remove command.

          spm remove apache

       If files have been modified, they will not be removed.  Empty  directories  will  also  be
       removed.

   SPM Configuration
       There  are  a  number  of  options that are specific to SPM. They may be configured in the
       master configuration file, or in SPM’s own spm configuration  file  (normally  located  at
       /etc/salt/spm).  If  configured in both places, the spm file takes precedence. In general,
       these values will not need to be changed from the defaults.

   spm_logfile
       Default: /var/log/salt/spm

       Where SPM logs messages.

   spm_repos_config
       Default: /etc/salt/spm.repos

       SPM repositories  are  configured  with  this  file.  There  is  also  a  directory  which
       corresponds to it, which ends in .d. For instance, if the filename is /etc/salt/spm.repos,
       the directory will be /etc/salt/spm.repos.d/.

   spm_cache_dir
       Default: /var/cache/salt/spm

       When SPM updates package repository metadata and downloads packages, they will be  placed
       in this directory. The package database, normally called packages.db, also lives in  this
       directory.

   spm_db
       Default: /var/cache/salt/spm/packages.db

       The location and name of the package database. This database stores the names  of  all  of
       the  SPM packages installed on the system, the files that belong to them, and the metadata
       for those files.

   spm_build_dir
       Default: /srv/spm_build

       When packages are built, they will be placed in this directory.

   spm_build_exclude
       Default: ['.git']

       When SPM builds a package, it normally adds all files in  the  formula  directory  to  the
       package. Files listed here will be excluded from that package. This option requires a list
       to be specified.

          spm_build_exclude:
            - .git
            - .svn

   Types of Packages
       SPM supports different types of formula packages. The function of each package is  denoted
       by its name. For instance, packages which end in -formula are considered to be Salt States
       (the most common type of formula). Packages which end in -conf contain configuration which
       is  to  be  placed in the /etc/salt/ directory. Packages which do not contain one of these
       names are treated as if they have a -formula name.

   formula
       By default, most files from this type of package live in the /srv/spm/salt/ directory. The
       exception  is  the  pillar.example  file,  which will be renamed to <package_name>.sls and
       placed in the pillar directory (/srv/spm/pillar/ by default).

   reactor
       By default, files from this type of package live in the /srv/spm/reactor/ directory.

   conf
       The files in this type of package are configuration files for Salt, which normally live in
       the  /etc/salt/ directory. Configuration files for packages other than Salt can and should
       be handled with a Salt State (using a formula type of package).

   FORMULA File
       In addition to the formula itself, a FORMULA file must exist which describes the  package.
       An example of this file is:

          name: apache
          os: RedHat, Debian, Ubuntu, SUSE, FreeBSD
          os_family: RedHat, Debian, Suse, FreeBSD
          version: 201506
          release: 2
          summary: Formula for installing Apache
          description: Formula for installing Apache

   Required Fields
       This file must contain at least the following fields:

   name
       The  name  of  the  package,  as it will appear in the package filename, in the repository
       metadata, and the package database. Even if the source formula has -formula in  its  name,
       this   name   should   probably  not  include  that.  For  instance,  when  packaging  the
       apache-formula, the name should be set to apache.

   os
       The value of the os grain that this formula supports. This is  used  to  help  users  know
       which operating systems can support this package.

   os_family
       The  value  of  the os_family grain that this formula supports. This is used to help users
       know which operating system families can support this package.

   version
       The version of the package. While it is up to the organization that manages this  package,
       it  is suggested that this version is specified in a YYYYMM format.  For instance, if this
       version was released in June 2015, the package  version  should  be  201506.  If  multiple
       releases are made in a month, the release field should be used.

   minimum_version
       Minimum recommended version of Salt to use this formula. Not currently enforced.

   release
       This  field  refers  primarily  to  a  release of a version, but also to multiple versions
       within a month. In general, if a version has been made public, and immediate updates  need
       to be made to it, this field should also be updated.

   summary
       A one-line description of the package.

   description
       A more detailed description of the package which can contain more than one line.

   Optional Fields
       The following fields may also be present.

   top_level_dir
       This  field  is optional, but highly recommended. If it is not specified, the package name
       will be used.

       Formula repositories typically do not store .sls files in  the  root  of  the  repository;
       instead  they  are  stored  in  a subdirectory. For instance, an apache-formula repository
       would contain a directory called apache, which would contain an init.sls, plus a number of
       other related files. In this instance, the top_level_dir should be set to apache.

       Files  outside  the  top_level_dir,  such  as README.rst, FORMULA, and LICENSE will not be
       installed. The exceptions to this rule are files that are already treated specially,  such
       as pillar.example and _modules/.

   dependencies
       A  comma-separated  list  of packages that must be installed along with this package. When
       this package is installed, SPM will attempt to discover  and  install  these  packages  as
       well. If it is unable to, then it will refuse to install this package.

       This  is  useful  for creating packages which tie together other packages. For instance, a
       package called wordpress-mariadb-apache would depend upon wordpress, mariadb, and apache.

   optional
       A comma-separated list of packages which are related to  this  package,  but  are  neither
       required  nor  necessarily recommended. This list is displayed in an informational message
       when the package is installed to SPM.

   recommended
       A comma-separated list of optional packages that are recommended to be installed with  the
       package.  This list is displayed in an informational message when the package is installed
       to SPM.

   files
       A files section can be added, to specify a list of files  to  add  to  the  SPM.   Such  a
       section might look like:

          files:
            - _pillar
            - FORMULA
            - _runners
            - d|mymodule/index.rst
            - r|README.rst

       When  files  are  specified, then only those files will be added to the SPM, regardless of
       what other files exist in the directory. They will also be added in the  order  specified,
       which is useful if you have a need to lay down files in a specific order.

       As can be seen in the example above, you may also tag files as being a specific type. This
       is done by pre-pending a filename with its type, followed by a  pipe  (|)  character.  The
       above example contains a document file and a readme. The available file types are:

       · c: config file

       · d: documentation file

       · g: ghost file (i.e. the file contents are not included in the package payload)

       · l: license file

       · r: readme file

       · s: SLS file

       · m: Salt module

       The  first  5  of  these  types  (c, d, g, l, r) will be placed in /usr/share/salt/spm/ by
       default. This can be changed by setting  an  spm_share_dir  value  in  your  /etc/salt/spm
       configuration file.

       The last two types (s and m) are currently ignored, but they are reserved for future use.
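
       The tag convention can be illustrated with a small hypothetical helper (not part of SPM’s
       API), which splits a files entry into its optional one-letter type and the filename:

```python
# Hypothetical helper (not part of SPM's API) that splits a 'files'
# entry into its optional one-letter type tag and the filename, per
# the '<type>|<filename>' convention described above.
FILE_TYPES = set('cdglrsm')

def split_entry(entry):
    # 'r|README.rst' -> ('r', 'README.rst'); untagged entries get None.
    if len(entry) > 2 and entry[1] == '|' and entry[0] in FILE_TYPES:
        return entry[0], entry[2:]
    return None, entry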

   Pre and Post States
       It  is  possible to run Salt states before and after installing a package by using pre and
       post states. The following sections may be declared in a FORMULA:

       · pre_local_state

       · pre_tgt_state

       · post_local_state

       · post_tgt_state

       Sections with pre in their name are evaluated before a package is installed  and  sections
       with  post  are  evaluated after a package is installed. local states are evaluated before
       tgt states.

       Each of these sections needs to be evaluated as text, rather than as YAML.   Consider  the
       following block:

          pre_local_state: >
            echo test > /tmp/spmtest:
              cmd:
                - run

       Note  that this declaration uses > after pre_local_state. This is a YAML marker that marks
       the next multi-line block as text, including newlines. It is important to use this  marker
       whenever  declaring  pre  or  post  states, so that the text following it can be evaluated
       properly.

   local States
       local states are evaluated locally; this is analogous to  issuing  a  state  run  using  a
       salt-call  --local command. These commands will be issued on the local machine running the
       spm command, whether that machine is a master or a minion.

       local states do not require any special arguments, but they must still use the > marker to
       denote that the state is evaluated as text, not a data structure.

          pre_local_state: >
            echo test > /tmp/spmtest:
              cmd:
                - run

   tgt States
       tgt  states are issued against a remote target. This is analogous to issuing a state using
       the salt command. As such it requires that the machine that the spm command is running  on
       is a master.

       Because  tgt  states  require  that  a target be specified, their code blocks are a little
       different. Consider the following state:

          pre_tgt_state:
            tgt: '*'
            data: >
              echo test > /tmp/spmtest:
                cmd:
                  - run

       With tgt states, the state data is placed under a data  section,  inside  the  *_tgt_state
       code block. The target is of course specified as a tgt and you may also optionally specify
       a tgt_type (the default is glob).

       You still need to use the > marker, but this time it follows the data  line,  rather  than
       the *_tgt_state line.

   Templating States
       The  reason  that  state  data  must  be evaluated as text rather than a data structure is
       because that state data is first processed through the rendering engine, as  it  would  be
       with a standard state run.

       This  means  that  you  can  use Jinja or any other supported renderer inside of Salt. All
       formula variables are available to the renderer, so you can reference FORMULA data  inside
       your state if you need to:

          pre_tgt_state:
            tgt: '*'
            data: >
              echo {{ name }} > /tmp/spmtest:
                cmd:
                  - run

       You  may also declare your own variables inside the FORMULA. If SPM doesn’t recognize them
       then it will ignore them, so there are no  restrictions  on  variable  names,  outside  of
       avoiding reserved words.

       By default the renderer is set to yaml_jinja. You may change this by changing the renderer
       setting in the FORMULA itself.

   Building a Package
       Once a FORMULA file has been created, it is placed into the root of the formula that is to
       be  turned  into  a  package.  The  spm  build command is used to turn that formula into a
       package:

          spm build /path/to/saltstack-formulas/apache-formula

       The resulting file will be placed in the build directory. By  default  this  directory  is
       located at /srv/spm_build/.

   Loader Modules
       When  an  execution  module  is  placed  in  <file_roots>/_modules/ on the master, it will
       automatically be synced to minions, the next time a  sync  operation  takes  place.  Other
       modules are also propagated this way: state modules can be placed in _states/, and so on.

       When  SPM  detects  a  file  in  a package which resides in one of these directories, that
       directory will be placed in <file_roots> instead of in the formula directory with the rest
       of the files.

   Technical Information
       Packages are built using BZ2-compressed tarballs. By  default,  the  package  database  is
       stored using the sqlite3 driver (see Loader Modules below).

       Support for these are built into Python, and so no external dependencies are needed.

       All  other  files  belonging  to  SPM  use  YAML,  for  portability  and  ease  of use and
       maintainability.

   SPM-Specific Loader Modules
       SPM was designed to behave like traditional package managers, which  apply  files  to  the
       filesystem  and  store  package  metadata  in  a  local  database. However, because modern
       infrastructures often extend beyond those use cases, certain parts of SPM have been broken
       out into their own set of modules.

   Package Database
       By  default,  the  package  database  is  stored using the sqlite3 module. This module was
       chosen because support for SQLite3 is built into Python itself.

       Please see the SPM Development Guide for information on creating new modules  for  package
       database management.

   Package Files
       By  default, package files are installed using the local module. This module applies files
       to the local filesystem, on the machine that the package is installed on.

       Please see the SPM Development Guide for information on creating new modules  for  package
       file management.

   SPM Development Guide
       This document discusses developing additional code for SPM.

   SPM-Specific Loader Modules
       SPM  was  designed  to  behave like traditional package managers, which apply files to the
       filesystem and store package  metadata  in  a  local  database.  However,  because  modern
       infrastructures often extend beyond those use cases, certain parts of SPM have been broken
       out into their own set of modules.

       Each function that accepts arguments has a set of required and  optional  arguments.  Take
       note  that SPM will pass all arguments in, and therefore each function must accept each of
       those arguments. However, arguments that are marked as required are crucial to SPM’s  core
       functionality,  while  arguments  that are marked as optional are provided as a benefit to
       the module, if it needs to use them.

   Package Database
       By default, the package database is stored using  the  sqlite3  module.  This  module  was
       chosen because support for SQLite3 is built into Python itself.

       Modules  for  managing the package database are stored in the salt/spm/pkgdb/ directory. A
       number of functions must exist to support database management.

   init()
       Get a database connection, and initialize the package database if necessary.

       This function accepts no arguments. If a database is  used  which  supports  a  connection
       object,  then that connection object is returned. For instance, the sqlite3 module returns
       a connect() object from the sqlite3 library:

          conn = sqlite3.connect(__opts__['spm_db'], isolation_level=None)
          ...
          return conn

       SPM itself will not use this connection object; it will be passed in as-is  to  the  other
       functions  in  the module. Therefore, when you set up this object, make sure to do so in a
       way that is easily usable throughout the module.
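
       A fuller (hypothetical) sketch of such an init(), creating the schema on first use, might
       look like the following; the in-memory path and table layout are  assumptions  made  for
       the example, not Salt’s actual schema:

```python
import sqlite3

# Sketch of a pkgdb init(): open the database and create the schema if
# needed. The ':memory:' path and table layout are illustrative only;
# the real module opens __opts__['spm_db'] instead.
def init():
    conn = sqlite3.connect(':memory:', isolation_level=None)
    conn.execute(
        'CREATE TABLE IF NOT EXISTS packages '
        '(package TEXT, version TEXT, release TEXT, summary TEXT)'
    )
    return conn
```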

   info()
       Return information for a package. This generally  consists  of  the  information  that  is
       stored in the FORMULA file in the package.

       The arguments that are passed in, in order, are package (required) and conn (optional).

       package  is  the name of the package, as specified in the FORMULA.  conn is the connection
       object returned from init().

   list_files()
       Return a list of files for an installed package. Only the filename should be returned, and
       no other information.

       The arguments that are passed in, in order, are package (required) and conn (optional).

       package  is  the name of the package, as specified in the FORMULA.  conn is the connection
       object returned from init().

   register_pkg()
       Register a package in the package database. Nothing is expected to be returned  from  this
       function.

       The  arguments  that are passed in, in order, are name (required), formula_def (required),
       and conn (optional).

       name is the name of the package, as specified in the FORMULA.  formula_def is the contents
       of the FORMULA file, as a dict. conn is the connection object returned from init().
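
       A hedged sketch of register_pkg() against the same kind of assumed schema  (illustrative
       only; not Salt’s actual table layout):

```python
import sqlite3

# Sketch of register_pkg() against an assumed (illustrative) schema.
# The conn argument is the object returned from init().
def register_pkg(name, formula_def, conn=None):
    conn.execute(
        'INSERT INTO packages (package, version, summary) VALUES (?, ?, ?)',
        (name, str(formula_def.get('version')), formula_def.get('summary')),
    )

# Minimal in-memory stand-in for init(), so the sketch is self-contained:
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE packages (package TEXT, version TEXT, summary TEXT)')
register_pkg('apache', {'version': 201506, 'summary': 'Formula for installing Apache'}, conn)
```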

   register_file()
       Register  a  file  in  the  package database. Nothing is expected to be returned from this
       function.

       The arguments that are passed in are name (required), member (required), path  (required),
       digest (optional), and conn (optional).

       name is the name of the package.

       member  is a tarfile object for the package file. It is included, because it contains most
       of the information for the file.

       path is the location of the file on the local filesystem.

       digest is the SHA1 checksum of the file.

       conn is the connection object returned from init().

   unregister_pkg()
       Unregister a package from the package database. This usually only  involves  removing  the
       package’s record from the database. Nothing is expected to be returned from this function.

       The arguments that are passed in, in order, are name (required) and conn (optional).

       name  is  the  name  of  the  package, as specified in the FORMULA. conn is the connection
       object returned from init().

   unregister_file()
       Unregister a file from the package database. This  usually  only  involves  removing  the
       file’s record from the database. Nothing is expected to be returned from this function.

       The  arguments  that are passed in, in order, are name (required), pkg (optional) and conn
       (optional).

       name is the path of the file, as it was installed on the filesystem.

       pkg is the name of the package that the file belongs to.

       conn is the connection object returned from init().

   db_exists()
       Check to see whether the package database already exists. This function will return  True
       or False.

       The only argument that is expected is db_, which is the path  to  the  package  database
       file.

   Package Files
       By  default, package files are installed using the local module. This module applies files
       to the local filesystem, on the machine that the package is installed on.

       Modules for managing package files are stored in  the  salt/spm/pkgfiles/  directory.  A
       number of functions must exist to support file management.

   init()
       Initialize  the  installation  location  for  the  package  files.  Normally these will be
       directory paths, but other external destinations such as databases can be used.  For  this
       reason,  this  function  will  return a connection object, which can be a database object.
       However, in the default local module, this object is a dict  containing  the  paths.  This
       object will be passed into all other functions.

       Three   directories   are  used  for  the  destinations:  formula_path,  pillar_path,  and
       reactor_path.

       formula_path is the location of most of the files that will be installed.  The default  is
       specific to the operating system, but is normally /srv/salt/.

       pillar_path  is  the  location  that  the  pillar.example  file will be installed to.  The
       default is specific to the operating system, but is normally /srv/pillar/.

       reactor_path is the location that reactor files will  be  installed  to.  The  default  is
       specific to the operating system, but is normally /srv/reactor/.

   check_existing()
       Check the filesystem for existing files. All files for the package will be checked, and if
       any are existing, then this function will normally state that SPM will refuse  to  install
       the package.

       This function returns a list of the files that exist on the system.

       The arguments that are passed  into  this  function  are,  in  order:  package (required),
       pkg_files (required), formula_def (required), and conn (optional).

       package is the name of the package that is to be installed.

       pkg_files is a list of the files to be checked.

       formula_def is a copy of the information that is stored in the FORMULA file.

       conn is the file connection object.
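
       As a hedged sketch, a minimal check_existing() could look like the following; the  logic
       is hypothetical, and the real local module also maps files to their install  destinations
       before checking:

```python
import os.path

# Sketch of a check_existing() for a pkgfiles module: return the subset
# of the package's files already present on the filesystem. Illustrative
# only; the real 'local' module resolves install paths first.
def check_existing(package, pkg_files, formula_def, conn=None):
    return [f for f in pkg_files if os.path.exists(f)]
```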

   install_file()
       Install a single file to the destination (normally on the filesystem).

       This function returns the final location that the file was installed to.

       The  arguments  that  are  passed  into  this  function are, in order, package (required),
       formula_tar (required), member (required), formula_def (required), and conn (optional).

       package is the name of the package that is to be installed.

       formula_tar is the tarfile object for the package. This is passed in so that the  function
       can call formula_tar.extract() for the file.

       member is the tarfile object which represents the individual file. This may be modified as
       necessary, before being passed into formula_tar.extract().

       formula_def is a copy of the information from the FORMULA file.

       conn is the file connection object.

   remove_file()
       Remove a single file from the filesystem.  Normally  this  will  be  little  more  than
       os.remove(). Nothing is expected to be returned from this function.

       The  arguments  that are passed into this function are, in order, path (required) and conn
       (optional).

       path is the absolute path to the file to be removed.

       conn is the file connection object.

   hash_file()
       Returns the hexdigest hash value of a file.

       The arguments that are passed into this function are, in order, path  (required),  hashobj
       (required), and conn (optional).

       path is the absolute path to the file.

       hashobj  is  a  reference to hashlib.sha1(), which is used to pull the hexdigest() for the
       file.

       conn is the file connection object.

       This function will not generally be more complex than:

          def hash_file(path, hashobj, conn=None):
              with salt.utils.files.fopen(path, 'rb') as f:
                  hashobj.update(f.read())
                  return hashobj.hexdigest()
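
       A standalone version of this function can be exercised as follows  (plain  open()  stands
       in for salt.utils.files.fopen, and binary mode is used so hashobj.update()  always  gets
       bytes):

```python
import hashlib
import os
import tempfile

# Standalone version of hash_file(); plain open() stands in for
# salt.utils.files.fopen. The file is read in binary mode so that
# hashobj.update() receives bytes.
def hash_file(path, hashobj, conn=None):
    with open(path, 'rb') as f:
        hashobj.update(f.read())
        return hashobj.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'hello')
digest = hash_file(tmp.name, hashlib.sha1())
os.remove(tmp.name)
# digest == 'aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d'
```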

   path_exists()
       Check to see whether the file already exists on the filesystem. Returns True or False.

       This function expects a path argument, which is the  absolute  path  to  the  file  to  be
       checked.

   path_isdir()
       Check to see whether the path specified is a directory. Returns True or False.

       This function expects a path argument, which is the absolute path to be checked.
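
       For illustration, in the default local module these two checks amount to  thin  wrappers
       around os.path (a sketch, not the verbatim source):

```python
import os.path

# Sketch of the two path checks from a 'local'-style pkgfiles module;
# both are essentially thin wrappers around os.path.
def path_exists(path):
    return os.path.exists(path)

def path_isdir(path):
    return os.path.isdir(path)
```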

   Storing Data in Other Databases
       The  SDB interface is designed to store and retrieve data that, unlike pillars and grains,
       is not necessarily minion-specific. The initial design goal was to allow passwords  to  be
       stored  in  a  secure database, such as one managed by the keyring package, rather than as
       plain-text files. However, as a generic database interface, it could conceptually be  used
       for a number of other purposes.

       SDB was added to Salt in version 2014.7.0.

   SDB Configuration
       In  order  to  use  the  SDB  interface,  a  configuration  profile must be set up.  To be
       available for master commands, such as runners, it needs to be configured  in  the  master
       configuration.  For  modules  executed  on  a  minion,  it can be set either in the minion
       configuration file, or as a pillar. The configuration stanza includes the name/ID that the
       profile  will  be  referred  to  as,  a  driver  setting, and any other arguments that are
       necessary for the SDB module that will be used. For instance, a profile called  mykeyring,
       which uses the system service in the keyring module would look like:

          mykeyring:
            driver: keyring
            service: system

       It  is recommended to keep the name of the profile simple, as it is used in the SDB URI as
       well.

   SDB URIs
       SDB is designed to make small database queries (hence the name, SDB) using a compact  URL.
       This  allows  users  to  reference  a  database  value  quickly  inside  a  number of Salt
       configuration areas, without a lot of overhead. The basic format of an SDB URI is:

          sdb://<profile>/<args>

       The profile refers to the configuration profile defined in either the master or the minion
       configuration  file.  The  args are specific to the module referred to in the profile, but
       will typically only need to refer to the key of a key/value pair inside the database. This
       is because the profile itself should define as many other parameters as possible.
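
       The split described above can be sketched with a small hypothetical helper (this is  not
       Salt’s actual parsing code):

```python
# Hypothetical helper showing how an SDB URI breaks into a profile name
# and driver-specific args; not Salt's actual implementation.
def parse_sdb_uri(uri):
    prefix = 'sdb://'
    if not uri.startswith(prefix):
        raise ValueError('not an SDB URI: ' + uri)
    # Everything up to the first '/' is the profile; the rest is args.
    profile, _, args = uri[len(prefix):].partition('/')
    return profile, args
```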

       For  example,  a profile might be set up to reference credentials for a specific OpenStack
       account. The profile might look like:

          kevinopenstack:
            driver: keyring
            service: salt.cloud.openstack.kevin

       And the URI used to reference the password might look like:

          sdb://kevinopenstack/password

   Getting, Setting and Deleting SDB Values
       Once an SDB driver is configured, you can use the sdb execution module  to  get,  set  and
       delete values from it. There are three functions that may appear in most SDB modules: get,
       set and delete.

       Getting a value requires only the SDB URI to be specified. To retrieve a  value  from  the
       kevinopenstack profile above, you would use:

          salt-call sdb.get sdb://kevinopenstack/password

       Setting  a  value uses the same URI as would be used to retrieve it, followed by the value
       as another argument.

          salt-call sdb.set 'sdb://myvault/secret/salt/saltstack' 'super awesome'

       Deleting values (if supported by the driver) is done in much the same way as getting them.
       Provided that you have a profile called mykvstore whose driver supports  deleting  values,
       you would delete a value as shown below:

          salt-call sdb.delete 'sdb://mykvstore/foobar'

       The sdb.get, sdb.set and sdb.delete functions are also available in the runner system:

          salt-run sdb.get 'sdb://myvault/secret/salt/saltstack'
          salt-run sdb.set 'sdb://myvault/secret/salt/saltstack' 'super awesome'
          salt-run sdb.delete 'sdb://mykvstore/foobar'

   Using SDB URIs in Files
       SDB URIs can be used in both configuration files, and files  that  are  processed  by  the
       renderer  system  (jinja,  mako, etc.). In a configuration file (such as /etc/salt/master,
       /etc/salt/minion, /etc/salt/cloud, etc.), make an entry as usual, and set the value to the
       SDB URI. For instance:

          mykey: sdb://myetcd/mykey

       To  retrieve  this  value  using  a module, the module in question must use the config.get
       function to retrieve configuration values. This would look something like:

          mykey = __salt__['config.get']('mykey')

       Templating renderers use a similar construct. To get the mykey value from above in  Jinja,
       you would use:

          {{ salt['config.get']('mykey') }}

       When  retrieving  data  from  configuration  files using config.get, the SDB URI need only
       appear in the configuration file itself.

       If you would like to retrieve a key directly from SDB, you would call the sdb.get function
       directly, using the SDB URI. For instance, in Jinja:

          {{ salt['sdb.get']('sdb://myetcd/mykey') }}

       When  writing Salt modules, it is not recommended to call sdb.get directly, as it requires
       the user to provide values in SDB, using a specific URI. Use config.get instead.

   Writing SDB Modules
       There is currently one function that MUST exist in any SDB module (get()), one that SHOULD
       exist (set_()) and one that MAY  exist  (delete()).  If  a  set_()  function  is  used,  a
       __func_alias__ dictionary MUST be declared in the module as well:

          __func_alias__ = {
              'set_': 'set',
          }

       This is because set is a Python built-in, so functions should not be named set(). The
       __func_alias__ functionality is provided via Salt’s loader interfaces, and  allows
       legally-named functions to be referred to by names that would otherwise be unwise to use.

       The  get()  function is required, as it will be called via functions in other areas of the
       code which make use of the sdb:// URI. For example, the config.get function in the  config
       execution module uses this function.

       The  set_()  function  may  be  provided,  but  is  not  required,  as some sources may be
       read-only, or may be otherwise unwise to access via a URI (for instance,  because  of  SQL
       injection attacks).

       The delete() function may be provided as well, but is not required, as many sources may be
       read-only or restrict such operations.

       A simple example of an SDB module is salt/sdb/keyring_db.py, as it provides basic examples
       of  most,  if  not  all, of the types of functionality that are available not only for SDB
       modules, but for Salt modules in general.
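
       The required and optional functions described above can be sketched as a minimal
       in-memory SDB module. This is a hypothetical illustration (the plain dict stands in
       for a real backing store such as etcd or keyring):

```python
# Hypothetical, minimal SDB module sketch. A real module would read and
# write an external store; a plain dict stands in for one here.
_STORE = {}

# 'set' is a Python built-in, so the function is named set_ and aliased.
__func_alias__ = {
    'set_': 'set',
}


def get(key, profile=None):
    '''Required: fetch the value for a key (called via sdb:// URIs).'''
    return _STORE.get(key)


def set_(key, value, profile=None):
    '''Optional: store a value under a key.'''
    _STORE[key] = value
    return value


def delete(key, profile=None):
    '''Optional: remove a key, returning True on success.'''
    return _STORE.pop(key, None) is not None
```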

   Running the Salt Master/Minion as an Unprivileged User
       While the default setup runs the master and minion as the root user, some may consider  it
       an extra measure of security to run the master as a non-root user. Keep in mind that doing
       so does not change the master’s capability to access minions as the user they are  running
       as. Due to this, many feel that running the master as a non-root user grants no  real
       security advantage, which is why the master has remained root by default.

       NOTE:
          Some of Salt’s operations cannot execute correctly when the master is  not  running  as
          root,  specifically  the  pam external auth system, as this system needs root access to
          check authentication.

       As of Salt 0.9.10 it is possible to run Salt as a non-root  user.  This  can  be  done  by
       setting the user parameter in the master configuration  file  and  restarting  the
       salt-master service.
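
       For example, to run the master as a hypothetical user named salt, in /etc/salt/master:

```yaml
user: salt
```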

       The minion has its own user parameter as well, but running the minion as an  unprivileged
       user will keep it from making changes to things like  users,  installed  packages,  etc.
       unless access controls (sudo, etc.) are set up on the minion to permit the non-root  user
       to make the needed changes.

       In order to allow Salt to successfully run as a non-root user, ownership and  permissions
       need to be set such that the desired user  can  read  from  and  write  to  the  following
       directories (and their subdirectories, where applicable):

       · /etc/salt

       · /var/cache/salt

       · /var/log/salt

       · /var/run/salt

       Ownership can be easily changed with chown, like so:

          # chown -R user /etc/salt /var/cache/salt /var/log/salt /var/run/salt

       WARNING:
          Running  either  the master or minion with the root_dir parameter specified will affect
          these paths, as will setting  options  like  pki_dir,  cachedir,  log_file,  and  other
          options that normally live in the above directories.

   Using cron with Salt
       The Salt Minion can initiate its own highstate using the salt-call command.

          $ salt-call state.apply

       This  will  cause  the  minion to check in with the master and ensure it is in the correct
       “state”.

   Use cron to initiate a highstate
       If you would like the Salt Minion to regularly check in with the master you can  use  cron
       to run the salt-call command:

          0 0 * * * salt-call state.apply

       The above cron entry will run a highstate every day at midnight.

       NOTE:
          When  executing  Salt  using  cron, keep in mind that the default PATH for cron may not
          include the path for any scripts or commands used by Salt, and it may be  necessary  to
          set the PATH accordingly in the crontab:

              PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/opt/bin

              0 0 * * * salt-call state.apply

   Hardening Salt
       This  topic  contains tips you can use to secure and harden your Salt environment. How you
       best secure and harden your Salt environment depends heavily on how you  use  Salt,  where
       you  use  Salt,  how  your  team is structured, where you get data from, and what kinds of
       access (internal and external) you require.

   General hardening tips
       · Restrict who can directly log into your Salt master system.

       · Use SSH keys secured with a passphrase to gain access to the Salt master system.

       · Track and secure SSH keys and any other login credentials you and your team need to gain
         access to the Salt master system.

       · Use a hardened bastion server or a VPN to restrict direct access to the Salt master from
         the internet.

       · Don’t expose the Salt master any more than what is required.

       · Harden the system as you would with any high-priority target.

       · Keep the system patched and up-to-date.

       · Use tight firewall rules.

   Salt hardening tips
       · Subscribe to salt-users or  salt-announce  so  you  know  when  new  Salt  releases  are
         available. Keep your systems up-to-date with the latest patches.

       · Use  Salt’s  Client  ACL  system to avoid having to give out root access in order to run
         Salt commands.

       · Use Salt’s Client ACL system to restrict which users can run what commands.

       · Use external Pillar to pull data into Salt from external sources so  that  non-sysadmins
         (other  teams,  junior  admins,  developers, etc) can provide configuration data without
         needing access to the Salt master.

       · Make  heavy  use  of  SLS  files  that  are  version-controlled   and   go   through   a
         peer-review/code-review  process  before they’re deployed and run in production. This is
         good advice even for  “one-off”  CLI  commands  because  it  helps  mitigate  typos  and
         mistakes.

       · Use salt-api, SSL, and restrict authentication with the external auth system if you need
         to expose your Salt master to external services.

       · Make use of Salt’s event system and reactor to allow minions to signal the  Salt  master
         without requiring direct access.

       · Run the salt-master daemon as non-root.

       · Use the disable_modules setting to control which modules are loaded onto minions.  (For
         example, disable the cmd module if it makes sense in your environment.)

       · Look through the fully-commented sample master and minion config files. There  are  many
         options for securing an installation.

       · Run masterless-mode minions on particularly sensitive minions. There are also  salt-ssh
         and modules.sudo if you need to further restrict a minion.

   Security disclosure policy
       email  security@saltstack.com

       gpg key ID
              4EA0793D

       gpg key fingerprint
              8ABE 4EFC F0F4 B24B FF2A  AF90 D570 F2D3 4EA0 793D

       gpg public key:

          -----BEGIN PGP PUBLIC KEY BLOCK-----
          Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

          mQINBFO15mMBEADa3CfQwk5ED9wAQ8fFDku277CegG3U1hVGdcxqKNvucblwoKCb
          hRK6u9ihgaO9V9duV2glwgjytiBI/z6lyWqdaD37YXG/gTL+9Md+qdSDeaOa/9eg
          7y+g4P+FvU9HWUlujRVlofUn5Dj/IZgUywbxwEybutuzvvFVTzsn+DFVwTH34Qoh
          QIuNzQCSEz3Lhh8zq9LqkNy91ZZQO1ZIUrypafspH6GBHHcE8msBFgYiNBnVcUFH
          u0r4j1Rav+621EtD5GZsOt05+NJI8pkaC/dDKjURcuiV6bhmeSpNzLaXUhwx6f29
          Vhag5JhVGGNQxlRTxNEM86HEFp+4zJQ8m/wRDrGX5IAHsdESdhP+ljDVlAAX/ttP
          /Ucl2fgpTnDKVHOA00E515Q87ZHv6awJ3GL1veqi8zfsLaag7rw1TuuHyGLOPkDt
          t5PAjsS9R3KI7pGnhqI6bTOi591odUdgzUhZChWUUX1VStiIDi2jCvyoOOLMOGS5
          AEYXuWYP7KgujZCDRaTNqRDdgPd93Mh9JI8UmkzXDUgijdzVpzPjYgFaWtyK8lsc
          Fizqe3/Yzf9RCVX/lmRbiEH+ql/zSxcWlBQd17PKaL+TisQFXcmQzccYgAxFbj2r
          QHp5ABEu9YjFme2Jzun7Mv9V4qo3JF5dmnUk31yupZeAOGZkirIsaWC3hwARAQAB
          tDBTYWx0U3RhY2sgU2VjdXJpdHkgVGVhbSA8c2VjdXJpdHlAc2FsdHN0YWNrLmNv
          bT6JAj4EEwECACgFAlO15mMCGwMFCQeGH4AGCwkIBwMCBhUIAgkKCwQWAgMBAh4B
          AheAAAoJENVw8tNOoHk9z/MP/2vzY27fmVxU5X8joiiturjlgEqQw41IYEmWv1Bw
          4WVXYCHP1yu/1MC1uuvOmOd5BlI8YO2C2oyW7d1B0NorguPtz55b7jabCElekVCh
          h/H4ZVThiwqgPpthRv/2npXjIm7SLSs/kuaXo6Qy2JpszwDVFw+xCRVL0tH9KJxz
          HuNBeVq7abWD5fzIWkmGM9hicG/R2D0RIlco1Q0VNKy8klG+pOFOW886KnwkSPc7
          JUYp1oUlHsSlhTmkLEG54cyVzrTP/XuZuyMTdtyTc3mfgW0adneAL6MARtC5UB/h
          q+v9dqMf4iD3wY6ctu8KWE8Vo5MUEsNNO9EA2dUR88LwFZ3ZnnXdQkizgR/Aa515
          dm17vlNkSoomYCo84eN7GOTfxWcq+iXYSWcKWT4X+h/ra+LmNndQWQBRebVUtbKE
          ZDwKmiQz/5LY5EhlWcuU4lVmMSFpWXt5FR/PtzgTdZAo9QKkBjcv97LYbXvsPI69
          El1BLAg+m+1UpE1L7zJT1il6PqVyEFAWBxW46wXCCkGssFsvz2yRp0PDX8A6u4yq
          rTkt09uYht1is61joLDJ/kq3+6k8gJWkDOW+2NMrmf+/qcdYCMYXmrtOpg/wF27W
          GMNAkbdyzgeX/MbUBCGCMdzhevRuivOI5bu4vT5s3KdshG+yhzV45bapKRd5VN+1
          mZRquQINBFO15mMBEAC5UuLii9ZLz6qHfIJp35IOW9U8SOf7QFhzXR7NZ3DmJsd3
          f6Nb/habQFIHjm3K9wbpj+FvaW2oWRlFVvYdzjUq6c82GUUjW1dnqgUvFwdmM835
          1n0YQ2TonmyaF882RvsRZrbJ65uvy7SQxlouXaAYOdqwLsPxBEOyOnMPSktW5V2U
          IWyxsNP3sADchWIGq9p5D3Y/loyIMsS1dj+TjoQZOKSj7CuRT98+8yhGAY8YBEXu
          9r3I9o6mDkuPpAljuMc8r09Im6az2egtK/szKt4Hy1bpSSBZU4W/XR7XwQNywmb3
          wxjmYT6Od3Mwj0jtzc3gQiH8hcEy3+BO+NNmyzFVyIwOLziwjmEcw62S57wYKUVn
          HD2nglMsQa8Ve0e6ABBMEY7zGEGStva59rfgeh0jUMJiccGiUDTMs0tdkC6knYKb
          u/fdRqNYFoNuDcSeLEw4DdCuP01l2W4yY+fiK6hAcL25amjzc+yYo9eaaqTn6RAT
          bzdhHQZdpAMxY+vNT0+NhP1Zo5gYBMR65Zp/VhFsf67ijb03FUtdw9N8dHwiR2m8
          vVA8kO/gCD6wS2p9RdXqrJ9JhnHYWjiVuXR+f755ZAndyQfRtowMdQIoiXuJEXYw
          6XN+/BX81gJaynJYc0uw0MnxWQX+A5m8HqEsbIFUXBYXPgbwXTm7c4IHGgXXdwAR
          AQABiQIlBBgBAgAPBQJTteZjAhsMBQkHhh+AAAoJENVw8tNOoHk91rcQAIhxLv4g
          duF/J1Cyf6Wixz4rqslBQ7DgNztdIUMjCThg3eB6pvIzY5d3DNROmwU5JvGP1rEw
          hNiJhgBDFaB0J/y28uSci+orhKDTHb/cn30IxfuAuqrv9dujvmlgM7JUswOtLZhs
          5FYGa6v1RORRWhUx2PQsF6ORg22QAaagc7OlaO3BXBoiE/FWsnEQCUsc7GnnPqi7
          um45OJl/pJntsBUKvivEU20fj7j1UpjmeWz56NcjXoKtEvGh99gM5W2nSMLE3aPw
          vcKhS4yRyLjOe19NfYbtID8m8oshUDji0XjQ1z5NdGcf2V1YNGHU5xyK6zwyGxgV
          xZqaWnbhDTu1UnYBna8BiUobkuqclb4T9k2WjbrUSmTwKixokCOirFDZvqISkgmN
          r6/g3w2TRi11/LtbUciF0FN2pd7rj5mWrOBPEFYJmrB6SQeswWNhr5RIsXrQd/Ho
          zvNm0HnUNEe6w5YBfA6sXQy8B0Zs6pcgLogkFB15TuHIIIpxIsVRv5z8SlEnB7HQ
          Io9hZT58yjhekJuzVQB9loU0C/W0lzci/pXTt6fd9puYQe1DG37pSifRG6kfHxrR
          if6nRyrfdTlawqbqdkoqFDmEybAM9/hv3BqriGahGGH/hgplNQbYoXfNwYMYaHuB
          aSkJvrOQW8bpuAzgVyd7TyNFv+t1kLlfaRYJ
           =wBTJ
           -----END PGP PUBLIC KEY BLOCK-----

       The SaltStack Security Team is available at  security@saltstack.com  for  security-related
       bug reports or questions.

       We request that any security-related bugs or issues be reported non-publicly  until  the
       issue can be resolved and a security-fix release can be prepared. At that  time  we  will
       release the fix and make a public announcement  with  upgrade  instructions  and  download
       locations.

   Security response procedure
       SaltStack takes security and the trust of our customers  and  users  very  seriously.  Our
       disclosure  policy  is  intended  to  resolve  security issues as quickly and safely as is
       possible.

       1. A security report sent to security@saltstack.com is assigned to  a  team  member.  This
          person  is  the primary contact for questions and will coordinate the fix, release, and
          announcement.

       2. The reported issue is reproduced  and  confirmed.  A  list  of  affected  projects  and
          releases is made.

       3. Fixes  are  implemented  for  all  affected  projects  and  releases  that are actively
          supported. Back-ports of the fix are  made  to  any  old  releases  that  are  actively
          supported.

       4. Packagers  are  notified via the salt-packagers mailing list that an issue was reported
          and resolved, and that an announcement is incoming.

       5. A new release  is  created  and  pushed  to  all  affected  repositories.  The  release
          documentation  provides  a full description of the issue, plus any upgrade instructions
          or other relevant details.

       6. An announcement is  made  to  the  salt-users  and  salt-announce  mailing  lists.  The
          announcement  contains  a  description  of  the  issue  and  a link to the full release
          documentation and download locations.

   Receiving security announcements
       The fastest place to receive security announcements is via the salt-announce mailing list.
       This list is low-traffic.

   Salt Transport
       One of the fundamental features of Salt is remote  execution.  Salt  has  two  basic
       “channels” for communicating with minions. Each channel requires a client (minion)  and  a
       server (master) implementation to work within Salt. These pairs of channels work  together
       to implement the specific message passing required by the channel interface.

   Pub Channel
       The pub channel, or publish channel, is how a master sends a job (payload)  to  a  minion.
       This  is a basic pub/sub paradigm, which has specific targeting semantics.  All data which
       goes across the publish system should be encrypted such that  only  members  of  the  Salt
       cluster can decrypt the publishes.

   Req Channel
       The req channel is how the minions send data to the master. This interface  is  primarily
       used for fetching files and returning  job  returns.  The  req  channels  have  two  basic
       interfaces when talking to the master: send is the basic method; it guarantees  that  the
       message is encrypted at least so that only minions attached to the same master  can  read
       it, but it makes no guarantee  of  minion-master  confidentiality.  The
       crypted_transfer_decode_dictentry method, by contrast, does  guarantee  minion-master
       confidentiality.

   Zeromq Transport
       NOTE:
          Zeromq is the current default transport within Salt

       Zeromq is a messaging library with bindings  into  many  languages.  Zeromq  implements  a
       socket interface for message passing, with specific semantics for the socket type.

   Pub Channel
       The pub channel is implemented using zeromq’s pub/sub sockets. By default  we  don’t  use
       zeromq’s filtering, which means that all publish jobs are sent  to  all  minions  and
       filtered on the minion side. Zeromq does have publisher-side filtering,  which  can  be
       enabled in Salt using zmq_filtering.
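
       For example, to turn on publisher-side filtering, in the master configuration file:

```yaml
zmq_filtering: True
```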

   Req Channel
       The req channel is implemented using zeromq’s req/rep sockets.  These  sockets  enforce  a
       send/recv pattern, which forces Salt to serialize messages through  these  socket  pairs.
       This means that although the interface is asynchronous on the minion, we  cannot  send  a
       second message until we have received the reply to the first.

   TCP Transport
       The  tcp  transport  is an implementation of Salt’s channels using raw tcp sockets.  Since
       this isn’t using a pre-defined messaging library  we  will  describe  the  wire  protocol,
       message semantics, etc. in this document.

       The  tcp transport is enabled by changing the transport setting to tcp on each Salt minion
       and Salt master.

          transport: tcp

       WARNING:
          We currently recommend that when using Syndics that all Masters  and  Minions  use  the
          same  transport.  We’re  investigating  a report of an error when using mixed transport
          types at very heavy loads.

   Wire Protocol
       This implementation over TCP focuses on flexibility over absolute efficiency.  This  means
       we are willing to spend a few extra bytes of wire space for future flexibility.  That
       being said, the wire framing is quite efficient and looks like:

          msgpack({'head': SOMEHEADER, 'body': SOMEBODY})

       Since msgpack is an iterably parsed serialization, we  can  simply  write  the  serialized
       payload to the wire. Within that payload we have two items, “head” and “body”.  The  head
       contains header information (such as the “message id”), and the body contains  the  actual
       message being sent. With this  flexible  wire  protocol  we  can  implement  any  message
       semantics that we’d like, including multiplexed message passing on a single socket.
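
       As a rough sketch of that envelope, the following uses json as a stand-in serializer
       (Salt itself uses msgpack, whose iterable parsing lets the payload be written straight
       to the wire); the function names are illustrative, not Salt’s:

```python
import json


def frame(header, body):
    # Wrap the message in the two-item {'head': ..., 'body': ...} envelope.
    # Salt serializes this with msgpack; json stands in for illustration.
    return json.dumps({'head': header, 'body': body}).encode()


def unframe(wire_bytes):
    # Split the envelope back into its header and body.
    payload = json.loads(wire_bytes.decode())
    return payload['head'], payload['body']


head, body = unframe(frame({'mid': 1}, {'cmd': 'test.ping'}))
```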

   TLS Support
       New in version 2016.11.1.

       The TCP transport allows for the master/minion communication to be optionally wrapped in a
       TLS connection. Enabling it is simple: the master and minion need to  be  using  the  tcp
       transport, and then the ssl option is enabled. The ssl option is passed  as  a  dict  and
       corresponds     to     the     options    passed    to    the    Python    ssl.wrap_socket
       <https://docs.python.org/2/library/ssl.html#ssl.wrap_socket> function.

       A simple setup looks like this, on the Salt Master  add  the  ssl  option  to  the  master
       configuration file:

          ssl:
            keyfile: <path_to_keyfile>
            certfile: <path_to_certfile>
            ssl_version: PROTOCOL_TLSv1_2

       The minimal ssl option in the minion configuration file looks like this:

          ssl: True
          # Versions below 2016.11.4:
          ssl: {}

       Specific  options can be sent to the minion also, as defined in the Python ssl.wrap_socket
       function.

       NOTE:
           While setting the ssl_version is not required, we recommend it. Some older  versions
           of Python do not support the latest TLS protocol; if this is  the  case  for  your
           version, we strongly recommend upgrading your version of Python.

   Crypto
       The current implementation uses the same crypto as the zeromq transport.

   Pub Channel
       For the pub channel we send messages without “message ids” which the remote end interprets
       as a one-way send.

       NOTE:
          As of today we send all publishes to all minions and rely on minion-side filtering.

   Req Channel
       For  the  req channel we send messages with a “message id”. This “message id” allows us to
       multiplex messages across the socket.
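
       A hypothetical sketch of why a “message id” enables multiplexing: pending requests are
       tracked by id, so replies can arrive in any order and still be matched to their callers
       (the class and names below are illustrative, not Salt’s actual implementation):

```python
import itertools


class Multiplexer:
    '''Match out-of-order replies to requests using a message id.'''

    def __init__(self):
        self._ids = itertools.count()
        self._pending = {}

    def send(self, body):
        # Tag each outgoing message with a unique id and remember it.
        mid = next(self._ids)
        self._pending[mid] = body
        return {'head': {'mid': mid}, 'body': body}

    def recv(self, reply):
        # Use the echoed id to match this reply to its request.
        mid = reply['head']['mid']
        return self._pending.pop(mid), reply['body']


mux = Multiplexer()
first = mux.send({'cmd': 'first'})
second = mux.send({'cmd': 'second'})
# Replies may come back in any order; the ids keep them matched.
req, resp = mux.recv({'head': {'mid': second['head']['mid']}, 'body': 'ok-2'})
```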

   The RAET Transport
       NOTE:
           The RAET transport is in very early development. It is functional,  but  no  promises
           are yet made as to its reliability or security. The encryption used has been  audited
           and our tests show that RAET is reliable; with this said,  we  are  still  conducting
           more security audits and improving reliability.  This  document  outlines  the
           encryption used in RAET.

       New in version 2014.7.0.

       The Reliable Asynchronous Event Transport, or RAET, is  an  alternative  transport  medium
       developed specifically with Salt in mind. It has been developed to allow  queuing  to
       happen at the application layer and comes with socket-layer encryption. It also  abstracts
       a great deal of control over the socket layer and makes it easy to bubble up  errors  and
       exceptions.

       RAET also offers very powerful message routing capabilities, allowing for messages  to  be
       routed  between  processes  on  a  single  machine all the way up to processes on multiple
       machines. Messages can also be restricted, allowing  processes  to  be  sent  messages  of
       specific types from specific sources allowing for trust to be established.

   Using RAET in Salt
       Using RAET in Salt is easy; the main difference is that the  core  dependencies  change.
       Instead of pycrypto, M2Crypto, ZeroMQ, and PYZMQ, the  packages  libsodium,  libnacl,
       ioflo, and raet are required. Encryption is handled  very  cleanly  by  libnacl,  while
       queueing and flow control are handled by ioflo. Distribution packages are  forthcoming,
       but libsodium can be easily installed from source, and many distributions  ship  packages
       for it. The libnacl and ioflo packages can be easily installed from  PyPI;  distribution
       packages are in the works.

       Once  the  new  deps  are  installed  the  2014.7  release  or  higher of Salt needs to be
       installed.

       Once installed, modify the configuration files for  the  minion  and  master  to  set  the
       transport to raet:

       /etc/salt/master:

          transport: raet

       /etc/salt/minion:

          transport: raet

       Now start Salt as it would normally be started; the minion will connect to the master  and
       share long-term keys, which can then be managed via salt-key. Remote execution  and  Salt
       states will function in the same way as with Salt over ZeroMQ.

   Limitations
       The  2014.7  release  of  RAET  is not complete! The Syndic and Multi Master have not been
       completed yet and these are slated for completion in the 2015.5.0 release.

       Also, Salt-Raet allows for more control over the client, but these hooks  have  not  been
       implemented yet; therefore the client still uses the same system as the  ZeroMQ  client.
       This means that the extra reliability that RAET exposes has not yet  been  implemented  in
       the CLI client.

   Why?
   Customer and User Request
       Why make an alternative transport for Salt?  There  are  many  reasons,  but  the  primary
       motivation came from customer requests. Many large companies asked to run  Salt  over  an
       alternative transport; the reasoning varied, from performance and  scaling  improvements
       to licensing concerns. These customers have partnered with SaltStack  to  make  RAET  a
       reality.

   More Capabilities
       RAET has been designed to allow Salt to have greater communication capabilities.  It  has
       been designed to allow for development into features which our  ZeroMQ  topologies  can’t
       match.

       Many  of  the  proposed features are still under development and will be announced as they
       enter proof of concept phases, but these features include salt-fuse -  a  filesystem  over
       salt, salt-vt - a parallel api driven shell over the salt transport and many others.

   RAET Reliability
       RAET is reliable, hence the name (Reliable Asynchronous Event Transport).

       The concern posed by some over RAET reliability is based on the fact that RAET  uses  UDP
       instead of TCP, and UDP does not have built-in reliability.

       RAET itself implements the needed reliability layers that are not  natively  present  in
       UDP; this allows RAET to dynamically optimize packet delivery in a way  that  keeps  it
       both reliable and asynchronous.

   RAET and ZeroMQ
       When using RAET, ZeroMQ is not required. RAET is a complete networking replacement. It  is
       noteworthy that RAET is not a ZeroMQ replacement in a  general  sense;  the  ZeroMQ
       constructs are not reproduced in RAET, but are instead implemented in  a  way  that  is
       specific to Salt’s needs.

       RAET is primarily an async communication layer over truly async connections, defaulting to
       UDP. ZeroMQ is over TCP and abstracts async constructs within the socket layer.

       Salt is not dropping ZeroMQ support and has no immediate plans to do so.

   Encryption
       RAET uses Dan Bernstein’s NaCl encryption libraries and the  CurveCP  handshake.  The
       libnacl Python binding binds to both libsodium  and  tweetnacl  to  execute  the  underlying
       cryptography. This allows us to completely rely on an  externally  developed  cryptography
       system.

   Programming Intro
   Intro to RAET Programming
       NOTE:
          This page is still under construction

       The first thing to cover is that RAET does not present a socket  API;  it  presents  a
       queueing API. All messages in RAET are made available via queues. This  is  the  single
       most differentiating factor between RAET and other networking  libraries:  instead  of
       making a socket, a stack is created. Instead of calling send() or  recv(),  messages  are
       placed on the stack to be sent, and received messages appear on the stack.

       Different kinds of stacks are also available. Currently two stacks exist: the  UDP  stack
       and the UXD stack. The UDP stack is used to communicate over UDP  sockets,  and  the  UXD
       stack is used to communicate over Unix Domain Sockets.

       The  UDP  stack  runs  a  context for communicating over networks, while the UXD stack has
       contexts for communicating between processes.

   UDP Stack Messages
       To create a UDP stack in RAET, simply create the stack, manage  the  queues,  and  process
       messages:

          from salt.transport.road.raet import stacking
          from salt.transport.road.raet import estating

          udp_stack = stacking.StackUdp(ha=('127.0.0.1', 7870))
          r_estate = estating.Estate(stack=udp_stack, name='foo', ha=('192.168.42.42', 7870))
          msg = {'hello': 'world'}
          udp_stack.transmit(msg, udp_stack.estates[r_estate.name])
          udp_stack.serviceAll()

   Master Tops System
       In  0.10.4  the  external_nodes  system was upgraded to allow for modular subsystems to be
       used to generate the top file data for a highstate run on the master.

       The old external_nodes option  has  been  removed.  The  master  tops  system  provides  a
       pluggable and extendable replacement for it, allowing for multiple different subsystems to
       provide top file data.

       Using the new master_tops option is simple:

          master_tops:
            ext_nodes: cobbler-external-nodes

       for Cobbler or:

          master_tops:
            reclass:
              inventory_base_uri: /etc/reclass
              classes_uri: roles

       for Reclass.

          master_tops:
            varstack: /path/to/the/config/file/varstack.yaml

       for Varstack.

       It’s  also  possible  to  create  custom  master_tops  modules.  Simply  place  them  into
       salt://_tops in the Salt fileserver and use the saltutil.sync_tops runner to sync them. If
       this runner function is not available, they can  manually  be  placed  into  extmods/tops,
       relative   to   the   master   cachedir   (in   most   cases   the   full   path  will  be
       /var/cache/salt/master/extmods/tops).

       Custom tops modules are written like any other execution module, see the  source  for  the
       two modules above for examples of fully functional ones. Below is a bare-bones example:

       /etc/salt/master:

          master_tops:
            customtop: True

       customtop.py: (custom master_tops module)

          import logging

          # Define the module's virtual name
          __virtualname__ = 'customtop'

          log = logging.getLogger(__name__)

          def __virtual__():
              return __virtualname__

          def top(**kwargs):
              log.debug('Calling top in customtop')
              return {'base': ['test']}

       salt minion state.show_top should then display something like:

          $ salt minion state.show_top

          minion
              ----------
              base:
                - test

       NOTE:
          If  a  master_tops module returns top file data for a given minion, it will be added to
          the states configured in the top file. It will not replace it altogether. The  2018.3.0
          release  adds  additional  functionality  allowing a minion to treat master_tops as the
          single source of truth, irrespective of the top file.

   Returners
       By default the return values of the commands sent to the Salt minions are returned to  the
       Salt master; however, anything at all can be done with the results data.

       By  using  a  Salt  returner,  results  data can be redirected to external data-stores for
       analysis and archival.

       Returners pull their configuration values  from  the  Salt  minions.  Returners  are  only
       configured once, which is generally at load time.

       The  returner  interface  allows the return data to be sent to any system that can receive
       data. This means that return data can be sent to a Redis server, a MongoDB server, a MySQL
       server, or any system.

       SEE ALSO:
          Full list of builtin returners

   Using Returners
       All  Salt  commands  will return the command data back to the master. Specifying returners
       will ensure that the data is _also_ sent to the specified returner interfaces.

       Specifying what returners to use is done when the command is invoked:

          salt '*' test.ping --return redis_return

       This command will ensure that the redis_return returner is used.

       It is also possible to specify multiple returners:

          salt '*' test.ping --return mongo_return,redis_return,cassandra_return

       In this scenario all three returners will be  called  and  the  data  from  the  test.ping
       command will be sent out to the three named returners.

   Writing a Returner
       Returners  are  Salt  modules  that allow the redirection of results data to targets other
       than the Salt Master.

   Returners Are Easy To Write!
       Writing a Salt returner is straightforward.

       A returner is a Python module containing at minimum a returner function.   Other  optional
       functions  can  be  included  to add support for master_job_cache, external-job-cache, and
       Event Returners.

       returner
              The returner function must accept a single argument. The argument  contains  return
              data  from  the called minion function. If the minion function test.ping is called,
              the value of the argument will be a dictionary. Run the following  command  from  a
              Salt master to get a sample of the dictionary:

          salt-call --local --metadata test.ping --out=pprint

          import redis
          import salt.utils.json

          def returner(ret):
              '''
              Return information to a redis server
              '''
              # Get a redis connection
              serv = redis.Redis(
                  host='redis-serv.example.com',
                  port=6379,
                  db='0')
              serv.sadd("%(id)s:jobs" % ret, ret['jid'])
              serv.set("%(jid)s:%(id)s" % ret, salt.utils.json.dumps(ret['return']))
              serv.sadd('jobs', ret['jid'])
              serv.sadd(ret['jid'], ret['id'])

       The returner in the example above sends the return data to a Redis  server,  serializing
       it as JSON and setting it in Redis.

   Using Custom Returner Modules
       Place custom returners in a _returners/ directory within the file_roots specified  by  the
       master config file.

       Custom returners are distributed when any of the following are called:

       · state.apply

       · saltutil.sync_returners

       · saltutil.sync_all

       Any  custom returners which have been synced to a minion that are named the same as one of
       Salt’s default set of returners will take the place of the default returner with the  same
       name.

   Naming the Returner
       Note  that  a  returner’s default name is its filename (i.e. foo.py becomes returner foo),
       but that its name can be overridden by using a __virtual__ function.  A  good  example  of
       this  can  be found in the redis returner, which is named redis_return.py but is loaded as
       simply redis:

          try:
              import redis
              HAS_REDIS = True
          except ImportError:
              HAS_REDIS = False

          __virtualname__ = 'redis'

          def __virtual__():
              if not HAS_REDIS:
                  return False
              return __virtualname__

   Master Job Cache Support
       See also: master_job_cache, external-job-cache, and Event Returners.

       Salt’s master_job_cache allows returners to be used as a pluggable replacement for the
       default_job_cache. To be used this way, a returner must implement the following functions:

       NOTE:
          The code samples contained in this section were taken from the cassandra_cql returner.

       prep_jid
              Ensures that job ids (jid) don’t collide, unless passed_jid is provided.

               nocache is an optional boolean that indicates whether return data should be cached.
               passed_jid is a caller-provided jid which should be returned unconditionally.

          def prep_jid(nocache, passed_jid=None):  # pylint: disable=unused-argument
              '''
              Do any work necessary to prepare a JID, including sending a custom id
              '''
              return passed_jid if passed_jid is not None else salt.utils.jid.gen_jid()

       save_load
              Save  job information.  The jid is generated by prep_jid and should be considered a
              unique identifier for the  job.  The  jid,  for  example,  could  be  used  as  the
              primary/unique  key in a database. The load is what is returned to a Salt master by
              a minion. minions is a list of minions that the job was run against. The  following
              code example stores the load as a JSON string in the salt.jids table.

           import logging

           import salt.utils.json
           from salt.exceptions import CommandExecutionError

           log = logging.getLogger(__name__)

          def save_load(jid, load, minions=None):
              '''
              Save the load to the specified jid id
              '''
              query = '''INSERT INTO salt.jids (
                           jid, load
                         ) VALUES (
                           '{0}', '{1}'
                         );'''.format(jid, salt.utils.json.dumps(load))

              # cassandra_cql.cql_query may raise a CommandExecutionError
              try:
                  __salt__['cassandra_cql.cql_query'](query)
              except CommandExecutionError:
                  log.critical('Could not save load in jids table.')
                  raise
              except Exception as e:
                  log.critical(
                      'Unexpected error while inserting into jids: {0}'.format(e)
                  )
                  raise

       get_load
              must accept a job id (jid) and return the job load stored by save_load, or an empty
              dictionary when not found.

          def get_load(jid):
              '''
              Return the load data that marks a specified jid
              '''
              query = '''SELECT load FROM salt.jids WHERE jid = '{0}';'''.format(jid)

              ret = {}

              # cassandra_cql.cql_query may raise a CommandExecutionError
              try:
                  data = __salt__['cassandra_cql.cql_query'](query)
                  if data:
                      load = data[0].get('load')
                      if load:
                           ret = salt.utils.json.loads(load)
              except CommandExecutionError:
                  log.critical('Could not get load from jids table.')
                  raise
              except Exception as e:
                  log.critical('''Unexpected error while getting load from
                   jids: {0}'''.format(str(e)))
                  raise

              return ret

   External Job Cache Support
       Salt’s  external-job-cache  extends  the  master_job_cache.  External  Job  Cache  support
       requires  the  following  functions  in  addition to what is required for Master Job Cache
       support:

       get_jid
              Return a dictionary containing the information (load) returned by each minion  when
              the specified job id was executed.

       Sample:

          {
              "local": {
                  "master_minion": {
                      "fun_args": [],
                      "jid": "20150330121011408195",
                      "return": true,
                      "retcode": 0,
                      "success": true,
                      "cmd": "_return",
                      "_stamp": "2015-03-30T12:10:12.708663",
                      "fun": "test.ping",
                      "id": "master_minion"
                  }
              }
          }

       get_fun
              Return  a  dictionary  of  minions  that called a given Salt function as their last
              function call.

       Sample:

          {
              "local": {
                  "minion1": "test.ping",
                  "minion3": "test.ping",
                  "minion2": "test.ping"
              }
          }

       get_jids
              Return a list of all job ids.

       Sample:

          {
              "local": [
                  "20150330121011408195",
                  "20150330195922139916"
              ]
          }

       get_minions
               Return a list of minions.

       Sample:

          {
               "local": [
                   "minion3",
                   "minion2",
                   "minion1",
                   "master_minion"
               ]
          }

       Please refer to one or more of the existing returners (e.g. mysql, cassandra_cql) if you
       need further clarification.
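       As an illustration of how these functions fit together, the following minimal sketch
       implements the whole job cache interface against in-memory dictionaries. This is purely
       illustrative: a real returner persists to external storage, and the store names _JOBS,
       _RETURNS, and _LAST_FUN are invented here.

```python
from datetime import datetime

# In-memory stores standing in for an external database (illustrative only)
_JOBS = {}      # jid -> load saved by save_load
_RETURNS = {}   # jid -> {minion_id: full return}
_LAST_FUN = {}  # minion_id -> last function called


def prep_jid(nocache=False, passed_jid=None):
    '''Return passed_jid unchanged, or generate a fresh job id.'''
    if passed_jid is not None:
        return passed_jid
    return datetime.utcnow().strftime('%Y%m%d%H%M%S%f')


def save_load(jid, load, minions=None):
    '''Save the job load keyed on the jid.'''
    _JOBS[jid] = load


def get_load(jid):
    '''Return the load stored by save_load, or an empty dict.'''
    return _JOBS.get(jid, {})


def returner(ret):
    '''Record a single minion's return for its jid.'''
    _RETURNS.setdefault(ret['jid'], {})[ret['id']] = ret
    _LAST_FUN[ret['id']] = ret['fun']


def get_jid(jid):
    '''Return {minion_id: return data} for the given jid.'''
    return _RETURNS.get(jid, {})


def get_fun(fun):
    '''Return minions whose last function call was fun.'''
    return {mid: f for mid, f in _LAST_FUN.items() if f == fun}


def get_jids():
    '''Return a list of all job ids.'''
    return list(_JOBS)


def get_minions():
    '''Return a list of all minions that have returned data.'''
    return sorted(_LAST_FUN)
```

       Exercising the sketch mirrors what Salt does: prep_jid and save_load run when a job is
       published, returner records each minion’s result, and the get_* functions query the cache.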

   Event Support
       An event_return function must be added to the returner module to allow events to be logged
       from a master via the returner. A list of events is passed to the function by the master.

       The following example was taken from the MySQL returner. In this example,  each  event  is
       inserted  into  the salt_events table keyed on the event tag. The tag contains the jid and
       therefore is guaranteed to be unique.

          import salt.utils.json

           def event_return(events):
               '''
               Return event to mysql server

               Requires that configuration be enabled via 'event_return'
               option in master config.
               '''
               with _get_serv(events, commit=True) as cur:
                   for event in events:
                       tag = event.get('tag', '')
                       data = event.get('data', '')
                       sql = '''INSERT INTO `salt_events` (`tag`, `data`, `master_id`)
                                VALUES (%s, %s, %s)'''
                       cur.execute(sql, (tag, salt.utils.json.dumps(data), __opts__['id']))

   Testing the Returner
       The returner, prep_jid, save_load, get_load, and event_return functions can be tested by
       configuring the master_job_cache and Event Returners in the master config file and
       submitting a test.ping job to each minion from the master.

       Once you have successfully exercised the Master Job Cache functions, test the External Job
       Cache functions using the ret execution module.

          salt-call ret.get_jids cassandra_cql --output=json
          salt-call ret.get_fun cassandra_cql test.ping --output=json
          salt-call ret.get_minions cassandra_cql --output=json
          salt-call ret.get_jid cassandra_cql 20150330121011408195 --output=json

   Event Returners
       For maximum visibility into the history of events across a Salt infrastructure, all events
       seen by a salt master may be logged to one or more returners.

       To enable event logging, set the event_return configuration option in the master config to
       the returner(s) which should be designated as the handler for event returns.
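       For example, a master config routing events to two returners might look like the following
       (the returner names are examples; use whichever event-capable returners you have
       configured):

```yaml
# /etc/salt/master
event_return:
  - mysql
  - syslog
```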

       NOTE:
          Not  all  returners  support  event  returns.  Verify  a returner has an event_return()
          function before using.

       NOTE:
          On larger installations, many hundreds of events may be  generated  on  a  busy  master
          every second. Be certain to closely monitor the storage of a given returner as Salt can
          easily overwhelm an underpowered server with thousands of returns.

   Full List of Returners
   returner modules
                       ┌─────────────────────┬──────────────────────────────────┐
                       │carbon_return        │ Take data from salt and “return” │
                       │                     │ it into a carbon receiver        │
                       ├─────────────────────┼──────────────────────────────────┤
                       │cassandra_cql_return │ Return   data   to  a  cassandra │
                       │                     │ server                           │
                       ├─────────────────────┼──────────────────────────────────┤
                       │cassandra_return     │ Return  data  to   a   Cassandra │
                       │                     │ ColumnFamily                     │
                       ├─────────────────────┼──────────────────────────────────┤
                       │couchbase_return     │ Simple returner for Couchbase.   │
                       ├─────────────────────┼──────────────────────────────────┤
                       │couchdb_return       │ Simple returner for CouchDB.     │
                       ├─────────────────────┼──────────────────────────────────┤
                       │django_return        │ A  returner  that  will inform a │
                       │                     │ Django system that  returns  are │
                       │                     │ available  using Django’s signal │
                       │                     │ system.                          │
                       ├─────────────────────┼──────────────────────────────────┤
                       │elasticsearch_return │ Return data to an  elasticsearch │
                       │                     │ server for indexing.             │
                       ├─────────────────────┼──────────────────────────────────┤
                       │etcd_return          │ Return data to an etcd server or │
                       │                     │ cluster                          │
                       ├─────────────────────┼──────────────────────────────────┤
                       │highstate_return     │ Return   the   results   of    a │
                       │                     │ highstate  (or  any  other state │
                       │                     │ function that returns data in  a │
                       │                     │ compatible  format)  via an HTML │
                       │                     │ email or HTML file.              │
                       ├─────────────────────┼──────────────────────────────────┤
                       │hipchat_return       │ Return salt data via hipchat.    │
                       ├─────────────────────┼──────────────────────────────────┤
                       │influxdb_return      │ Return  data  to   an   influxdb │
                       │                     │ server.                          │
                       ├─────────────────────┼──────────────────────────────────┤
                       │kafka_return         │ Return data to a Kafka topic     │
                       ├─────────────────────┼──────────────────────────────────┤
                       │librato_return       │ Salt    returner    to    return │
                       │                     │ highstate stats to Librato       │
                       ├─────────────────────┼──────────────────────────────────┤
                       │local                │ The local returner  is  used  to │
                       │                     │ test  the returner interface, it │
                       │                     │ just prints the return  data  to │
                       │                     │ the console to verify that it is │
                       │                     │ being passed properly            │
                       ├─────────────────────┼──────────────────────────────────┤
                       │local_cache          │ Return data to local job cache   │
                       ├─────────────────────┼──────────────────────────────────┤
                       │mattermost_returner  │ Return salt data via mattermost  │
                       ├─────────────────────┼──────────────────────────────────┤
                       │memcache_return      │ Return data to a memcache server │
                       ├─────────────────────┼──────────────────────────────────┤
                       │mongo_future_return  │ Return data to a mongodb server  │
                       ├─────────────────────┼──────────────────────────────────┤
                       │mongo_return         │ Return data to a mongodb server  │
                       ├─────────────────────┼──────────────────────────────────┤
                       │multi_returner       │ Read/Write multiple returners    │
                       ├─────────────────────┼──────────────────────────────────┤
                       │mysql                │ Return data to a mysql server    │
                       ├─────────────────────┼──────────────────────────────────┤
                       │nagios_return        │ Return salt data to Nagios       │
                       ├─────────────────────┼──────────────────────────────────┤
                       │odbc                 │ Return data to an ODBC compliant │
                       │                     │ server.                          │
                       ├─────────────────────┼──────────────────────────────────┤
                       │pgjsonb              │ Return   data  to  a  PostgreSQL │
                       │                     │ server with json data stored  in │
                       │                     │ Pg’s jsonb data type             │
                       ├─────────────────────┼──────────────────────────────────┤
                       │postgres             │ Return   data  to  a  postgresql │
                       │                     │ server                           │
                       ├─────────────────────┼──────────────────────────────────┤
                       │postgres_local_cache │ Use a postgresql server for  the │
                       │                     │ master job cache.                │
                       ├─────────────────────┼──────────────────────────────────┤
                       │pushover_returner    │ Return salt data via pushover (‐ │
                       │                     │ http://www.pushover.net)         │
                       ├─────────────────────┼──────────────────────────────────┤
                       │rawfile_json         │ Take data from salt and “return” │
                       │                     │ it  into  a  raw file containing │
                       │                     │ the  json,  with  one  line  per │
                       │                     │ event.                           │
                       ├─────────────────────┼──────────────────────────────────┤
                       │redis_return         │ Return data to a redis server    │
                       ├─────────────────────┼──────────────────────────────────┤
                       │sentry_return        │ Salt   returner   that   reports │
                       │                     │ execution   results   back    to │
                       │                     │ sentry.                          │
                       ├─────────────────────┼──────────────────────────────────┤
                       │slack_returner       │ Return salt data via slack       │
                       ├─────────────────────┼──────────────────────────────────┤
                       │sms_return           │ Return data by SMS.              │
                       ├─────────────────────┼──────────────────────────────────┤
                       │smtp_return          │ Return salt data via email       │
                       ├─────────────────────┼──────────────────────────────────┤
                       │splunk               │ Send   json   response  data  to │
                       │                     │ Splunk  via   the   HTTP   Event │
                       │                     │ Collector Requires the following │
                       │                     │ config values to be specified in │
                       │                     │ config or pillar:                │
                       ├─────────────────────┼──────────────────────────────────┤
                       │sqlite3_return       │ Insert minion return data into a │
                       │                     │ sqlite3 database                 │
                        ├─────────────────────┼──────────────────────────────────┤
                        │syslog_return        │ Return   data   to   the    host │
                       │                     │ operating     system’s    syslog │
                       │                     │ facility                         │
                       ├─────────────────────┼──────────────────────────────────┤
                       │telegram_return      │ Return salt data via Telegram.   │
                       ├─────────────────────┼──────────────────────────────────┤
                       │xmpp_return          │ Return salt data via xmpp        │
                       ├─────────────────────┼──────────────────────────────────┤
                       │zabbix_return        │ Return salt data to Zabbix       │
                       └─────────────────────┴──────────────────────────────────┘

   salt.returners.carbon_return
       Take data from salt and “return” it into a carbon receiver

       Add the following configuration to the minion configuration file:

          carbon.host: <server ip address>
          carbon.port: 2003

       Errors  when  trying  to  convert  data   to   numbers   may   be   ignored   by   setting
       carbon.skip_on_error to True:

          carbon.skip_on_error: True

       By  default,  data  will be sent to carbon using the plaintext protocol. To use the pickle
       protocol, set carbon.mode to pickle:

          carbon.mode: pickle

       You can also specify the pattern used for the metric base path (except  for  virt  modules
       metrics):
              carbon.metric_base_pattern: carbon.[minion_id].[module].[function]

       The following tokens can be used:
              [module]: the salt module
              [function]: the salt function
              [minion_id]: the minion id

       The default is:
              carbon.metric_base_pattern: [module].[function].[minion_id]

       Carbon settings may also be configured as:

          carbon:
            host: <server IP or hostname>
            port: <carbon port>
            skip_on_error: True
            mode: (pickle|text)
            metric_base_pattern: <pattern> | [module].[function].[minion_id]

       Alternative configuration values can be used by prefacing the configuration keys with an
       alternative name. Any values not found in the alternative configuration will be pulled
       from the default location:

          alternative.carbon:
            host: <server IP or hostname>
            port: <carbon port>
            skip_on_error: True
            mode: (pickle|text)

       To use the carbon returner, append --return carbon to the salt command.

          salt '*' test.ping --return carbon

       To use the alternative configuration, append --return_config alternative to the salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return carbon --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}' to
       the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return carbon --return_kwargs '{"skip_on_error": False}'

       salt.returners.carbon_return.event_return(events)
              Return event data to remote carbon server

              Provide a list of events to be stored in carbon

       salt.returners.carbon_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.carbon_return.returner(ret)
              Return data to a remote carbon server using the text metric protocol

              Each metric will look like:

                 [module].[function].[minion_id].[metric path [...]].[metric name]

   salt.returners.cassandra_cql_return
       Return data to a cassandra server

       New in version 2015.5.0.

       maintainer
              Corin Kochenower<ckochenower@saltstack.com>

       maturity
              new as of 2015.2

       depends
              salt.modules.cassandra_cql

       depends
              DataStax         Python         Driver         for         Apache         Cassandra
              https://github.com/datastax/python-driver pip install cassandra-driver

       platform
              all

       configuration
              To enable this returner, the minion will need the DataStax Python Driver for Apache
              Cassandra ( https://github.com/datastax/python-driver ) installed and the following
              values  configured  in  the  minion  or master config. The list of cluster IPs must
              include at least one cassandra node IP address. No assumption or  default  will  be
              used  for  the cluster IPs.  The cluster IPs will be tried in the order listed. The
               port, username, and password values shown below will be the assumed defaults if you
               do not provide values:

                 cassandra:
                   cluster:
                     - 192.168.50.11
                     - 192.168.50.12
                     - 192.168.50.13
                   port: 9042
                   username: salt
                   password: salt

              Use the following cassandra database schema:

                 CREATE KEYSPACE IF NOT EXISTS salt
                     WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 1};

                 CREATE USER IF NOT EXISTS salt WITH PASSWORD 'salt' NOSUPERUSER;

                 GRANT ALL ON KEYSPACE salt TO salt;

                 USE salt;

                 CREATE TABLE IF NOT EXISTS salt.salt_returns (
                     jid text,
                     minion_id text,
                     fun text,
                     alter_time timestamp,
                     full_ret text,
                     return text,
                     success boolean,
                     PRIMARY KEY (jid, minion_id, fun)
                 ) WITH CLUSTERING ORDER BY (minion_id ASC, fun ASC);
                 CREATE INDEX IF NOT EXISTS salt_returns_minion_id ON salt.salt_returns (minion_id);
                 CREATE INDEX IF NOT EXISTS salt_returns_fun ON salt.salt_returns (fun);

                 CREATE TABLE IF NOT EXISTS salt.jids (
                     jid text PRIMARY KEY,
                     load text
                 );

                 CREATE TABLE IF NOT EXISTS salt.minions (
                     minion_id text PRIMARY KEY,
                     last_fun text
                 );
                 CREATE INDEX IF NOT EXISTS minions_last_fun ON salt.minions (last_fun);

                 CREATE TABLE IF NOT EXISTS salt.salt_events (
                     id timeuuid,
                     tag text,
                     alter_time timestamp,
                     data text,
                     master_id text,
                     PRIMARY KEY (id, tag)
                 ) WITH CLUSTERING ORDER BY (tag ASC);
                 CREATE INDEX tag ON salt.salt_events (tag);

       Required python modules: cassandra-driver

        To use the cassandra_cql returner, append --return cassandra_cql to the salt command.
        Example:

           salt '*' test.ping --return cassandra_cql

        Note: if your Cassandra instance has not been tuned, you may benefit from altering some
        timeouts in cassandra.yaml, like so:

          # How long the coordinator should wait for read operations to complete
          read_request_timeout_in_ms: 5000
          # How long the coordinator should wait for seq or index scans to complete
          range_request_timeout_in_ms: 20000
          # How long the coordinator should wait for writes to complete
          write_request_timeout_in_ms: 20000
          # How long the coordinator should wait for counter writes to complete
          counter_write_request_timeout_in_ms: 10000
          # How long a coordinator should continue to retry a CAS operation
          # that contends with other proposals for the same row
          cas_contention_timeout_in_ms: 5000
          # How long the coordinator should wait for truncates to complete
          # (This can be much longer, because unless auto_snapshot is disabled
          # we need to flush first so we can snapshot before removing the data.)
          truncate_request_timeout_in_ms: 60000
          # The default timeout for other, miscellaneous operations
          request_timeout_in_ms: 20000

       As always, your mileage may vary and your Cassandra  cluster  may  have  different  needs.
       SaltStack  has  seen  situations  where  these  timeouts can resolve some stacktraces that
       appear to come from the Datastax Python driver.

       salt.returners.cassandra_cql_return.event_return(events)
              Return event to one of potentially many clustered cassandra nodes

              Requires that configuration be enabled via ‘event_return’ option in master config.

              Cassandra does not support an auto-increment feature due to the highly  inefficient
              nature  of  creating  a  monotonically  increasing  number  across  all  nodes in a
              distributed database. Each event will be assigned a uuid by the connecting client.

       salt.returners.cassandra_cql_return.get_fun(fun)
              Return a dict of the last function called for all minions

       salt.returners.cassandra_cql_return.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.cassandra_cql_return.get_jids()
              Return a list of all job ids

       salt.returners.cassandra_cql_return.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.cassandra_cql_return.get_minions()
              Return a list of minions

       salt.returners.cassandra_cql_return.prep_jid(nocache, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.cassandra_cql_return.returner(ret)
              Return data to one of potentially many clustered cassandra nodes

       salt.returners.cassandra_cql_return.save_load(jid, load, minions=None)
              Save the load to the specified jid id

   salt.returners.cassandra_return
       Return data to a Cassandra ColumnFamily

       Here’s an example Keyspace / ColumnFamily setup that works with this returner:

          create keyspace salt;
          use salt;
          create column family returns
            with key_validation_class='UTF8Type'
            and comparator='UTF8Type'
            and default_validation_class='UTF8Type';

        Required python modules: pycassa

        To use the cassandra returner, append --return cassandra to the salt command. Example:

           salt '*' test.ping --return cassandra

       salt.returners.cassandra_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.cassandra_return.returner(ret)
              Return data to a Cassandra ColumnFamily

   salt.returners.couchbase_return
       Simple returner for Couchbase. Optional configuration settings  are  listed  below,  along
       with sane defaults.

          couchbase.host:   'salt'
          couchbase.port:   8091
          couchbase.bucket: 'salt'
          couchbase.ttl: 24
          couchbase.password: 'password'
          couchbase.skip_verify_views: False

        To use the couchbase returner, append --return couchbase to the salt command. Example:

          salt '*' test.ping --return couchbase

        To use the alternative configuration, append --return_config alternative to the salt
        command.

       New in version 2015.5.0.

          salt '*' test.ping --return couchbase --return_config alternative

        To override individual configuration items, append --return_kwargs '{"key": "value"}' to
        the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return couchbase --return_kwargs '{"bucket": "another-salt"}'

       All of the return data will be stored in documents as follows:

   JID
        load: the load object
        tgt_minions: list of minions targeted
        nocache: whether the return data should not be cached

   JID/MINION_ID
        return: the return data
        full_ret: the full load of the job return

       salt.returners.couchbase_return.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.couchbase_return.get_jids()
              Return a list of all job ids

       salt.returners.couchbase_return.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.couchbase_return.prep_jid(nocache=False, passed_jid=None)
               Return a job id and prepare the job id directory. This is the function responsible
               for making sure jids don’t collide; if a jid is passed in, it is returned as-is.

       salt.returners.couchbase_return.returner(load)
              Return data to couchbase bucket

       salt.returners.couchbase_return.save_load(jid, clear_load, minion=None)
              Save the load to the specified jid

       salt.returners.couchbase_return.save_minions(jid, minions, syndic_id=None)
              Save/update the minion list for a given jid. The syndic_id argument is included for
              API compatibility only.

   salt.returners.couchdb_return
       Simple  returner for CouchDB. Optional configuration settings are listed below, along with
       sane defaults:

          couchdb.db: 'salt'
          couchdb.url: 'http://salt:5984/'

       Alternative configuration values can be used by prefacing the configuration keys with an
       alternative name. Any values not found in the alternative configuration will be pulled
       from the default location:

          alternative.couchdb.db: 'salt'
          alternative.couchdb.url: 'http://salt:5984/'

       To use the couchdb returner, append --return couchdb to the salt command. Example:

          salt '*' test.ping --return couchdb

       To  use  the  alternative  configuration,  append  --return_config alternative to the salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return couchdb --return_config alternative

        To override individual configuration items, append --return_kwargs '{"key": "value"}' to
        the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return couchdb --return_kwargs '{"db": "another-salt"}'

   On concurrent database access
       As this returner creates a couchdb document with the salt job id as the document id, and
       as only one document with a given id can exist in a given couchdb database, it is advised
       for most setups that every minion be configured to write to its own database (the value of
       couchdb.db may be suffixed with the minion id); otherwise multi-minion targeting can lead
       to lost output:

       · the first returning minion is able to create a document in the database

       · other minions fail with {'error': 'HTTP Error 409: Conflict'}
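       Following the advice above, a per-minion minion config might look like the following (the
       database suffix _minion1 is illustrative; each minion gets its own value):

```yaml
# minion config for minion1 (illustrative)
couchdb.db: 'salt_minion1'
couchdb.url: 'http://salt:5984/'
```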

       salt.returners.couchdb_return.ensure_views()
              This  function  makes  sure  that  all  the  views  that should exist in the design
              document do exist.

       salt.returners.couchdb_return.get_fun(fun)
              Return a dict with key being minion and value being the job details of the last run
              of function ‘fun’.

       salt.returners.couchdb_return.get_jid(jid)
              Get the document with a given JID.

       salt.returners.couchdb_return.get_jids()
               List all the jobs that we have.

       salt.returners.couchdb_return.get_minions()
              Return a list of minion identifiers from a request of the view.

       salt.returners.couchdb_return.get_valid_salt_views()
              Returns a dict object of views that should be part of the salt design document.

       salt.returners.couchdb_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.couchdb_return.returner(ret)
              Take in the return and shove it into the couchdb database.

       salt.returners.couchdb_return.set_salt_view()
              Helper  function  that sets the salt design document. Uses get_valid_salt_views and
              some hardcoded values.

   salt.returners.django_return
       A returner that will inform a Django system that  returns  are  available  using  Django’s
       signal system.

       https://docs.djangoproject.com/en/dev/topics/signals/

       It  is up to the Django developer to register necessary handlers with the signals provided
       by this returner and process returns as necessary.

       The easiest way to use signals is to import them from this returner directly and then  use
       a decorator to register them.

       An  example  Django  module that registers a function called ‘returner_callback’ with this
       module’s ‘returner’ function:

          import salt.returners.django_return
          from django.dispatch import receiver

           @receiver(salt.returners.django_return, sender='returner')
          def returner_callback(sender, ret):
              print('I received {0} from {1}'.format(ret, sender))

       salt.returners.django_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom ID

       salt.returners.django_return.returner(ret)
              Signal a Django server that a return is available

       salt.returners.django_return.save_load(jid, load, minions=None)
              Save the load to the specified jid

   salt.returners.elasticsearch_return
       Return data to an elasticsearch server for indexing.

       maintainer
              Jurnell   Cockhren   <jurnell.cockhren@sophicware.com>,   Arnold    Bechtoldt    <‐
              mail@arnoldbechtoldt.com>

       maturity
              New

       depends
              elasticsearch-py

       platform
              all

       To  enable  this returner the elasticsearch python client must be installed on the desired
       minions (all or some subset).

        Please see the documentation of the elasticsearch execution module for a valid connection
        configuration.

       WARNING:
           The index in which you wish to store  documents  will  be  created  by  Elasticsearch
           automatically if it doesn't exist yet. It is highly recommended to create predefined
           index templates with appropriate mapping(s) that will be used by  Elasticsearch  upon
           index creation. Otherwise you will encounter the problems described in #20826.

       To use the returner per salt call:

          salt '*' test.ping --return elasticsearch

       In order to have the returner apply to all minions:

          ext_job_cache: elasticsearch

       Minion configuration:

               debug_returner_payload: False
                     Output the payload being posted to the log file in debug mode

              doc_type: ‘default’
                     Document type to use for normal return messages

              functions_blacklist
                     Optional list of functions that should not be returned to elasticsearch

              index_date: False
                     Use a dated index (e.g. <index>-2016.11.29)

              master_event_index: ‘salt-master-event-cache’
                     Index to use when returning master events

               master_event_doc_type: 'default'
                      Document type to use for master events

              master_job_cache_index: ‘salt-master-job-cache’
                     Index to use for master job cache

              master_job_cache_doc_type: ‘default’
                     Document type to use for master job cache

              number_of_shards: 1
                     Number of shards to use for the indexes

              number_of_replicas: 0
                     Number of replicas to use for the indexes

              NOTE:  The  following  options  are  valid  for  ‘state.apply’,   ‘state.sls’   and
              ‘state.highstate’ functions only.

              states_count: False
                      Count the number of states which succeeded or failed and return them in a
                      top-level item called 'counts'. States reporting None (i.e. changes would
                      be made but it ran in test mode) are counted as successes.

              states_order_output: False
                     Prefix  the state UID (e.g. file_|-yum_configured_|-/etc/yum.conf_|-managed)
                     with a zero-padded version of the ‘__run_num__’ value to  allow  for  easier
                     sorting.  Also  store  the state function (i.e. file.managed) into a new key
                     ‘_func’.    Change    the    index    to    be    ‘<index>-ordered’    (e.g.
                     salt-state_apply-ordered).

              states_single_index: False
                      Store results for state.apply, state.sls and state.highstate in the
                      salt-state_apply index (or its -ordered/-<date> variants) if enabled
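The states_count accounting described above can be sketched as follows (a simplified stand-in for illustration, not the returner's actual code):

```python
def count_states(states):
    """Count succeeded/failed states; a result of None (test mode,
    changes would be made) is counted as a success."""
    counts = {'succeeded': 0, 'failed': 0}
    for state in states.values():
        if state.get('result') is False:
            counts['failed'] += 1
        else:
            # True and None both count as successes
            counts['succeeded'] += 1
    return counts
```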

        Example minion configuration:

           elasticsearch:
              hosts:
                - "10.10.10.10:9200"
                - "10.10.10.11:9200"
                - "10.10.10.12:9200"
              index_date: True
              number_of_shards: 5
              number_of_replicas: 1
              debug_returner_payload: True
              states_count: True
              states_order_output: True
              states_single_index: True
              functions_blacklist:
                - test.ping
                - saltutil.find_job

       salt.returners.elasticsearch_return.event_return(events)
              Return events to Elasticsearch

              Requires that the event_return configuration be set in master config.

       salt.returners.elasticsearch_return.get_load(jid)
              Return the load data that marks a specified jid

              New in version 2015.8.1.

       salt.returners.elasticsearch_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.elasticsearch_return.returner(ret)
              Process the return from Salt

       salt.returners.elasticsearch_return.save_load(jid, load, minions=None)
               Save the load to the specified jid

              New in version 2015.8.1.

   salt.returners.etcd_return
       Return data to an etcd server or cluster

       depends

              · python-etcd

       In order to return to  an  etcd  server,  a  profile  should  be  created  in  the  master
       configuration file:

          my_etcd_config:
            etcd.host: 127.0.0.1
            etcd.port: 4001

       It  is  technically  possible  to  configure etcd without using a profile, but this is not
       considered to be a best practice, especially when multiple etcd servers  or  clusters  are
       available.

          etcd.host: 127.0.0.1
          etcd.port: 4001

       Additionally,  two  more options must be specified in the top-level configuration in order
       to use the etcd returner:

          etcd.returner: my_etcd_config
          etcd.returner_root: /salt/return

       The  etcd.returner  option   specifies   which   configuration   profile   to   use.   The
       etcd.returner_root  option  specifies  the  path  inside  etcd  to  use as the root of the
       returner system.

       Once the etcd options are configured, the returner may be used:

        CLI Example:

           salt '*' test.ping --return etcd

       A username and password can be set:

          etcd.username: larry  # Optional; requires etcd.password to be set
          etcd.password: 123pass  # Optional; requires etcd.username to be set

       You can also set a TTL (time to live) value for the returner:

          etcd.ttl: 5

       Authentication with username and password, and ttl, currently requires the  master  branch
       of python-etcd.

       You  may  also  specify  different  roles for read and write operations. First, create the
       profiles as specified above. Then add:

          etcd.returner_read_profile: my_etcd_read
          etcd.returner_write_profile: my_etcd_write

       salt.returners.etcd_return.get_fun()
              Return a dict of the last function called for all minions

       salt.returners.etcd_return.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.etcd_return.get_jids()
              Return a list of all job ids

       salt.returners.etcd_return.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.etcd_return.get_minions()
              Return a list of minions

       salt.returners.etcd_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.etcd_return.returner(ret)
              Return data to an etcd server or cluster

       salt.returners.etcd_return.save_load(jid, load, minions=None)
              Save the load to the specified jid

   salt.returners.highstate_return module
       Return the results of a highstate (or any other state function  that  returns  data  in  a
       compatible format) via an HTML email or HTML file.

       New in version 2017.7.0.

        Similar results can be achieved by using the smtp returner with a custom template, but
        writing such a template for the complex data structure returned by the highstate function
        has proven to be a challenge, not to mention that the smtp module doesn't currently
        support sending HTML mail.

        The main goal of this returner is to produce an easy-to-read email similar to the output
        of the highstate outputter used by the CLI.

       This  returner  could be very useful during scheduled executions, but could also be useful
       for communicating the results of a manual execution.

       Returner configuration is controlled in a standard fashion either via highstate  group  or
       an alternatively named group.

          salt '*' state.highstate --return highstate

        To use the alternative configuration, append '--return_config config-name' to the salt
        command:

          salt '*' state.highstate --return highstate --return_config simple

       Here is an example of what the configuration might look like:

          simple.highstate:
            report_failures: True
            report_changes: True
            report_everything: False
            failure_function: pillar.items
            success_function: pillar.items
            report_format: html
            report_delivery: smtp
            smtp_success_subject: 'success minion {id} on host {host}'
            smtp_failure_subject: 'failure minion {id} on host {host}'
            smtp_server: smtp.example.com
            smtp_recipients: saltusers@example.com, devops@example.com
            smtp_sender: salt@example.com

       The  report_failures, report_changes, and report_everything flags provide filtering of the
       results. If you want an email to be  sent  every  time,  then  report_everything  is  your
       choice.  If  you  want  to  be  notified  only  when  changes  were  successfully made use
       report_changes. And report_failures will generate an email if there were failures.

       The  configuration  allows  you  to  run  a  salt  module  function  in  case  of  success
       (success_function) or failure (failure_function).

       Any  salt  function, including ones defined in the _module folder of your salt repo, could
       be used here and its output will be displayed under the ‘extra’ heading of the email.

       Supported values for report_format are html, json, and yaml. The latter two are  typically
       used  for  debugging  purposes,  but  could  be used for applying a template at some later
       stage.

       The values for report_delivery are smtp or file. In case of file delivery the  only  other
       applicable option is file_output.

       In  case  of smtp delivery, smtp_* options demonstrated by the example above could be used
       to customize the email.

        As you might have noticed, the success and failure subjects contain {id} and {host}
        values. Any other grain name could be used. As opposed to {{grains['id']}}, which is
        rendered by the master and contains the master's values at the time of pillar
        generation, these will contain the minion's values at the time of execution.
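As an illustration, the subject templates behave like ordinary Python format fields filled from the minion's grains at execution time (the grain values below are made up for the example):

```python
# hypothetical grain values for a minion
grains = {'id': 'web01', 'host': 'web01.example.com', 'os': 'Ubuntu'}

# any grain name may appear as a {field} in the template
subject = 'success minion {id} on host {host}'.format(**grains)
```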

       salt.returners.highstate_return.returner(ret)
              Check highstate return information and possibly fire off an email or save a file.

   salt.returners.hipchat_return
       Return salt data via hipchat.

       New in version 2015.5.0.

       The following fields can be set in the minion conf file:

          hipchat.room_id (required)
          hipchat.api_key (required)
          hipchat.api_version (required)
          hipchat.api_url (optional)
          hipchat.from_name (required)
          hipchat.color (optional)
          hipchat.notify (optional)
          hipchat.profile (optional)
          hipchat.url (optional)

       NOTE:
           When using Hipchat's API v2, api_key needs to be assigned to the room with the "Label"
           set to what would otherwise be set in the hipchat.from_name field. The v2 API
           disregards the from_name in the data sent for the room notification and uses the Label
           assigned through the Hipchat control panel.

        Alternative configuration values can be used by prefacing  the  configuration  options
        with 'alternative.'. Any values not found in the alternative configuration will be pulled
        from the default location:

          hipchat.room_id
          hipchat.api_key
          hipchat.api_version
          hipchat.api_url
          hipchat.from_name

       Hipchat settings may also be configured as:

          hipchat:
            room_id: RoomName
             api_url: https://hipchat.myteam.com
            api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
            api_version: v1
            from_name: user@email.com

          alternative.hipchat:
            room_id: RoomName
            api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
            api_version: v1
            from_name: user@email.com

          hipchat_profile:
            hipchat.api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
            hipchat.api_version: v1
            hipchat.from_name: user@email.com

          hipchat:
            profile: hipchat_profile
            room_id: RoomName

          alternative.hipchat:
            profile: hipchat_profile
            room_id: RoomName

          hipchat:
            room_id: RoomName
            api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
            api_version: v1
            api_url: api.hipchat.com
            from_name: user@email.com

        To use the HipChat returner, append '--return hipchat' to the salt command.

          salt '*' test.ping --return hipchat

        To use the alternative configuration, append '--return_config alternative' to the salt
        command.

       New in version 2015.5.0.

          salt '*' test.ping --return hipchat --return_config alternative

        To override individual configuration items, append --return_kwargs '{"key": "value"}' to
        the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return hipchat --return_kwargs '{"room_id": "another-room"}'

       salt.returners.hipchat_return.event_return(events)
              Return event data to hipchat

       salt.returners.hipchat_return.returner(ret)
               Send a hipchat message with the return data from a job

   salt.returners.influxdb_return
       Return data to an influxdb server.

       New in version 2015.8.0.

        To enable this returner the minion will need the python client for influxdb installed
        and the following values configured in the minion or master config; these are the
        defaults:

          influxdb.db: 'salt'
          influxdb.user: 'salt'
          influxdb.password: 'salt'
          influxdb.host: 'localhost'
          influxdb.port: 8086

        Alternative configuration values can be used by prefacing  the  configuration  options
        with 'alternative.'. Any values not found in the alternative configuration will be pulled
        from the default location:

          alternative.influxdb.db: 'salt'
          alternative.influxdb.user: 'salt'
          alternative.influxdb.password: 'salt'
          alternative.influxdb.host: 'localhost'
           alternative.influxdb.port: 8086

        To use the influxdb returner, append '--return influxdb' to the salt command.

          salt '*' test.ping --return influxdb

        To use the alternative configuration, append '--return_config alternative' to the salt
        command.

          salt '*' test.ping --return influxdb --return_config alternative

        To override individual configuration items, append --return_kwargs '{"key": "value"}' to
        the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return influxdb --return_kwargs '{"db": "another-salt"}'

       salt.returners.influxdb_return.get_fun(fun)
              Return a dict of the last function called for all minions

       salt.returners.influxdb_return.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.influxdb_return.get_jids()
              Return a list of all job ids

       salt.returners.influxdb_return.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.influxdb_return.get_minions()
              Return a list of minions

       salt.returners.influxdb_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

        salt.returners.influxdb_return.returner(ret)
               Return data to an influxdb data store

       salt.returners.influxdb_return.save_load(jid, load, minions=None)
              Save the load to the specified jid

   salt.returners.kafka_return
       Return data to a Kafka topic

       maintainer
              Christer Edwards (christer.edwards@gmail.com)

       maturity
              0.1

       depends
              kafka-python

       platform
              all

       To  enable  this  returner  install  kafka-python and enable the following settings in the
       minion config:

           returner.kafka.hostnames:
             - "server1"
             - "server2"
             - "server3"
           returner.kafka.topic: 'topic'

        To use the kafka returner, append '--return kafka' to the Salt command, e.g.:

           salt '*' test.ping --return kafka

       salt.returners.kafka_return.returner(ret)
              Return information to a Kafka server

   salt.returners.librato_return
       Salt returner to return highstate stats to Librato

       To enable this returner the minion will need the Librato client importable on  the  Python
       path and the following values configured in the minion or master config.

       The Librato python client can be found at: https://github.com/librato/python-librato

          librato.email: example@librato.com
          librato.api_token: abc12345def

        This returner supports multi-dimension metrics for Librato. To enable support for more
        metrics, the tags JSON object can be modified to include other tags.

        Adding EC2 tags example: if ec2_tags:region were desired within the tags for
        multi-dimension metrics, the tags could be modified to include the EC2 tags. Multiple
        dimensions are added simply by adding more tags to the submission.

          pillar_data = __salt__['pillar.raw']()
           q.add(metric.name, value, tags={'Name': ret['id'], 'Region': pillar_data['ec2_tags']['Region']})

       salt.returners.librato_return.returner(ret)
              Parse the return data and return metrics to Librato.

   salt.returners.local
        The local returner is used to test the returner interface; it simply prints the return
        data to the console to verify that it is being passed properly.

        To use the local returner, append '--return local' to the salt command. For example:

          salt '*' test.ping --return local

       salt.returners.local.event_return(event)
              Print event return data to the terminal to verify functionality

       salt.returners.local.returner(ret)
              Print the return data to the terminal to verify functionality

   salt.returners.local_cache
       Return data to local job cache

       salt.returners.local_cache.clean_old_jobs()
              Clean out the old jobs from the job cache

       salt.returners.local_cache.get_endtime(jid)
              Retrieve the stored endtime for a given job

              Returns False if no endtime is present

       salt.returners.local_cache.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.local_cache.get_jids()
              Return a dict mapping all job ids to job information

       salt.returners.local_cache.get_jids_filter(count, filter_find_job=True)
               Return a list of all job information filtered by the given criteria.

               Parameters

                      · count (int) – show no more than this many of the most recent jobs

                      · filter_find_job (bool) – filter out 'saltutil.find_job' jobs

       salt.returners.local_cache.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.local_cache.load_reg()
              Load the register from msgpack files

       salt.returners.local_cache.prep_jid(nocache=False, passed_jid=None, recurse_count=0)
              Return a job id and prepare the job id directory.

              This  is  the function responsible for making sure jids don’t collide (unless it is
              passed a jid).  So do what you have to do to make sure that stays the case
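For context, Salt jids are microsecond-resolution timestamps, which is why collisions are possible when two jobs start in the same microsecond; a rough sketch of how such an id is built:

```python
from datetime import datetime

def gen_jid():
    """Build a jid-style id: a 20-digit microsecond timestamp,
    e.g. 20180207120000123456. Two jobs starting in the same
    microsecond would collide, hence prep_jid's collision handling."""
    return '{0:%Y%m%d%H%M%S%f}'.format(datetime.utcnow())
```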

       salt.returners.local_cache.returner(load)
              Return data to the local job cache

       salt.returners.local_cache.save_load(jid, clear_load, minions=None, recurse_count=0)
              Save the load to the specified jid

              minions argument is to provide a pre-computed list of matched minions for the  job,
              for cases when this function can’t compute that list itself (such as for salt-ssh)

       salt.returners.local_cache.save_minions(jid, minions, syndic_id=None)
              Save/update the serialized list of minions for a given job

       salt.returners.local_cache.save_reg(data)
              Save the register to msgpack files

       salt.returners.local_cache.update_endtime(jid, time)
              Update (or store) the end time for a given job

              Endtime is stored as a plain text string

   salt.returners.mattermost_returner module
       Return salt data via mattermost

       New in version 2017.7.0.

       The following fields can be set in the minion conf file:

          mattermost.hook (required)
          mattermost.username (optional)
          mattermost.channel (optional)

        Alternative configuration values can be used by prefacing  the  configuration  options
        with 'alternative.'. Any values not found in the alternative configuration will be pulled
        from the default location:

          mattermost.channel
          mattermost.hook
          mattermost.username

       mattermost settings may also be configured as:

          mattermost:
            channel: RoomName
            hook: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
            username: user

        To use the mattermost returner, append '--return mattermost' to the salt command.

          salt '*' test.ping --return mattermost

        To override individual configuration items, append --return_kwargs '{"key": "value"}' to
        the salt command.

           salt '*' test.ping --return mattermost --return_kwargs '{"channel": "#random"}'

       salt.returners.mattermost_returner.event_return(events)
              Send the events to a mattermost room.

              Parameters
                     events – List of events

              Returns
                     Boolean if messages were sent successfully.

       salt.returners.mattermost_returner.post_message(channel, message, username, api_url, hook)
              Send a message to a mattermost room.

              Parameters

                     · channel – The room name.

                     · message – The message to send to the mattermost room.

                     · username – Specify who the message is from.

                     · hook – The mattermost hook, if not specified in the configuration.

              Returns
                     Boolean if message was sent successfully.

       salt.returners.mattermost_returner.returner(ret)
               Send a mattermost message with the data

   salt.returners.memcache_return
       Return data to a memcache server

        To enable this returner the minion will need the python client for memcache installed
        and the following values configured in the minion or master config; these are the
        defaults:

          memcache.host: 'localhost'
          memcache.port: '11211'

        Alternative configuration values can be used by prefacing  the  configuration  options
        with 'alternative.'. Any values not found in the alternative configuration will be pulled
        from the default location.

          alternative.memcache.host: 'localhost'
          alternative.memcache.port: '11211'

        The python2-memcache client uses 'localhost' and '11211' as its connection defaults.

        To use the memcache returner, append '--return memcache' to the salt command.

          salt '*' test.ping --return memcache

        To use the alternative configuration, append '--return_config alternative' to the salt
        command.

       New in version 2015.5.0.

          salt '*' test.ping --return memcache --return_config alternative

        To override individual configuration items, append --return_kwargs '{"key": "value"}' to
        the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return memcache --return_kwargs '{"host": "hostname.domain.com"}'

       salt.returners.memcache_return.get_fun(fun)
              Return a dict of the last function called for all minions

       salt.returners.memcache_return.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.memcache_return.get_jids()
              Return a list of all job ids

       salt.returners.memcache_return.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.memcache_return.get_minions()
              Return a list of minions

       salt.returners.memcache_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.memcache_return.returner(ret)
              Return data to a memcache data store

       salt.returners.memcache_return.save_load(jid, load, minions=None)
              Save the load to the specified jid

   salt.returners.mongo_future_return
       Return data to a mongodb server

       Required python modules: pymongo

       This  returner  will  send  data  from  the  minions to a MongoDB server. To configure the
       settings for your MongoDB server, add the following lines to the minion config files:

          mongo.db: <database name>
          mongo.host: <server ip address>
          mongo.user: <MongoDB username>
          mongo.password: <MongoDB user password>
          mongo.port: 27017

        You can also ask for index creation on the most commonly used fields, which should
        greatly improve performance. Indexes are not created by default.

          mongo.indexes: true

        Alternative configuration values can be used by prefacing  the  configuration  options
        with 'alternative.'. Any values not found in the alternative configuration will be pulled
        from the default location:

          alternative.mongo.db: <database name>
          alternative.mongo.host: <server ip address>
          alternative.mongo.user: <MongoDB username>
          alternative.mongo.password: <MongoDB user password>
          alternative.mongo.port: 27017

       This mongo returner is being developed to replace the  default  mongodb  returner  in  the
       future and should not be considered API stable yet.

        To use the mongo returner, append '--return mongo' to the salt command.

          salt '*' test.ping --return mongo

        To use the alternative configuration, append '--return_config alternative' to the salt
        command.

       New in version 2015.5.0.

          salt '*' test.ping --return mongo --return_config alternative

        To override individual configuration items, append --return_kwargs '{"key": "value"}' to
        the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return mongo --return_kwargs '{"db": "another-salt"}'

       salt.returners.mongo_future_return.event_return(events)
              Return events to Mongodb server

       salt.returners.mongo_future_return.get_fun(fun)
              Return the most recent jobs that have executed the named function

       salt.returners.mongo_future_return.get_jid(jid)
              Return the return information associated with a jid

       salt.returners.mongo_future_return.get_jids()
              Return a list of job ids

       salt.returners.mongo_future_return.get_load(jid)
              Return the load associated with a given job id

       salt.returners.mongo_future_return.get_minions()
              Return a list of minions

       salt.returners.mongo_future_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.mongo_future_return.returner(ret)
              Return data to a mongodb server

       salt.returners.mongo_future_return.save_load(jid, load, minions=None)
              Save the load for a given job id

   salt.returners.mongo_return
       Return data to a mongodb server

       Required python modules: pymongo

       This  returner  will  send  data  from  the  minions to a MongoDB server. To configure the
       settings for your MongoDB server, add the following lines to the minion config files.

          mongo.db: <database name>
          mongo.host: <server ip address>
          mongo.user: <MongoDB username>
          mongo.password: <MongoDB user password>
          mongo.port: 27017

        Alternative configuration values can be used by prefacing  the  configuration  options
        with 'alternative.'. Any values not found in the alternative configuration will be pulled
        from the default location.

          alternative.mongo.db: <database name>
          alternative.mongo.host: <server ip address>
          alternative.mongo.user: <MongoDB username>
          alternative.mongo.password: <MongoDB user password>
          alternative.mongo.port: 27017
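       The fallback behaviour described above can be sketched as a plain dictionary lookup.
       This is a minimal illustration, not Salt's actual implementation; the option names
       mirror the examples above:

```python
def resolve_option(opts, key, attr, ret_config=None, default=None):
    """Resolve '<key>.<attr>' from minion options, honoring an optional
    return_config prefix (e.g. 'alternative') with fallback to the
    default location."""
    if ret_config:
        value = opts.get("{0}.{1}.{2}".format(ret_config, key, attr))
        if value is not None:
            return value
    return opts.get("{0}.{1}".format(key, attr), default)

# Options as they might appear in a minion config (illustrative values):
opts = {
    "mongo.db": "salt",
    "mongo.host": "127.0.0.1",
    "alternative.mongo.db": "another-salt",
}
```

       With --return_config alternative, mongo.db resolves to 'another-salt' while mongo.host,
       absent from the alternative configuration, falls back to the default location.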

       To use the mongo returner, append '--return mongo_return' to the salt command.

          salt '*' test.ping --return mongo_return

       To use the alternative configuration, append '--return_config alternative' to the  salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return mongo_return --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}' to
       the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return mongo_return --return_kwargs '{"db": "another-salt"}'

       salt.returners.mongo_return.get_fun(fun)
              Return the most recent jobs that have executed the named function

       salt.returners.mongo_return.get_jid(jid)
              Return the return information associated with a jid

       salt.returners.mongo_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.mongo_return.returner(ret)
              Return data to a mongodb server

   salt.returners.multi_returner
       Read/Write multiple returners

       salt.returners.multi_returner.clean_old_jobs()
              Clean out the old jobs from all returners (if you have it)

       salt.returners.multi_returner.get_jid(jid)
              Merge the return data from all returners

       salt.returners.multi_returner.get_jids()
              Return all job data from all returners

       salt.returners.multi_returner.get_load(jid)
              Merge the load data from all returners

       salt.returners.multi_returner.prep_jid(nocache=False, passed_jid=None)
               Call prep_jid on all returners configured in multi_returner

               TODO: finish this. What to do when you get different jids from two returners?
               Since our jids are time based they are not unique, so we have to make sure that
               no one else got the jid, and if they did, spin to get a new one. That means
               "locking" the jid across two returners is non-trivial.

       salt.returners.multi_returner.returner(load)
              Write return to all returners in multi_returner

       salt.returners.multi_returner.save_load(jid, clear_load, minions=None)
              Write load to all returners in multi_returner
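       The read/write behaviour above amounts to fanning one call out over a list of returner
       modules and merging the results on read. A minimal sketch (the callables below stand in
       for real returner modules; this is not Salt's implementation):

```python
def fan_out(returners, ret):
    """Hand the same return dict to every configured returner."""
    for send in returners:
        send(ret)

def merge_jid(results):
    """Merge per-returner get_jid() results; later returners win on
    conflicting minion ids."""
    merged = {}
    for data in results:
        merged.update(data)
    return merged

# Two in-memory "returners" standing in for real returner modules:
store_a, store_b = [], []
fan_out([store_a.append, store_b.append], {"jid": "20160301123015123456"})
```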

   salt.returners.mysql
       Return data to a mysql server

       maintainer
              Dave Boucha <dave@saltstack.com>, Seth House <shouse@saltstack.com>

       maturity
              mature

       depends
              python-mysqldb

       platform
              all

       To  enable  this  returner, the minion will need the python client for mysql installed and
       the following values configured in the minion or master config. These are the defaults:

          mysql.host: 'salt'
          mysql.user: 'salt'
          mysql.pass: 'salt'
          mysql.db: 'salt'
          mysql.port: 3306

       SSL is optional. The defaults are set to None. If you do  not  want  to  use  SSL,  either
       exclude these options or set them to None.

          mysql.ssl_ca: None
          mysql.ssl_cert: None
          mysql.ssl_key: None

       Alternative  configuration  values  can  be  used  by  prefacing  the  configuration  with
       alternative.. Any values not found in the alternative configuration will  be  pulled  from
       the  default  location.  As stated above, SSL configuration is optional. The following ssl
       options are simply for illustration purposes:

          alternative.mysql.host: 'salt'
          alternative.mysql.user: 'salt'
          alternative.mysql.pass: 'salt'
          alternative.mysql.db: 'salt'
          alternative.mysql.port: 3306
          alternative.mysql.ssl_ca: '/etc/pki/mysql/certs/localhost.pem'
          alternative.mysql.ssl_cert: '/etc/pki/mysql/certs/localhost.crt'
          alternative.mysql.ssl_key: '/etc/pki/mysql/certs/localhost.key'

       Should you wish the returner data to be cleaned out every so often, set keep_jobs  to  the
       number  of  hours for the jobs to live in the tables.  Setting it to 0 or leaving it unset
       will cause the data to stay in the tables.

       Should you  wish  to  archive  jobs  in  a  different  table  for  later  processing,  set
       archive_jobs to True.  Salt will create 3 archive tables

       · jids_archive

       · salt_returns_archive

       · salt_events_archive

       and  move the contents of jids, salt_returns, and salt_events that are more than keep_jobs
       hours old to these tables.
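       The keep_jobs cut-off is plain timestamp arithmetic against a row's alter_time column.
       A sketch of the age test (illustrative only; the real returner expresses this in SQL):

```python
from datetime import datetime, timedelta

def expired(alter_time, keep_jobs, now=None):
    """True when a row is older than keep_jobs hours, i.e. due for
    deletion (or for a move into the *_archive tables when
    archive_jobs is True).  keep_jobs of 0/unset keeps rows forever."""
    if not keep_jobs:
        return False
    now = now or datetime.utcnow()
    return alter_time < now - timedelta(hours=keep_jobs)
```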

       Use the following mysql database schema:

          CREATE DATABASE  `salt`
            DEFAULT CHARACTER SET utf8
            DEFAULT COLLATE utf8_general_ci;

          USE `salt`;

          --
          -- Table structure for table `jids`
          --

          DROP TABLE IF EXISTS `jids`;
          CREATE TABLE `jids` (
            `jid` varchar(255) NOT NULL,
            `load` mediumtext NOT NULL,
            UNIQUE KEY `jid` (`jid`)
          ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
          CREATE INDEX jid ON jids(jid) USING BTREE;

          --
          -- Table structure for table `salt_returns`
          --

          DROP TABLE IF EXISTS `salt_returns`;
          CREATE TABLE `salt_returns` (
            `fun` varchar(50) NOT NULL,
            `jid` varchar(255) NOT NULL,
            `return` mediumtext NOT NULL,
            `id` varchar(255) NOT NULL,
            `success` varchar(10) NOT NULL,
            `full_ret` mediumtext NOT NULL,
            `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            KEY `id` (`id`),
            KEY `jid` (`jid`),
            KEY `fun` (`fun`)
          ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

          --
          -- Table structure for table `salt_events`
          --

          DROP TABLE IF EXISTS `salt_events`;
          CREATE TABLE `salt_events` (
          `id` BIGINT NOT NULL AUTO_INCREMENT,
          `tag` varchar(255) NOT NULL,
          `data` mediumtext NOT NULL,
          `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
          `master_id` varchar(255) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `tag` (`tag`)
          ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

       Required python modules: MySQLdb

       To use the mysql returner, append '--return mysql' to the salt command.

          salt '*' test.ping --return mysql

       To use the alternative configuration, append '--return_config alternative' to the  salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return mysql --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}' to
       the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return mysql --return_kwargs '{"db": "another-salt"}'

       salt.returners.mysql.clean_old_jobs()
               Called in the master's event loop every loop_interval.  Archives and/or deletes
               the events and job details from the database.

       salt.returners.mysql.event_return(events)
              Return event to mysql server

              Requires that configuration be enabled via ‘event_return’ option in master config.

       salt.returners.mysql.get_fun(fun)
              Return a dict of the last function called for all minions

       salt.returners.mysql.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.mysql.get_jids()
              Return a list of all job ids

       salt.returners.mysql.get_jids_filter(count, filter_find_job=True)
               Return a list of all job ids.

               count (int): show not more than the count of most recent jobs
               filter_find_job (bool): filter out 'saltutil.find_job' jobs

       salt.returners.mysql.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.mysql.get_minions()
              Return a list of minions

       salt.returners.mysql.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.mysql.returner(ret)
              Return data to a mysql server

       salt.returners.mysql.save_load(jid, load, minions=None)
              Save the load to the specified jid id

   salt.returners.nagios_return
       Return salt data to Nagios

       The following fields can be set in the minion conf file:

          nagios.url (required)
          nagios.token (required)
          nagios.service (optional)
          nagios.check_type (optional)

       Alternative configuration values can be used by prefacing the  configuration  with  alternative..
       Any values not found in the alternative configuration will be pulled from the default location:

          nagios.url
          nagios.token
          nagios.service

       Nagios settings may also be configured as:

            nagios:
                url: http://localhost/nrdp
                token: r4nd0mt0k3n
                service: service-check

            alternative.nagios:
                url: http://localhost/nrdp
                token: r4nd0mt0k3n
                service: another-service-check

       To use the Nagios returner, append '--return nagios' to the salt command.

          salt '*' test.ping --return nagios

       To use the alternative configuration, append '--return_config alternative' to the  salt
       command.

          salt '*' test.ping --return nagios --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}' to
       the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return nagios --return_kwargs '{"service": "service-name"}'

       salt.returners.nagios_return.returner(ret)
              Send a message to Nagios with the data
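       Conceptually the returner maps a job return onto a Nagios check result. The sketch below
       is purely illustrative; the function and field names are assumptions, not the returner's
       actual code:

```python
def build_checkresult(ret, service):
    """Shape a Salt job return into a Nagios-style check result.
    Field names here are hypothetical, chosen for illustration."""
    state = 0 if ret.get("return") else 2   # 0 = OK, 2 = CRITICAL
    return {
        "hostname": ret.get("id"),
        "servicename": service,
        "state": state,
        "output": str(ret.get("return")),
    }
```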

   salt.returners.odbc
       Return data to an ODBC compliant server.  This driver was  developed  with  Microsoft  SQL
       Server  in  mind,  but  theoretically  could  be used to return data to any compliant ODBC
       database as long as there is a working ODBC driver for it on your minion platform.

       maintainer
              C. R. Oldham (cr@saltstack.com)

       maturity
              New

       depends
              unixodbc, pyodbc, freetds (for SQL Server)

       platform
              all

       To enable this returner, the minion will need:

       On Linux:
          unixodbc (http://www.unixodbc.org) pyodbc (pip install pyodbc) The FreeTDS ODBC  driver
          for SQL Server (http://www.freetds.org) or another compatible ODBC driver

       On Windows:
          TBD

       unixODBC and FreeTDS need to be configured via /etc/odbcinst.ini and /etc/odbc.ini.

       /etc/odbcinst.ini:

          [TDS]
          Description=TDS
          Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so

       (Note  the above Driver line needs to point to the location of the FreeTDS shared library.
       This example is for Ubuntu 14.04.)

       /etc/odbc.ini:

          [TS]
          Description = "Salt Returner"
          Driver=TDS
          Server = <your server ip or fqdn>
          Port = 1433
          Database = salt
          Trace = No

       Also you need the following values configured in the minion or master  config.   Configure
       as you see fit:

          returner.odbc.dsn: 'TS'
          returner.odbc.user: 'salt'
          returner.odbc.passwd: 'salt'

       Alternative  configuration  values can be used by prefacing the configuration.  Any values
       not found in the alternative configuration will be pulled from the default location:

          alternative.returner.odbc.dsn: 'TS'
          alternative.returner.odbc.user: 'salt'
          alternative.returner.odbc.passwd: 'salt'

       Running the following commands against Microsoft SQL Server in the desired database  as
       the  appropriate  user  should  create  the  database tables correctly.  Replace with
       equivalent SQL for other ODBC-compliant servers.

            --
            -- Table structure for table 'jids'
            --

            if OBJECT_ID('dbo.jids', 'U') is not null
                DROP TABLE dbo.jids

            CREATE TABLE dbo.jids (
               jid   varchar(255) PRIMARY KEY,
               load  varchar(MAX) NOT NULL
             );

            --
            -- Table structure for table 'salt_returns'
            --
            IF OBJECT_ID('dbo.salt_returns', 'U') IS NOT NULL
                DROP TABLE dbo.salt_returns;

            CREATE TABLE dbo.salt_returns (
               added     datetime not null default (getdate()),
               fun       varchar(100) NOT NULL,
               jid       varchar(255) NOT NULL,
               retval    varchar(MAX) NOT NULL,
               id        varchar(255) NOT NULL,
               success   bit default(0) NOT NULL,
               full_ret  varchar(MAX)
             );

            CREATE INDEX salt_returns_added on dbo.salt_returns(added);
            CREATE INDEX salt_returns_id on dbo.salt_returns(id);
            CREATE INDEX salt_returns_jid on dbo.salt_returns(jid);
            CREATE INDEX salt_returns_fun on dbo.salt_returns(fun);

       To use this returner, append '--return odbc' to the salt command.

          salt '*' status.diskusage --return odbc

       To use the alternative configuration, append '--return_config alternative' to the  salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return odbc --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}' to
       the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return odbc --return_kwargs '{"dsn": "dsn-name"}'
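       A DSN-based ODBC connection uses a connection string built from the three options shown
       above. A sketch of assembling it (illustrative; roughly what would be handed to
       pyodbc.connect()):

```python
def odbc_connect_string(dsn, user, passwd):
    """Assemble a DSN-based ODBC connection string from the
    returner.odbc.* options."""
    return "DSN={0};UID={1};PWD={2}".format(dsn, user, passwd)
```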

       salt.returners.odbc.get_fun(fun)
              Return a dict of the last function called for all minions

       salt.returners.odbc.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.odbc.get_jids()
              Return a list of all job ids

       salt.returners.odbc.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.odbc.get_minions()
              Return a list of minions

       salt.returners.odbc.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.odbc.returner(ret)
              Return data to an odbc server

       salt.returners.odbc.save_load(jid, load, minions=None)
              Save the load to the specified jid id

   salt.returners.pgjsonb
       Return data to a PostgreSQL server with json data stored in Pg’s jsonb data type

       maintainer
              Dave  Boucha  <dave@saltstack.com>, Seth House <shouse@saltstack.com>, C. R. Oldham
              <cr@saltstack.com>

       maturity
              Stable

       depends
              python-psycopg2

       platform
              all

       NOTE:
          There are three PostgreSQL returners.  Any can function as an  external  master  job
          cache, but each has different features.  SaltStack recommends returners.pgjsonb if you
          are working with a version of PostgreSQL that has the appropriate  native  binary  JSON
          types.   Otherwise, review returners.postgres and returners.postgres_local_cache to see
          which module best suits your particular needs.

       To enable this returner, the minion will need the python client for  PostgreSQL  installed
       and  the  following  values  configured  in  the  minion  or  master config. These are the
       defaults:

          returner.pgjsonb.host: 'salt'
          returner.pgjsonb.user: 'salt'
          returner.pgjsonb.pass: 'salt'
          returner.pgjsonb.db: 'salt'
          returner.pgjsonb.port: 5432

       SSL is optional. The defaults are set to None. If you do  not  want  to  use  SSL,  either
       exclude these options or set them to None.

          returner.pgjsonb.sslmode: None
          returner.pgjsonb.sslcert: None
          returner.pgjsonb.sslkey: None
          returner.pgjsonb.sslrootcert: None
          returner.pgjsonb.sslcrl: None

       New in version 2017.5.0.

       Alternative  configuration  values  can  be  used  by  prefacing  the  configuration  with
       alternative.. Any values not found in the alternative configuration will  be  pulled  from
       the  default  location.  As stated above, SSL configuration is optional. The following ssl
       options are simply for illustration purposes:

          alternative.pgjsonb.host: 'salt'
          alternative.pgjsonb.user: 'salt'
          alternative.pgjsonb.pass: 'salt'
          alternative.pgjsonb.db: 'salt'
          alternative.pgjsonb.port: 5432
          alternative.pgjsonb.sslmode: 'require'
          alternative.pgjsonb.sslcert: '/etc/pki/postgresql/certs/localhost.crt'
          alternative.pgjsonb.sslkey: '/etc/pki/postgresql/certs/localhost.key'
          alternative.pgjsonb.sslrootcert: '/etc/pki/postgresql/certs/localhost.pem'

       Use the following Pg database schema:

          CREATE DATABASE  salt
            WITH ENCODING 'utf-8';

          --
          -- Table structure for table `jids`
          --
          DROP TABLE IF EXISTS jids;
          CREATE TABLE jids (
             jid varchar(255) NOT NULL primary key,
             load jsonb NOT NULL
          );
          CREATE INDEX idx_jids_jsonb on jids
                 USING gin (load)
                 WITH (fastupdate=on);

          --
          -- Table structure for table `salt_returns`
          --

          DROP TABLE IF EXISTS salt_returns;
          CREATE TABLE salt_returns (
            fun varchar(50) NOT NULL,
            jid varchar(255) NOT NULL,
            return jsonb NOT NULL,
            id varchar(255) NOT NULL,
            success varchar(10) NOT NULL,
            full_ret jsonb NOT NULL,
            alter_time TIMESTAMP WITH TIME ZONE DEFAULT NOW());

          CREATE INDEX idx_salt_returns_id ON salt_returns (id);
          CREATE INDEX idx_salt_returns_jid ON salt_returns (jid);
          CREATE INDEX idx_salt_returns_fun ON salt_returns (fun);
          CREATE INDEX idx_salt_returns_return ON salt_returns
              USING gin (return) with (fastupdate=on);
          CREATE INDEX idx_salt_returns_full_ret ON salt_returns
              USING gin (full_ret) with (fastupdate=on);

          --
          -- Table structure for table `salt_events`
          --

          DROP TABLE IF EXISTS salt_events;
          DROP SEQUENCE IF EXISTS seq_salt_events_id;
          CREATE SEQUENCE seq_salt_events_id;
          CREATE TABLE salt_events (
              id BIGINT NOT NULL UNIQUE DEFAULT nextval('seq_salt_events_id'),
              tag varchar(255) NOT NULL,
              data jsonb NOT NULL,
              alter_time TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
              master_id varchar(255) NOT NULL);

          CREATE INDEX idx_salt_events_tag on
              salt_events (tag);
          CREATE INDEX idx_salt_events_data ON salt_events
              USING gin (data) with (fastupdate=on);

       Required python modules: Psycopg2

       To use this returner, append '--return pgjsonb' to the salt command.

          salt '*' test.ping --return pgjsonb

       To use the alternative configuration, append '--return_config alternative' to the  salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return pgjsonb --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}' to
       the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return pgjsonb --return_kwargs '{"db": "another-salt"}'
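       Because the return and full_ret columns are jsonb, the returner must serialize Python
       structures to JSON text before the INSERT; PostgreSQL casts the text to jsonb. A sketch
       of preparing the row values (illustrative, not the returner's exact SQL parameters):

```python
import json

def to_row(ret):
    """Prepare values for an INSERT into salt_returns.  The jsonb
    columns (return, full_ret) receive JSON text; column order matches
    the schema above."""
    return (
        ret["fun"],
        ret["jid"],
        json.dumps(ret["return"]),
        ret["id"],
        str(ret.get("success", False)),
        json.dumps(ret),
    )
```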

       salt.returners.pgjsonb.event_return(events)
              Return event to Pg server

              Requires that configuration be enabled via ‘event_return’ option in master config.

       salt.returners.pgjsonb.get_fun(fun)
              Return a dict of the last function called for all minions

       salt.returners.pgjsonb.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.pgjsonb.get_jids()
              Return a list of all job ids

       salt.returners.pgjsonb.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.pgjsonb.get_minions()
              Return a list of minions

       salt.returners.pgjsonb.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.pgjsonb.returner(ret)
              Return data to a Pg server

       salt.returners.pgjsonb.save_load(jid, load, minions=None)
              Save the load to the specified jid id

   salt.returners.postgres
       Return data to a postgresql server

       NOTE:
          There are three PostgreSQL returners.  Any can function as an  external  master  job
          cache, but each has different features.  SaltStack recommends returners.pgjsonb if you
          are working with a version of PostgreSQL that has the appropriate  native  binary  JSON
          types.   Otherwise, review returners.postgres and returners.postgres_local_cache to see
          which module best suits your particular needs.

       maintainer
              None

       maturity
              New

       depends
              psycopg2

       platform
              all

       To enable this returner the minion will need the  psycopg2  installed  and  the  following
       values configured in the minion or master config:

          returner.postgres.host: 'salt'
          returner.postgres.user: 'salt'
          returner.postgres.passwd: 'salt'
          returner.postgres.db: 'salt'
          returner.postgres.port: 5432

       Alternative  configuration  values can be used by prefacing the configuration.  Any values
       not found in the alternative configuration will be pulled from the default location:

          alternative.returner.postgres.host: 'salt'
          alternative.returner.postgres.user: 'salt'
          alternative.returner.postgres.passwd: 'salt'
          alternative.returner.postgres.db: 'salt'
          alternative.returner.postgres.port: 5432

       Running the following commands as the postgres user should create the database correctly:

          psql << EOF
          CREATE ROLE salt WITH PASSWORD 'salt';
          CREATE DATABASE salt WITH OWNER salt;
          EOF

          psql -h localhost -U salt << EOF
          --
          -- Table structure for table 'jids'
          --

          DROP TABLE IF EXISTS jids;
          CREATE TABLE jids (
            jid   varchar(20) PRIMARY KEY,
            load  text NOT NULL
          );

          --
          -- Table structure for table 'salt_returns'
          --

          DROP TABLE IF EXISTS salt_returns;
          CREATE TABLE salt_returns (
            fun       varchar(50) NOT NULL,
            jid       varchar(255) NOT NULL,
            return    text NOT NULL,
            full_ret  text,
            id        varchar(255) NOT NULL,
            success   varchar(10) NOT NULL,
            alter_time   TIMESTAMP WITH TIME ZONE DEFAULT now()
          );

          CREATE INDEX idx_salt_returns_id ON salt_returns (id);
          CREATE INDEX idx_salt_returns_jid ON salt_returns (jid);
          CREATE INDEX idx_salt_returns_fun ON salt_returns (fun);
          CREATE INDEX idx_salt_returns_updated ON salt_returns (alter_time);

          --
          -- Table structure for table `salt_events`
          --

          DROP TABLE IF EXISTS salt_events;
          DROP SEQUENCE IF EXISTS seq_salt_events_id;
          CREATE SEQUENCE seq_salt_events_id;
          CREATE TABLE salt_events (
              id BIGINT NOT NULL UNIQUE DEFAULT nextval('seq_salt_events_id'),
              tag varchar(255) NOT NULL,
              data text NOT NULL,
              alter_time TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
              master_id varchar(255) NOT NULL
          );

          CREATE INDEX idx_salt_events_tag on salt_events (tag);

          EOF

       Required python modules: psycopg2

       To use the postgres returner, append '--return postgres' to the salt command.

          salt '*' test.ping --return postgres

       To use the alternative configuration, append '--return_config alternative' to the  salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return postgres --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}' to
       the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return postgres --return_kwargs '{"db": "another-salt"}'

       salt.returners.postgres.event_return(events)
              Return event to Pg server

              Requires that configuration be enabled via ‘event_return’ option in master config.

       salt.returners.postgres.get_fun(fun)
              Return a dict of the last function called for all minions

       salt.returners.postgres.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.postgres.get_jids()
              Return a list of all job ids

       salt.returners.postgres.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.postgres.get_minions()
              Return a list of minions

       salt.returners.postgres.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.postgres.returner(ret)
              Return data to a postgres server

       salt.returners.postgres.save_load(jid, load, minions=None)
              Save the load to the specified jid id

   salt.returners.postgres_local_cache
       Use a postgresql server for the master job cache. This helps the job cache  to  cope  with
       scale.

       NOTE:
          There are three PostgreSQL returners.  Any can function as an  external  master  job
          cache, but each has different features.  SaltStack recommends returners.pgjsonb if you
          are  working  with  a version of PostgreSQL that has the appropriate native binary JSON
          types.  Otherwise, review returners.postgres and returners.postgres_local_cache to  see
          which module best suits your particular needs.

       maintainer
              gjredelinghuys@gmail.com

       maturity
              Stable

       depends
              psycopg2

       platform
              all

       To  enable  this  returner  the  minion will need the psycopg2 installed and the following
       values configured in the master config:

          master_job_cache: postgres_local_cache
          master_job_cache.postgres.host: 'salt'
          master_job_cache.postgres.user: 'salt'
          master_job_cache.postgres.passwd: 'salt'
          master_job_cache.postgres.db: 'salt'
          master_job_cache.postgres.port: 5432

       Running the following command as the postgres user should create the database correctly:

          psql << EOF
          CREATE ROLE salt WITH PASSWORD 'salt';
          CREATE DATABASE salt WITH OWNER salt;
          EOF

       In case the postgres database is on a remote host, you'll also need this command:

          ALTER ROLE salt WITH LOGIN;

       and then:

          psql -h localhost -U salt << EOF
          --
          -- Table structure for table 'jids'
          --

          DROP TABLE IF EXISTS jids;
          CREATE TABLE jids (
            jid   varchar(20) PRIMARY KEY,
            started TIMESTAMP WITH TIME ZONE DEFAULT now(),
            tgt_type text NOT NULL,
            cmd text NOT NULL,
            tgt text NOT NULL,
            kwargs text NOT NULL,
            ret text NOT NULL,
            username text NOT NULL,
            arg text NOT NULL,
            fun text NOT NULL
          );

          --
          -- Table structure for table 'salt_returns'
          --
          -- note that 'success' must not have NOT NULL constraint, since
          -- some functions don't provide it.

          DROP TABLE IF EXISTS salt_returns;
          CREATE TABLE salt_returns (
            added     TIMESTAMP WITH TIME ZONE DEFAULT now(),
            fun       text NOT NULL,
            jid       varchar(20) NOT NULL,
            return    text NOT NULL,
            id        text NOT NULL,
            success   boolean
          );
          CREATE INDEX ON salt_returns (added);
          CREATE INDEX ON salt_returns (id);
          CREATE INDEX ON salt_returns (jid);
          CREATE INDEX ON salt_returns (fun);

          DROP TABLE IF EXISTS salt_events;
          CREATE TABLE salt_events (
            id SERIAL,
            tag text NOT NULL,
            data text NOT NULL,
            alter_time TIMESTAMP WITH TIME ZONE DEFAULT now(),
            master_id text NOT NULL
          );
          CREATE INDEX ON salt_events (tag);
          CREATE INDEX ON salt_events (data);
          CREATE INDEX ON salt_events (id);
          CREATE INDEX ON salt_events (master_id);
          EOF

       Required python modules: psycopg2

       salt.returners.postgres_local_cache.clean_old_jobs()
              Clean out the old jobs from the job cache

       salt.returners.postgres_local_cache.event_return(events)
              Return event to a postgres server

               Requires that configuration be enabled via 'event_return' option in master config.

       salt.returners.postgres_local_cache.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.postgres_local_cache.get_jids()
               Return a list of all job ids.  For the master job cache this also formats the
               output and returns a string.

       salt.returners.postgres_local_cache.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.postgres_local_cache.prep_jid(nocache=False, passed_jid=None)
               Return a job id and prepare the job id directory.  This is the function
               responsible for making sure jids don't collide (unless it is passed a jid), so
               do what you have to do to make sure that stays the case.
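       The collision concern follows from the jid format: jids are derived from the current
       timestamp (the 20-digit jids seen throughout this manual, matching the varchar(20)
       column above), so two jobs started in the same microsecond can clash. A sketch of the
       time-based format (illustrative; not Salt's exact helper):

```python
from datetime import datetime

def gen_jid(now=None):
    """Build a 20-digit, time-based job id (year through microsecond).
    Two calls in the same microsecond yield the same jid, hence the
    collision handling in prep_jid."""
    now = now or datetime.utcnow()
    return now.strftime("%Y%m%d%H%M%S%f")
```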

       salt.returners.postgres_local_cache.returner(load)
              Return data to a postgres server

       salt.returners.postgres_local_cache.save_load(jid, clear_load, minions=None)
              Save the load to the specified jid id

   salt.returners.pushover_returner
       Return salt data via pushover (http://www.pushover.net)

       New in version 2016.3.0.

       The following fields can be set in the minion conf file:

          pushover.user (required)
          pushover.token (required)
          pushover.title (optional)
          pushover.device (optional)
          pushover.priority (optional)
          pushover.expire (optional)
          pushover.retry (optional)
          pushover.profile (optional)

       Alternative configuration values can be used by prefixing the configuration keys with
       an alternative name. Any values not found in the alternative configuration will be
       pulled from the default location:

          alternative.pushover.user
          alternative.pushover.token
          alternative.pushover.title
          alternative.pushover.device
          alternative.pushover.priority
          alternative.pushover.expire
          alternative.pushover.retry

       PushOver settings may also be configured as:

            pushover:
                user: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                title: Salt Returner
                device: phone
                priority: -1
                expire: 3600
                retry: 5

            alternative.pushover:
                user: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                title: Salt Returner
                device: phone
                priority: 1
                expire: 4800
                retry: 2

            pushover_profile:
                pushover.token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

            pushover:
                user: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                profile: pushover_profile

            alternative.pushover:
                user: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                profile: pushover_profile

       To use the PushOver returner, append '--return pushover' to the salt command. ex:

          salt '*' test.ping --return pushover

       To use the alternative configuration, append '--return_config alternative' to the salt
       command. ex:

          salt '*' test.ping --return pushover --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}'
       to the salt command.

          salt '*' test.ping --return pushover --return_kwargs '{"title": "Salt is awesome!"}'

       salt.returners.pushover_returner.returner(ret)
              Send a PushOver message with the data

   salt.returners.rawfile_json
       Take data from salt and “return” it into a raw file containing the json, with one line per
       event.

       Add the following to the minion or master configuration file.

          rawfile_json.filename: <path_to_output_file>

       Default is /var/log/salt/events.

       Common use is to log all events on the master. This can generate a lot of  noise,  so  you
       may  wish  to  configure  batch  processing and/or configure the event_return_whitelist or
       event_return_blacklist to restrict the events that are written.
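
       The output format is one JSON document per line ("JSON Lines"). As a rough sketch with
       made-up event data, this is how such a file is written and read back:

```python
import json

# Illustrative events only; real Salt events carry a 'tag' and 'data'
# payload, but the exact fields vary by event type.
events = [
    {'tag': 'salt/job/20180207/new', 'data': {'fun': 'test.ping'}},
    {'tag': 'salt/job/20180207/ret/minion1', 'data': {'return': True}},
]

# Write: one JSON object per event, one event per line.
raw = '\n'.join(json.dumps(event) for event in events) + '\n'

# Read back: parse each non-empty line independently.
parsed = [json.loads(line) for line in raw.splitlines() if line]
```

       Because each line is independent, the file can be tailed or processed incrementally
       even while events are still being appended.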

       salt.returners.rawfile_json.event_return(events)
              Write event data (return data and non-return data) to file on the master.

       salt.returners.rawfile_json.returner(ret)
              Write the return data to a file on the minion.

   salt.returners.redis_return
       Return data to a redis server

       To enable this returner the minion will need the python client for redis installed and
       the following values configured in the minion or master config; these are the defaults:

          redis.db: '0'
          redis.host: 'salt'
          redis.port: 6379

       New in version 2018.3.1: Alternatively a UNIX socket can be specified by unix_socket_path:

          redis.db: '0'
          redis.unix_socket_path: /var/run/redis/redis.sock

       Cluster Mode Example:

          redis.db: '0'
          redis.cluster_mode: true
          redis.cluster.skip_full_coverage_check: true
          redis.cluster.startup_nodes:
            - host: redis-member-1
              port: 6379
            - host: redis-member-2
              port: 6379

       Alternative configuration values can be used by prefixing the configuration keys with
       an alternative name. Any values not found in the alternative configuration will be
       pulled from the default location:

          alternative.redis.db: '0'
          alternative.redis.host: 'salt'
          alternative.redis.port: 6379

       To use the redis returner, append '--return redis' to the salt command.

          salt '*' test.ping --return redis

       To use the alternative configuration, append '--return_config alternative' to the salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return redis --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}'
       to the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return redis --return_kwargs '{"db": "another-salt"}'

       Redis Cluster Mode Options:

       cluster_mode: False
              Whether cluster_mode is enabled or not

       cluster.startup_nodes:
              A list of host, port dictionaries pointing to cluster members. At least one is
              required, but multiple nodes are better.

                  cache.redis.cluster.startup_nodes:
                   - host: redis-member-1
                     port: 6379
                   - host: redis-member-2
                     port: 6379

       cluster.skip_full_coverage_check: False
              Some cluster providers restrict certain redis commands such as CONFIG for
              enhanced security.  Set this option to true to skip checks that require advanced
              privileges.

              NOTE:
                 Most cloud hosted redis clusters will require this to be set to True

       salt.returners.redis_return.clean_old_jobs()
              Clean out minions’ return data for old jobs.

              Normally each ‘ret:<jid>’ hash is saved with a TTL and is eventually expired by
              redis. However, if a minion returns very late, the corresponding hash’s TTL is
              refreshed to a timestamp too far in the future, so those entries are cleaned up
              manually here.

       salt.returners.redis_return.get_fun(fun)
              Return a dict of the last function called for all minions

       salt.returners.redis_return.get_jid(jid)
              Return the information returned when the specified job id was executed

       salt.returners.redis_return.get_jids()
              Return a dict mapping all job ids to job information

       salt.returners.redis_return.get_load(jid)
              Return the load data that marks a specified jid

       salt.returners.redis_return.get_minions()
              Return a list of minions

       salt.returners.redis_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.redis_return.returner(ret)
              Return data to a redis data store

       salt.returners.redis_return.save_load(jid, load, minions=None)
              Save the load to the specified jid

   salt.returners.sentry_return
       Salt returner that reports execution results back to sentry. The returner will inspect the
       payload to identify errors and flag them as such.

       Pillar needs something like:

          raven:
            servers:
              - http://192.168.1.1
              - https://sentry.example.com
            public_key: deadbeefdeadbeefdeadbeefdeadbeef
            secret_key: beefdeadbeefdeadbeefdeadbeefdead
            project: 1
            tags:
              - os
              - master
              - saltversion
              - cpuarch

       or using a dsn:

          raven:
            dsn: https://aaaa:bbbb@app.getsentry.com/12345
            tags:
              - os
              - master
              - saltversion
              - cpuarch

       https://pypi.python.org/pypi/raven must be installed.

       The pillar can be hidden on sentry return by setting hide_pillar: true.

       The tags list (optional) specifies grains items that will be used as Sentry tags,
       allowing tagging of events in the Sentry UI.

       To report only errors to sentry, set report_errors_only: true.

       salt.returners.sentry_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.sentry_return.returner(ret)
              Log outcome to Sentry. The returner tries to identify errors and report them as
              such. All other messages will be reported at info level. Failed states will be
              appended as a separate list for convenience.

   salt.returners.slack_returner
       Return salt data via slack

       New in version 2015.5.0.

       The following fields can be set in the minion conf file:

          slack.channel (required)
          slack.api_key (required)
          slack.username (required)
          slack.as_user (required to see the profile picture of your bot)
          slack.profile (optional)
           slack.changes (optional, only show changes and failed states)
           slack.yaml_format (optional, format the json in yaml format)

       Alternative configuration values can be used by prefixing the configuration keys with
       an alternative name. Any values not found in the alternative configuration will be
       pulled from the default location:

          slack.channel
          slack.api_key
          slack.username
          slack.as_user

       Slack settings may also be configured as:

          slack:
              channel: RoomName
              api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
              username: user
              as_user: true

          alternative.slack:
              room_id: RoomName
              api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
              from_name: user@email.com

          slack_profile:
              slack.api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
              slack.from_name: user@email.com

          slack:
              profile: slack_profile
              channel: RoomName

          alternative.slack:
              profile: slack_profile
              channel: RoomName

       To use the Slack returner, append '--return slack' to the salt command.

          salt '*' test.ping --return slack

       To use the alternative configuration, append '--return_config alternative' to the salt
       command.

          salt '*' test.ping --return slack --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}'
       to the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return slack --return_kwargs '{"channel": "#random"}'

       salt.returners.slack_returner.returner(ret)
              Send a Slack message with the data

   salt.returners.sms_return
       Return data by SMS.

       New in version 2015.5.0.

       maintainer
              Damian Myerscough

       maturity
              new

       depends
              twilio

       platform
              all

       To enable this returner the minion will need the python twilio library installed  and  the
       following values configured in the minion or master config:

          twilio.sid: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
          twilio.token: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
          twilio.to: '+1415XXXXXXX'
          twilio.from: '+1650XXXXXXX'

       To use the sms returner, append '--return sms' to the salt command.

          salt '*' test.ping --return sms

       salt.returners.sms_return.returner(ret)
              Return a response in an SMS message

   salt.returners.smtp_return
       Return salt data via email

       The  following fields can be set in the minion conf file. Fields are optional unless noted
       otherwise.

       · from (required) The name/address of the email sender.

       · to (required) The names/addresses of the email recipients; comma-delimited. For
         example: you@example.com,someoneelse@example.com.

       · host (required) The SMTP server hostname or address.

       · port The SMTP server port; defaults to 25.

       · username The username used to authenticate to the server. If specified, a password
         is also required. It is recommended but not required to also use TLS with this
         option.

       · password The password used to authenticate to the server.

       · tls Whether to secure the connection using TLS; defaults to False.

       · subject The email subject line.

       · fields Which fields from the returned data to include in the subject line of the
         email; comma-delimited. For example: id,fun. Please note, the subject line is not
         encrypted.

       · gpgowner A user’s ~/.gpg directory. This must contain a gpg public key matching the
         address the mail is sent to. If left unset, no encryption will be used. Requires
         python-gnupg to be installed.

       · template The path to a file to be used as a template for the email body.

       · renderer A Salt renderer, or render-pipe, to use to render the email template.
         Defaults to jinja.

       Below is an example of the above settings in a Salt Minion configuration file:

          smtp.from: me@example.net
          smtp.to: you@example.com
          smtp.host: localhost
          smtp.port: 1025

       Alternative configuration values can be used by prefixing the configuration keys with
       an alternative name. Any values not found in the alternative configuration will be
       pulled from the default location. For example:

          alternative.smtp.username: saltdev
          alternative.smtp.password: saltdev
          alternative.smtp.tls: True

       To use the SMTP returner, append '--return smtp' to the salt command.

          salt '*' test.ping --return smtp

       To use the alternative configuration, append '--return_config alternative' to the salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return smtp --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}'
       to the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return smtp --return_kwargs '{"to": "user@domain.com"}'

       An  easy  way  to  test the SMTP returner is to use the development SMTP server built into
       Python. The command below will start a single-threaded SMTP server that prints  any  email
       it receives to the console.

          python -m smtpd -n -c DebuggingServer localhost:1025

       New in version 2016.11.0.

       It is possible to send emails with selected Salt events by configuring event_return option
       for Salt Master. For example:

          event_return: smtp

          event_return_whitelist:
            - salt/key

          smtp.from: me@example.net
          smtp.to: you@example.com
          smtp.host: localhost
          smtp.subject: 'Salt Master {{act}}ed key from Minion ID: {{id}}'
          smtp.template: /srv/salt/templates/email.j2

       You also need to create an additional file /srv/salt/templates/email.j2 containing the
       email body template:

          act: {{act}}
          id: {{id}}
          result: {{result}}

       This configuration enables the Salt Master to send an email when accepting or rejecting
       minion keys.
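
       The {{act}}, {{id}} and {{result}} placeholders are rendered by Salt’s template
       renderer (jinja by default). As a simplified stand-in for that rendering step (not
       Salt’s own code), the substitution can be sketched in plain Python:

```python
# Simplified stand-in for the jinja rendering Salt performs on the
# email template above; it only handles plain {{name}} placeholders.
template = 'act: {{act}}\nid: {{id}}\nresult: {{result}}\n'

def render_email(template, context):
    """Replace each {{key}} placeholder with its value from context."""
    body = template
    for key, value in context.items():
        body = body.replace('{{%s}}' % key, str(value))
    return body

# Example context for an accepted minion key event (illustrative values).
body = render_email(template, {'act': 'accept', 'id': 'minion1', 'result': True})
```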

       salt.returners.smtp_return.event_return(events)
              Return event data via SMTP

       salt.returners.smtp_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.smtp_return.returner(ret)
              Send an email with the data

   salt.returners.splunk module
       Send json response data to Splunk via the HTTP Event Collector. Requires the following
       config values to be specified in config or pillar:

          splunk_http_forwarder:
            token: <splunk_http_forwarder_token>
            indexer: <hostname/IP of Splunk indexer>
            sourcetype: <Destination sourcetype for data>
            index: <Destination index for data>
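
       The HTTP Event Collector accepts JSON events over HTTPS, authenticated with the token
       above. A minimal sketch of building such a request from the documented config values
       (the helper name and payload layout are illustrative, and no network call is made here):

```python
import json

def build_hec_request(config, event):
    """Build the URL, headers and JSON body for a Splunk HTTP Event
    Collector call. Sketch only: the http_event_collector class below
    also handles batching, SSL verification and the port setting."""
    url = 'https://%s:8088/services/collector/event' % config['indexer']
    headers = {'Authorization': 'Splunk %s' % config['token']}
    payload = {
        'event': event,
        'sourcetype': config['sourcetype'],
        'index': config['index'],
    }
    return url, headers, json.dumps(payload)

config = {'token': '00000000-0000-0000-0000-000000000000',
          'indexer': 'splunk.example.com',
          'sourcetype': 'salt:return',
          'index': 'salt'}
url, headers, body = build_hec_request(config, {'fun': 'test.ping'})
```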

       Run a test by using salt-call test.ping --return splunk

       Written by Scott Pack (github.com/scottjpack)

       class    salt.returners.splunk.http_event_collector(token,   http_event_server,   host='',
       http_event_port='8088', http_event_server_ssl=True, max_bytes=100000)

              batchEvent(payload, eventtime='')

              flushBatch()

              sendEvent(payload, eventtime='')

       salt.returners.splunk.returner(ret)
              Send a message to Splunk via the HTTP Event Collector

   salt.returners.sqlite3
       Insert minion return data into a sqlite3 database

       maintainer
              Mickey Malone <mickey.malone@gmail.com>

       maturity
              New

       depends
              None

       platform
              All

       Sqlite3 is a serverless database that lives in a single file.  In order to use this
       returner the database file must exist, have the appropriate schema defined, and be
       accessible to the user the minion process runs as. This returner requires the following
       values configured in the master or minion config:

          sqlite3.database: /usr/lib/salt/salt.db
          sqlite3.timeout: 5.0

       Alternative configuration values can be used by prefixing the configuration keys with
       an alternative name. Any values not found in the alternative configuration will be
       pulled from the default location:

          alternative.sqlite3.database: /usr/lib/salt/salt.db
          alternative.sqlite3.timeout: 5.0

       Use the following commands to create the sqlite3 database and tables:

          sqlite3 /usr/lib/salt/salt.db << EOF
          --
          -- Table structure for table 'jids'
          --

          CREATE TABLE jids (
            jid TEXT PRIMARY KEY,
            load TEXT NOT NULL
            );

          --
          -- Table structure for table 'salt_returns'
          --

          CREATE TABLE salt_returns (
            fun TEXT KEY,
            jid TEXT KEY,
            id TEXT KEY,
            fun_args TEXT,
            date TEXT NOT NULL,
            full_ret TEXT NOT NULL,
            success TEXT NOT NULL
            );
          EOF
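
       The same schema can be exercised with Python’s built-in sqlite3 module; the sample row
       below is illustrative and not the exact serialization the returner writes:

```python
import json
import sqlite3

# Create the documented schema in an in-memory database.
conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE jids (jid TEXT PRIMARY KEY, load TEXT NOT NULL);
    CREATE TABLE salt_returns (
      fun TEXT KEY, jid TEXT KEY, id TEXT KEY, fun_args TEXT,
      date TEXT NOT NULL, full_ret TEXT NOT NULL, success TEXT NOT NULL);
''')

# Insert an illustrative return row.
ret = {'fun': 'test.ping', 'jid': '20180207120000000000',
       'id': 'minion1', 'return': True}
conn.execute(
    'INSERT INTO salt_returns '
    '(fun, jid, id, fun_args, date, full_ret, success) '
    'VALUES (?, ?, ?, ?, ?, ?, ?)',
    (ret['fun'], ret['jid'], ret['id'], '[]',
     '2018-02-07 12:00:00', json.dumps(ret), 'True'))

row = conn.execute('SELECT fun, id, success FROM salt_returns '
                   'WHERE jid = ?', (ret['jid'],)).fetchone()
```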

       To use the sqlite returner, append '--return sqlite3' to the salt command.

          salt '*' test.ping --return sqlite3

       To use the alternative configuration, append '--return_config alternative' to the salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return sqlite3 --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}'
       to the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return sqlite3 --return_kwargs '{"db": "/var/lib/salt/another-salt.db"}'

       salt.returners.sqlite3_return.get_fun(fun)
              Return a dict of the last function called for all minions

       salt.returners.sqlite3_return.get_jid(jid)
              Return the information returned from a specified jid

       salt.returners.sqlite3_return.get_jids()
              Return a list of all job ids

       salt.returners.sqlite3_return.get_load(jid)
              Return the load from a specified jid

       salt.returners.sqlite3_return.get_minions()
              Return a list of minions

       salt.returners.sqlite3_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.sqlite3_return.returner(ret)
              Insert minion return data into the sqlite3 database

       salt.returners.sqlite3_return.save_load(jid, load, minions=None)
              Save the load to the specified jid

   salt.returners.syslog_return
       Return data to the host operating system’s syslog facility

       To use the syslog returner, append '--return syslog' to the salt command.

          salt '*' test.ping --return syslog

       The following fields can be set in the minion conf file:

          syslog.level (optional, Default: LOG_INFO)
          syslog.facility (optional, Default: LOG_USER)
          syslog.tag (optional, Default: salt-minion)
          syslog.options (list, optional, Default: [])

       Available levels, facilities, and options can be found in the syslog docs for your  python
       version.

       NOTE:
          The  default  tag  comes  from  sys.argv[0] which is usually “salt-minion” but could be
          different based on the specific environment.

       Configuration example:

          syslog.level: 'LOG_ERR'
          syslog.facility: 'LOG_DAEMON'
          syslog.tag: 'mysalt'
          syslog.options:
            - LOG_PID

       Of course you can also nest the options:

          syslog:
            level: 'LOG_ERR'
            facility: 'LOG_DAEMON'
            tag: 'mysalt'
            options:
              - LOG_PID

       Alternative configuration values can be used by prefixing the configuration keys with
       an alternative name. Any values not found in the alternative configuration will be
       pulled from the default location:

          alternative.syslog.level: 'LOG_WARN'
          alternative.syslog.facility: 'LOG_NEWS'

       To  use  the  alternative  configuration,  append  --return_config alternative to the salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return syslog --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}'
       to the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return syslog --return_kwargs '{"level": "LOG_DEBUG"}'

       NOTE:
          Syslog  server  implementations  may have limits on the maximum record size received by
          the client. This may lead to job return data being truncated  in  the  syslog  server’s
          logs.  For  example, for rsyslog on RHEL-based systems, the default maximum record size
          is approximately 2KB (which return data can easily exceed).  This  is  configurable  in
           rsyslog.conf via the $MaxMessageSize config parameter. Please consult your syslog
           implementation’s documentation to determine how to adjust this limit.

       salt.returners.syslog_return.prep_jid(nocache=False, passed_jid=None)
              Do any work necessary to prepare a JID, including sending a custom id

       salt.returners.syslog_return.returner(ret)
              Return data to the local syslog

   salt.returners.telegram_return
       Return salt data via Telegram.

       The following fields can be set in the minion conf file:

          telegram.chat_id (required)
          telegram.token (required)

       Telegram settings may also be configured as:

          telegram:
            chat_id: 000000000
            token: 000000000:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

       To use the Telegram returner, append '--return telegram' to the salt command.

          salt '*' test.ping --return telegram

       salt.returners.telegram_return.returner(ret)
              Send a Telegram message with the data.

              Parameters
                     ret – The data to be sent.

               Returns
                      A boolean indicating whether the message was sent successfully.
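
       Under the hood the returner talks to the Telegram Bot API sendMessage method. A sketch
       of the request it builds (the helper name and message text are illustrative; no request
       is actually sent here):

```python
# Sketch of the Telegram Bot API sendMessage call the returner relies
# on; build_send_message is a hypothetical helper, not Salt's own code.
def build_send_message(token, chat_id, text):
    url = 'https://api.telegram.org/bot%s/sendMessage' % token
    params = {'chat_id': chat_id, 'text': text}
    return url, params

url, params = build_send_message(
    '000000000:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    '000000000',
    'minion1 returned: True')
```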

   salt.returners.xmpp_return
       Return salt data via xmpp

       depends
              sleekxmpp >= 1.3.1

       The following fields can be set in the minion conf file:

          xmpp.jid (required)
          xmpp.password (required)
          xmpp.recipient (required)
          xmpp.profile (optional)

       Alternative configuration values can be used by prefixing the configuration keys with
       an alternative name. Any values not found in the alternative configuration will be
       pulled from the default location:

          xmpp.jid
          xmpp.password
          xmpp.recipient
          xmpp.profile

       XMPP settings may also be configured as:

          xmpp:
              jid: user@xmpp.domain.com/resource
              password: password
              recipient: user@xmpp.example.com

          alternative.xmpp:
              jid: user@xmpp.domain.com/resource
              password: password
              recipient: someone@xmpp.example.com

          xmpp_profile:
              xmpp.jid: user@xmpp.domain.com/resource
              xmpp.password: password

          xmpp:
              profile: xmpp_profile
              recipient: user@xmpp.example.com

          alternative.xmpp:
              profile: xmpp_profile
              recipient: someone-else@xmpp.example.com

       To use the XMPP returner, append '--return xmpp' to the salt command.

          salt '*' test.ping --return xmpp

       To use the alternative configuration, append '--return_config alternative' to the salt
       command.

       New in version 2015.5.0.

          salt '*' test.ping --return xmpp --return_config alternative

       To override individual configuration items, append --return_kwargs '{"key": "value"}'
       to the salt command.

       New in version 2016.3.0.

          salt '*' test.ping --return xmpp --return_kwargs '{"recipient": "someone-else@xmpp.example.com"}'

       class salt.returners.xmpp_return.SendMsgBot(jid, password, recipient, msg)

              start(event)

       salt.returners.xmpp_return.returner(ret)
              Send an XMPP message with the data

   salt.returners.zabbix_return module
       Return salt data to Zabbix

       The following Zabbix items of type “Zabbix trapper” with “Type of information” set to
       Text are required:

          Key: salt.trap.info
          Key: salt.trap.average
          Key: salt.trap.warning
          Key: salt.trap.high
          Key: salt.trap.disaster

       To use the Zabbix returner, append '--return zabbix' to the salt command. ex:

          salt '*' test.ping --return zabbix

       salt.returners.zabbix_return.returner(ret)

       salt.returners.zabbix_return.zabbix_send(key, host, output)

       salt.returners.zabbix_return.zbx()

   Renderers
       The  Salt  state  system  operates by gathering information from common data types such as
       lists, dictionaries, and strings that would be familiar to any developer.

       SLS files are translated from whatever data templating format they  are  written  in  back
       into Python data types to be consumed by Salt.

       By  default  SLS  files are rendered as Jinja templates and then parsed as YAML documents.
       But since the only thing the state system cares about is raw data, the SLS  files  can  be
       any structured format that can be dreamed up.

       Currently there is support for Jinja + YAML, Mako + YAML, Wempy + YAML, Jinja + JSON,
       Mako + JSON, and Wempy + JSON.

       Renderers can be written to support any template type. This means  that  the  Salt  states
       could  be  managed  by  XML  files,  HTML  files,  Puppet files, or any format that can be
       translated into the Pythonic data structure used by the state system.

   Multiple Renderers
       A default renderer is selected in the master configuration file by providing  a  value  to
       the renderer key.

       When evaluating an SLS, more than one renderer can be used.

       When rendering SLS files, Salt checks for the presence of a Salt-specific shebang line.

       The  shebang  line directly calls the name of the renderer as it is specified within Salt.
       One of the most common reasons to use multiple renderers  is  to  use  the  Python  or  py
       renderer.

       Below, the first line is a shebang that references the py renderer.

          #!py

          def run():
              '''
              Install the python-mako package
              '''
              return {'include': ['python'],
                      'python-mako': {'pkg': ['installed']}}
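
       The py renderer imports the module and uses the dictionary returned by run() as the
       state data, so the example can be checked in plain Python (a sketch; when Salt renders
       the file it also injects helpers such as __salt__ and __grains__ into the module):

```python
# The same run() function as in the SLS above, executed directly to
# show the highstate data structure the py renderer produces.
def run():
    '''
    Install the python-mako package
    '''
    return {'include': ['python'],
            'python-mako': {'pkg': ['installed']}}

data = run()
```

       The returned dict is the same structure the default jinja | yaml renderer would build
       from the equivalent YAML SLS.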

   Composing Renderers
       A renderer can be composed from other renderers by connecting them in a series of pipes
       (|).

       In fact, the default Jinja + YAML renderer is implemented by connecting a YAML renderer to
       a Jinja renderer. Such renderer configuration is specified as: jinja | yaml.

       Other renderer combinations are possible:

           yaml   i.e., just YAML, no templating.

          mako | yaml
                 pass  the  input  to  the  mako renderer, whose output is then fed into the yaml
                 renderer.

          jinja | mako | yaml
                 This one allows you to use both jinja and mako templating syntax  in  the  input
                 and then parse the final rendered output as YAML.

       The following is a contrived example SLS file using the jinja | mako | yaml renderer:

          #!jinja|mako|yaml

          An_Example:
            cmd.run:
              - name: |
                  echo "Using Salt ${grains['saltversion']}" \
                       "from path {{grains['saltpath']}}."
              - cwd: /

          <%doc> ${...} is Mako's notation, and so is this comment. </%doc>
          {#     Similarly, {{...}} is Jinja's notation, and so is this comment. #}

       For backward compatibility, jinja | yaml can also be written as yaml_jinja, and similarly,
       the yaml_mako,  yaml_wempy,  json_jinja,  json_mako,  and  json_wempy  renderers  are  all
       supported.

       Keep  in  mind  that not all renderers can be used alone or with any other renderers.  For
       example, the template renderers shouldn’t be used alone as their outputs are just strings,
       which  still  need  to  be  parsed  by  another  renderer to turn them into highstate data
       structures.

       For example, it doesn’t make sense to specify yaml | jinja because the output of the  YAML
       renderer  is  a  highstate  data structure (a dict in Python), which cannot be used as the
       input to a template renderer. Therefore, when combining renderers, you  should  know  what
       each renderer accepts as input and what it returns as output.
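
       The pipe composition is ordinary function chaining: each stage’s output becomes the
       next stage’s input. A toy sketch, with a string-substitution stage standing in for
       jinja and a JSON-parsing stage standing in for yaml:

```python
import json

# Toy stand-ins for renderers: a template stage produces text, while a
# data stage parses text into a Python structure.
def render_template(text, context):
    for key, value in context.items():
        text = text.replace('{{%s}}' % key, value)
    return text

def render_json(text):
    return json.loads(text)

# 'template | data': the template stage runs first and its string
# output feeds the data stage, mirroring jinja | yaml.
source = '{"pkg_to_install": "{{pkg}}"}'
data = render_json(render_template(source, {'pkg': 'python-mako'}))
```

       Reversing the order fails for the same reason yaml | jinja does: render_json produces a
       dict, which the template stage cannot treat as text.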

   Writing Renderers
       A custom renderer must be a Python module placed in the renderers directory, and the
       module must implement the render function.

       The render function will be passed the path of the SLS file as an argument.

       The purpose of the render function is to parse the passed file and to  return  the  Python
       data structure derived from the file.

       Custom  renderers must be placed in a _renderers directory within the file_roots specified
       by the master config file.

       Custom renderers are distributed when any of the following are run:

       · state.apply

       · saltutil.sync_renderers

       · saltutil.sync_all

       Any custom renderer synced to a minion that has the same name as one of Salt’s default
       renderers will take the place of the default renderer with that name.

   Examples
       The best place to find examples of renderers is in the Salt source code.

       Documentation for renderers included with Salt can be found here:

       https://github.com/saltstack/salt/blob/develop/salt/renderers

       Here is a simple YAML renderer example:

          import salt.utils.yaml
          from salt.ext import six

          def render(yaml_data, saltenv='', sls='', **kws):
              if not isinstance(yaml_data, six.string_types):
                  yaml_data = yaml_data.read()
              data = salt.utils.yaml.safe_load(yaml_data)
              return data if data else {}
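
       Following the same contract, a hypothetical custom renderer that parses JSON instead
       might look like this (shown for Python 3; the input may be either a string or a
       file-like object, as in the YAML example above):

```python
import io
import json

def render(json_data, saltenv='', sls='', **kws):
    """Minimal custom renderer: accept a string or file-like object and
    return the parsed Python data structure (or an empty dict)."""
    if not isinstance(json_data, str):
        json_data = json_data.read()
    data = json.loads(json_data) if json_data.strip() else None
    return data if data else {}

# The render function accepts both input forms.
from_string = render('{"apache": {"pkg": ["installed"]}}')
from_file = render(io.StringIO('{"apache": {"pkg": ["installed"]}}'))
```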

   Full List of Renderers
   renderer modules
                             ┌──────────┬──────────────────────────────────┐
                             │cheetah   │ Cheetah Renderer for Salt        │
                             ├──────────┼──────────────────────────────────┤
                             │dson      │ DSON Renderer for Salt           │
                             ├──────────┼──────────────────────────────────┤
                             │genshi    │ Genshi Renderer for Salt         │
                             ├──────────┼──────────────────────────────────┤
                             │gpg       │ Renderer that will  decrypt  GPG │
                             │          │ ciphers                          │
                             ├──────────┼──────────────────────────────────┤
                             │hjson     │ Hjson    Renderer    for    Salt │
                             │          │ http://laktak.github.io/hjson/   │
                             ├──────────┼──────────────────────────────────┤
                             │jinja     │ Jinja loading utils to enable  a │
                             │          │ more  powerful backend for jinja │
                             │          │ templates                        │
                             ├──────────┼──────────────────────────────────┤
                             │json      │ JSON Renderer for Salt           │
                              ├──────────┼──────────────────────────────────┤
                              │json5     │ JSON5 Renderer for Salt          │
                             ├──────────┼──────────────────────────────────┤
                             │mako      │ Mako Renderer for Salt           │
                             ├──────────┼──────────────────────────────────┤
                             │msgpack   │                                  │
                             ├──────────┼──────────────────────────────────┤
                             │pass      │ Pass Renderer for Salt           │
                             ├──────────┼──────────────────────────────────┤
                             │py        │ Pure python state renderer       │
                             ├──────────┼──────────────────────────────────┤
                             │pydsl     │ A Python-based DSL               │
                             ├──────────┼──────────────────────────────────┤
                             │pyobjects │ Python renderer that includes  a │
                             │          │ Pythonic Object based interface  │
                             ├──────────┼──────────────────────────────────┤
                             │stateconf │ A flexible renderer that takes a │
                             │          │ templating  engine  and  a  data │
                             │          │ format                           │
                             ├──────────┼──────────────────────────────────┤
                             │wempy     │                                  │
                             ├──────────┼──────────────────────────────────┤
                             │yaml      │ YAML Renderer for Salt           │
                             ├──────────┼──────────────────────────────────┤
                             │yamlex    │                                  │
                             └──────────┴──────────────────────────────────┘

   salt.renderers.cheetah
       Cheetah Renderer for Salt

       salt.renderers.cheetah.render(cheetah_data, saltenv='base', sls='', method='xml', **kws)
              Render a Cheetah template.

              Return type
                     A Python data structure

   salt.renderers.dson
       DSON Renderer for Salt

       This  renderer is intended for demonstration purposes. Information on the DSON spec can be
       found here.

       This renderer requires Dogeon (installable via pip)

       salt.renderers.dson.render(dson_input, saltenv='base', sls='', **kwargs)
              Accepts DSON data as a string or as a file object and  runs  it  through  the  JSON
              parser.

              Return type
                     A Python data structure

   salt.renderers.genshi
       Genshi Renderer for Salt

       salt.renderers.genshi.render(genshi_data, saltenv='base', sls='', method='xml', **kws)
              Render a Genshi template. A method should be passed in as part of the kwargs. If
              no method is passed in, xml is assumed. Valid methods are xml, xhtml, html, and
              text.

              Note that the text method will call NewTextTemplate. If oldtext is desired, it must
              be called explicitly

              Return type
                     A Python data structure

   salt.renderers.gpg
       Renderer that will decrypt GPG ciphers

       Any  key  in  the  SLS  file can be a GPG cipher, and this renderer will decrypt it before
       passing it off to Salt. This allows you to safely store secrets in source control, in such
       a  way that only your Salt master can decrypt them and distribute them only to the minions
       that need them.

       The typical use-case would be to use ciphers in your pillar data, and keep a secret key on
       your  master.  You can put the public key in source control so that developers can add new
       secrets quickly and easily.

       This renderer requires the gpg binary. No python libraries are required as of the 2015.8.0
       release.

   Setup
       To set things up, first generate a keypair. On the master, run the following:

          # mkdir -p /etc/salt/gpgkeys
          # chmod 0700 /etc/salt/gpgkeys
          # gpg --gen-key --homedir /etc/salt/gpgkeys

       Do not supply a passphrase for the keypair, and use a name that makes sense for your
       application. Be sure to back up the gpgkeys directory someplace safe!

       NOTE:
          Unfortunately, there are some scenarios - for example, on virtual machines which  don’t
          have  real  hardware - where insufficient entropy causes key generation to be extremely
          slow. In these cases, there are usually means of  increasing  the  system  entropy.  On
          virtualised  Linux  systems,  this  can  often  be achieved by installing the rng-tools
          package.

   Export the Public Key
          # gpg --homedir /etc/salt/gpgkeys --armor --export <KEY-NAME> > exported_pubkey.gpg

   Import the Public Key
       To encrypt secrets, copy the public key to your local machine and run:

          $ gpg --import exported_pubkey.gpg

       To generate a cipher from a secret:

          $ echo -n "supersecret" | gpg --armor --batch --trust-model always --encrypt -r <KEY-name>

       To apply the renderer on a file-by-file basis add the following line to  the  top  of  any
       pillar with gpg data in it:

          #!yaml|gpg

       Now  with  your renderer configured, you can include your ciphers in your pillar data like
       so:

          #!yaml|gpg

          a-secret: |
            -----BEGIN PGP MESSAGE-----
            Version: GnuPG v1

            hQEMAweRHKaPCfNeAQf9GLTN16hCfXAbPwU6BbBK0unOc7i9/etGuVc5CyU9Q6um
            QuetdvQVLFO/HkrC4lgeNQdM6D9E8PKonMlgJPyUvC8ggxhj0/IPFEKmrsnv2k6+
            cnEfmVexS7o/U1VOVjoyUeliMCJlAz/30RXaME49Cpi6No2+vKD8a4q4nZN1UZcG
            RhkhC0S22zNxOXQ38TBkmtJcqxnqT6YWKTUsjVubW3bVC+u2HGqJHu79wmwuN8tz
            m4wBkfCAd8Eyo2jEnWQcM4TcXiF01XPL4z4g1/9AAxh+Q4d8RIRP4fbw7ct4nCJv
            Gr9v2DTF7HNigIMl4ivMIn9fp+EZurJNiQskLgNbktJGAeEKYkqX5iCuB1b693hJ
            FKlwHiJt5yA8X2dDtfk8/Ph1Jx2TwGS+lGjlZaNqp3R1xuAZzXzZMLyZDe5+i3RJ
            skqmFTbOiA==
            =Eqsm
            -----END PGP MESSAGE-----

   Encrypted CLI Pillar Data
       New in version 2016.3.0.

       Functions like state.highstate and state.sls allow for pillar data to  be  passed  on  the
       CLI.

          salt myminion state.highstate pillar="{'mypillar': 'foo'}"

       Starting  with the 2016.3.0 release of Salt, it is now possible for this pillar data to be
       GPG-encrypted, and to use the GPG renderer to decrypt it.

   Replacing Newlines
       To pass encrypted pillar data on the CLI, the ciphertext must have its  newlines  replaced
       with  a literal backslash-n (\n), as newlines are not supported within Salt CLI arguments.
       There are a number of ways to do this:

       With awk or Perl:

          # awk
          ciphertext=`echo -n "supersecret" | gpg --armor --batch --trust-model always --encrypt -r user@domain.com | awk '{printf "%s\\n",$0} END {print ""}'`
          # Perl
          ciphertext=`echo -n "supersecret" | gpg --armor --batch --trust-model always --encrypt -r user@domain.com | perl -pe 's/\n/\\n/g'`

       With Python:

          import subprocess

          secret, stderr = subprocess.Popen(
              ['gpg', '--armor', '--batch', '--trust-model', 'always', '--encrypt',
               '-r', 'user@domain.com'],
              stdin=subprocess.PIPE,
              stdout=subprocess.PIPE,
              stderr=subprocess.PIPE).communicate(input='supersecret')

          if secret:
              print(secret.replace('\n', r'\n'))
          else:
              raise ValueError('No ciphertext found: {0}'.format(stderr))

          ciphertext=`python /path/to/script.py`

       The ciphertext can be included in the CLI pillar data like so:

          salt myminion state.sls secretstuff pillar_enc=gpg pillar="{secret_pillar: '$ciphertext'}"

       The pillar_enc=gpg argument tells Salt that there is GPG-encrypted pillar data, so that
       the CLI pillar data is passed through the GPG renderer, which will iterate recursively
       through the CLI pillar dictionary to decrypt any encrypted values.
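
       The recursive walk can be pictured with a short sketch; decrypt_tree and the decrypt
       callable below are hypothetical stand-ins for what the GPG renderer does internally, not
       Salt API:

```python
def decrypt_tree(data, decrypt):
    # Walk dicts and lists, decrypting any string leaf that looks like
    # an ASCII-armored PGP message; everything else passes through.
    if isinstance(data, dict):
        return {k: decrypt_tree(v, decrypt) for k, v in data.items()}
    if isinstance(data, list):
        return [decrypt_tree(v, decrypt) for v in data]
    if isinstance(data, str) and data.startswith('-----BEGIN PGP MESSAGE-----'):
        return decrypt(data)
    return data

# Demo with a fake decryptor standing in for the gpg binary:
fake = lambda ciphertext: 'decrypted'
tree = {'a': '-----BEGIN PGP MESSAGE-----\n...', 'b': ['plain']}
print(decrypt_tree(tree, fake))   # {'a': 'decrypted', 'b': ['plain']}
```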

   Encrypting the Entire CLI Pillar Dictionary
       If several values need to be encrypted, it may be more convenient to  encrypt  the  entire
       CLI pillar dictionary. Again, this can be done in several ways:

       With awk or Perl:

          # awk
          ciphertext=`echo -n "{'secret_a': 'CorrectHorseBatteryStaple', 'secret_b': 'GPG is fun!'}" | gpg --armor --batch --trust-model always --encrypt -r user@domain.com | awk '{printf "%s\\n",$0} END {print ""}'`
          # Perl
          ciphertext=`echo -n "{'secret_a': 'CorrectHorseBatteryStaple', 'secret_b': 'GPG is fun!'}" | gpg --armor --batch --trust-model always --encrypt -r user@domain.com | perl -pe 's/\n/\\n/g'`

       With Python:

          import subprocess

          pillar_data = {'secret_a': 'CorrectHorseBatteryStaple',
                         'secret_b': 'GPG is fun!'}

          secret, stderr = subprocess.Popen(
              ['gpg', '--armor', '--batch', '--trust-model', 'always', '--encrypt',
               '-r', 'user@domain.com'],
              stdin=subprocess.PIPE,
              stdout=subprocess.PIPE,
              stderr=subprocess.PIPE).communicate(input=repr(pillar_data))

          if secret:
              print(secret.replace('\n', r'\n'))
          else:
              raise ValueError('No ciphertext found: {0}'.format(stderr))

          ciphertext=`python /path/to/script.py`

       With the entire pillar dictionary now encrypted, it can be included in the CLI pillar data
       like so:

          salt myminion state.sls secretstuff pillar_enc=gpg pillar="$ciphertext"

       salt.renderers.gpg.render(gpg_data, saltenv='base', sls='', argline='', **kwargs)
              Create a gpg object given a gpg_keydir, and then use it to try to decrypt the  data
              to be rendered.

   salt.renderers.hjson
       Hjson Renderer for Salt http://laktak.github.io/hjson/

       salt.renderers.hjson.render(hjson_data, saltenv='base', sls='', **kws)
              Accepts HJSON as a string or as a file object and runs it through the HJSON parser.

              Return type
                     A Python data structure

   salt.renderers.jinja
       Jinja loading utils to enable a more powerful backend for jinja templates

       For Jinja usage information see Understanding Jinja.

       salt.renderers.jinja.render(template_file,     saltenv='base',     sls='',     argline='',
       context=None, tmplpath=None, **kws)
              Render the template_file, passing the functions and grains into the Jinja rendering
              system.

              Return type
                     string

       class salt.utils.jinja.SerializerExtension(environment)
              Yaml and Json manipulation.

              Format filters

              Allows jsonifying or yamlifying any data structure. For example, this dataset:

                 data = {
                     'foo': True,
                     'bar': 42,
                     'baz': [1, 2, 3],
                     'qux': 2.0
                 }

                 yaml = {{ data|yaml }}
                 json = {{ data|json }}
                 python = {{ data|python }}
                 xml  = {{ {'root_node': data}|xml }}

              will be rendered as:

                 yaml = {bar: 42, baz: [1, 2, 3], foo: true, qux: 2.0}
                 json = {"baz": [1, 2, 3], "foo": true, "bar": 42, "qux": 2.0}
                 python = {'bar': 42, 'baz': [1, 2, 3], 'foo': True, 'qux': 2.0}
                 xml = """<?xml version="1.0" ?>
                          <root_node bar="42" foo="True" qux="2.0">
                           <baz>1</baz>
                           <baz>2</baz>
                           <baz>3</baz>
                          </root_node>"""

               The yaml filter takes an optional flow_style parameter to control the
               default_flow_style parameter of the YAML dumper.

                 {{ data|yaml(False) }}

              will be rendered as:

                 bar: 42
                 baz:
                   - 1
                   - 2
                   - 3
                 foo: true
                 qux: 2.0

              Load filters

              Strings and variables can be deserialized with load_yaml  and  load_json  tags  and
              filters. It allows one to manipulate data directly in templates, easily:

                 {%- set yaml_src = "{foo: it works}"|load_yaml %}
                 {%- set json_src = "{'bar': 'for real'}"|load_json %}
                 Dude, {{ yaml_src.foo }} {{ json_src.bar }}!

              will be rendered as:

                 Dude, it works for real!

              Load tags

              Salt implements load_yaml and load_json tags. They work like the import tag, except
              that the document is also deserialized.

              Syntaxes are {% load_yaml as [VARIABLE] %}[YOUR DATA]{% endload %} and {% load_json
              as [VARIABLE] %}[YOUR DATA]{% endload %}

              For example:

                 {% load_yaml as yaml_src %}
                     foo: it works
                 {% endload %}
                 {% load_json as json_src %}
                     {
                         "bar": "for real"
                     }
                 {% endload %}
                 Dude, {{ yaml_src.foo }} {{ json_src.bar }}!

              will be rendered as:

                 Dude, it works for real!

              Import tags

              External files can be imported and made available as a Jinja variable.

                 {% import_yaml "myfile.yml" as myfile %}
                 {% import_json "defaults.json" as defaults %}
                 {% import_text "completeworksofshakespeare.txt" as poems %}

              Catalog

               import_* and load_* tags will automatically expose their target variable to
               import. This makes it possible to build a catalog of data that other templates
               can import.

               For example:

                 # doc1.sls
                 {% load_yaml as var1 %}
                     foo: it works
                 {% endload %}
                 {% load_yaml as var2 %}
                     bar: for real
                 {% endload %}

                 # doc2.sls
                 {% from "doc1.sls" import var1, var2 as local2 %}
                 {{ var1.foo }} {{ local2.bar }}

               Escape Filters

              New in version 2017.7.0.

              Allows escaping of  strings  so  they  can  be  interpreted  literally  by  another
              function.

              For example:

                 regex_escape = {{ 'https://example.com?foo=bar%20baz' | regex_escape }}

              will be rendered as:

                 regex_escape = https\:\/\/example\.com\?foo\=bar\%20baz

               Set Theory Filters

              New in version 2017.7.0.

              Performs set math using Jinja filters.

              For example:

                 unique = {{ ['foo', 'foo', 'bar'] | unique }}

              will be rendered as:

                 unique = ['foo', 'bar']

   salt.renderers.json
       JSON Renderer for Salt

       salt.renderers.json.render(json_data, saltenv='base', sls='', **kws)
              Accepts JSON as a string or as a file object and runs it through the JSON parser.

              Return type
                     A Python data structure

   salt.renderers.json5
       JSON5 Renderer for Salt

       New in version 2016.3.0.

       JSON5 is an unofficial extension to JSON. See http://json5.org/ for more information.

       This renderer requires the json5 python bindings, installable via pip.

       salt.renderers.json5.render(json_data, saltenv='base', sls='', **kws)
              Accepts JSON as a string or as a file object and runs it through the JSON parser.

              Return type
                     A Python data structure

   salt.renderers.mako
       Mako Renderer for Salt

       salt.renderers.mako.render(template_file,     saltenv='base',     sls='',    context=None,
       tmplpath=None, **kws)
              Render the template_file, passing the functions and grains into the Mako  rendering
              system.

              Return type
                     string

   salt.renderers.msgpack
       salt.renderers.msgpack.render(msgpack_data, saltenv='base', sls='', **kws)
              Accepts msgpack data as a string or as a file object and renders it back to a
              Python dict.

              Return type
                     A Python data structure

   salt.renderers.pass module
   Pass Renderer for Salt
       [pass](https://www.passwordstore.org/)

       New in version 2017.7.0.

   Setup
       NOTE:
          <user> needs to be replaced with the user that the salt-master will run as

       1. Have the private gpg key loaded into the user’s gpg keyring. Example:

             load_private_gpg_key:
               cmd.run:
                 - name: gpg --import <location_of_private_gpg_key>
                 - unless: gpg --list-keys '<gpg_name>'

       2. Said private key’s public key should have been used when encrypting pass  entries  that
          are of interest for pillar data.

       3. Fetch and keep local pass git repo up-to-date

             update_pass:
               git.latest:
                 - force_reset: True
                 - name: <git_repo>
                 - target: /<user>/.password-store
                 - identity: <location_of_ssh_private_key>
                 - require:
                   - cmd: load_private_gpg_key

       4. Install pass binary

             pass:
               pkg.installed

       salt.renderers.pass.render(pass_info, saltenv='base', sls='', argline='', **kwargs)
              Fetch secret from pass based on pass_path

   salt.renderers.py
   Pure python state renderer
       To  use  this  renderer,  the  SLS file should contain a function called run which returns
       highstate data.

       The highstate  data  is  a  dictionary  containing  identifiers  as  keys,  and  execution
       dictionaries as values. For example the following state declaration in YAML:

          common_packages:
            pkg.installed:
             - pkgs:
                - curl
                - vim

       translates to:

          {'common_packages': {'pkg.installed': [{'pkgs': ['curl', 'vim']}]}}
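
       So a minimal pure-Python SLS equivalent to the YAML above is just a run function
       returning that dictionary (a sketch of the smallest valid #!py file):

```python
#!py

def run():
    # Return the same highstate structure the YAML renderer would produce.
    return {'common_packages': {'pkg.installed': [{'pkgs': ['curl', 'vim']}]}}
```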

       In  this  module,  a  few  objects  are defined for you, giving access to Salt’s execution
       functions, grains, pillar, etc. They are:

       · __salt__ - Execution functions (i.e.  __salt__['test.echo']('foo'))

       · __grains__ - Grains (i.e. __grains__['os'])

       · __pillar__ - Pillar data (i.e. __pillar__['foo'])

       · __opts__ - Minion configuration options

       · __env__ - The effective salt fileserver environment (i.e. base). Also referred to  as  a
         “saltenv”.  __env__ should not be modified in a pure python SLS file. To use a different
         environment, the environment should be set when executing the state. This can be done in
         a couple different ways:

         · Using  the  saltenv  argument  on  the  salt  CLI (i.e. salt '*' state.sls foo.bar.baz
           saltenv=env_name).

         · By adding a saltenv argument to an individual state within  the  SLS  file.  In  other
           words, adding a line like this to the state’s data structure: {'saltenv': 'env_name'}

       · __sls__  - The SLS path of the file. For example, if the root of the base environment is
         /srv/salt, and the SLS file is /srv/salt/foo/bar/baz.sls, then __sls__ in that file will
         be foo.bar.baz.
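
       The mapping from file path to __sls__ can be sketched as follows (sls_name is a
       hypothetical helper for illustration, not Salt API):

```python
import os

def sls_name(root, path):
    # Strip the environment root and the .sls extension, then
    # turn path separators into dots.
    rel = os.path.relpath(path, root)
    return os.path.splitext(rel)[0].replace(os.sep, '.')

print(sls_name('/srv/salt', '/srv/salt/foo/bar/baz.sls'))   # foo.bar.baz
```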

       When  writing  a  reactor SLS file the global context data (same as context {{ data }} for
       states written with Jinja  +  YAML)  is  available.  The  following  YAML  +  Jinja  state
       declaration:

          {% if data['id'] == 'mysql1' %}
          highstate_run:
            local.state.apply:
              - tgt: mysql1
          {% endif %}

       translates to:

          if data['id'] == 'mysql1':
              return {'highstate_run': {'local.state.apply': [{'tgt': 'mysql1'}]}}

   Full Example
           #!py

           def run():
               config = {}

               if __grains__['os'] == 'Ubuntu':
                   user = 'ubuntu'
                   group = 'ubuntu'
                   home = '/home/{0}'.format(user)
               else:
                   user = 'root'
                   group = 'root'
                   home = '/root/'

               config['s3cmd'] = {
                   'pkg': [
                       'installed',
                       {'name': 's3cmd'},
                   ],
               }

               config[home + '/.s3cfg'] = {
                   'file.managed': [
                       {'source': 'salt://s3cfg/templates/s3cfg'},
                       {'template': 'jinja'},
                       {'user': user},
                       {'group': group},
                       {'mode': 600},
                       {'context': {
                           'aws_key': __pillar__['AWS_ACCESS_KEY_ID'],
                           'aws_secret_key': __pillar__['AWS_SECRET_ACCESS_KEY'],
                           },
                       },
                   ],
               }

               return config

       salt.renderers.py.render(template, saltenv='base', sls='', tmplpath=None, **kws)
              Render the python module’s components

              Return type
                     string

   salt.renderers.pydsl
       A Python-based DSL

       maintainer
              Jack Kuan <kjkuan@gmail.com>

       maturity
              new

       platform
              all

       The  pydsl renderer allows one to author salt formulas (.sls files) in pure Python using a
       DSL that’s easy to write and easy to read. Here’s an example:

          #!pydsl

          apache = state('apache')
          apache.pkg.installed()
          apache.service.running()
          state('/var/www/index.html') \
              .file('managed',
                    source='salt://webserver/index.html') \
              .require(pkg='apache')

       Notice that any Python code is allowed in the file, as it’s really a Python module, so you
       have  the full power of Python at your disposal. In this module, a few objects are defined
       for you, including the usual (with __ added) __salt__ dictionary, __grains__,  __pillar__,
       __opts__, __env__, and __sls__, plus a few more:
          __file__
              local file system path to the sls module.

          __pydsl__
              Salt PyDSL object, useful for configuring DSL behavior per sls rendering.

          include
              Salt PyDSL function for creating include-declaration’s.

          extend
              Salt PyDSL function for creating extend-declaration’s.

          state
              Salt PyDSL function for creating ID-declaration’s.

       A state ID-declaration is created with a state(id) function call. Subsequent state(id)
       calls with the same id return the same object. This singleton access pattern applies to
       all declaration objects created with the DSL.

          state('example')
          assert state('example') is state('example')
          assert state('example').cmd is state('example').cmd
          assert state('example').cmd.running is state('example').cmd.running

       The id argument is optional. If omitted, a UUID will be generated and used as the id.

       state(id)  returns  an  object  under  which  you can create a state-declaration object by
       accessing an attribute named after any state module available in Salt.

          state('example').cmd
          state('example').file
          state('example').pkg
          ...

       Then, a function-declaration object can be created from a state-declaration object by  one
       of the following two ways:

       1. by calling a method named after the state function on the state-declaration object.

          state('example').file.managed(...)

       2. by  directly  calling  the attribute named for the state-declaration, and supplying the
          state function name as the first argument.

          state('example').file('managed', ...)

       With either way of creating a function-declaration object, any  function-arg-declaration’s
       can be passed as keyword arguments to the call. Subsequent calls of a function-declaration
       will update the arg declarations.

          state('example').file('managed', source='salt://webserver/index.html')
          state('example').file.managed(source='salt://webserver/index.html')

       As a shortcut, the special name argument can  also  be  passed  as  the  first  or  second
       positional  argument depending on the first or second way of calling the state-declaration
       object. In the following two examples ls -la is the name argument.

          state('example').cmd.run('ls -la', cwd='/')
          state('example').cmd('run', 'ls -la', cwd='/')

       Finally, a requisite-declaration object with its requisite-reference’s can be  created  by
       invoking   one   of   the   requisite   methods   (see   State  Requisites)  on  either  a
       function-declaration object  or  a  state-declaration  object.   The  return  value  of  a
       requisite  call  is also a function-declaration object, so you can chain several requisite
       calls together.

       Arguments to a requisite call can be a list of state-declaration objects and/or a  set  of
       keyword  arguments whose names are state modules and values are IDs of ID-declaration’s or
       names of name-declaration’s.

          apache2 = state('apache2')
          apache2.pkg.installed()
          state('libapache2-mod-wsgi').pkg.installed()

          # you can call requisites on function declaration
          apache2.service.running() \
                         .require(apache2.pkg,
                                  pkg='libapache2-mod-wsgi') \
                         .watch(file='/etc/apache2/httpd.conf')

          # or you can call requisites on state declaration.
          # this actually creates an anonymous function declaration object
          # to add the requisites.
          apache2.service.require(state('libapache2-mod-wsgi').pkg,
                                  pkg='apache2') \
                         .watch(file='/etc/apache2/httpd.conf')

          # we still need to set the name of the function declaration.
          apache2.service.running()

       include-declaration  objects  can  be   created   with   the   include   function,   while
       extend-declaration  objects  can  be created with the extend function, whose arguments are
       just function-declaration objects.

          include('edit.vim', 'http.server')
          extend(state('apache2').service.watch(file='/etc/httpd/httpd.conf'))

       The include function, by default, causes the included sls file to be rendered as soon as
       the include function is called. It returns a list of rendered module objects; sls files
       not rendered with the pydsl renderer are represented as None. This behavior creates no
       include-declaration’s in the resulting high state data structure.

          import types

          # including multiple sls files returns a list.
          slsmods = include('a-non-pydsl-sls', 'a-pydsl-sls')

          assert slsmods[0] is None
          assert isinstance(slsmods[1], types.ModuleType)

          # including a single sls returns a single object
          mod = include('a-pydsl-sls')

          # myfunc is a function that calls state(...) to create more states.
          mod.myfunc(1, 2, "three")

       Notice  how  you  can define a reusable function in your pydsl sls module and then call it
       via the module returned by include.

       It’s still possible to do late includes by passing the delayed=True  keyword  argument  to
       include.

          include('edit.vim', 'http.server', delayed=True)

       The above will just create an include-declaration in the rendered result, and such a
       call always returns None.

   Special integration with the cmd state
       Because a pydsl sls file is executed as a Python module while it is rendered, PyDSL
       allows you to declare a state that calls a pre-defined Python function when the state is
       executed.

          greeting = "hello world"
          def helper(something, *args, **kws):
              print(greeting)                # hello world
              print(something, args, kws)    # test123 ('a', 'b', 'c') {'x': 1, 'y': 2}

          state().cmd.call(helper, "test123", 'a', 'b', 'c', x=1, y=2)

       The  cmd.call  state function takes care of calling our helper function with the arguments
       we specified in the states, and translates  the  return  value  of  our  function  into  a
       structure expected by the state system.  See salt.states.cmd.call() for more information.

   Implicit ordering of states
       Salt  states are explicitly ordered via requisite-declaration’s.  However, with pydsl it’s
       possible to let the renderer track the order of creation for function-declaration objects,
       and  implicitly  add  require  requisites  for  your  states to enforce the ordering. This
       feature is enabled by setting the ordered option on __pydsl__.

       NOTE:
          this feature is only available if your minions are using Python >= 2.7.

          include('some.sls.file')

          A = state('A').cmd.run(cwd='/var/tmp')
          extend(A)

          __pydsl__.set(ordered=True)

           for i in range(10):
               i = str(i)
               state(i).cmd.run('echo '+i, cwd='/')
          state('1').cmd.run('echo one')
          state('2').cmd.run(name='echo two')

       Notice that the ordered option needs to be set after any extend calls.  This is to prevent
       pydsl from tracking the creation of a state function that’s passed to an extend call.

       The above example should create states 0 through 9 that will output 0, one, two, 3, … 9,
       in that order.

       It’s important to know that pydsl tracks the creations  of  function-declaration  objects,
       and  automatically adds a require requisite to a function-declaration object that requires
       the last function-declaration object created before it in the sls file.

       This means later calls (perhaps to update the function’s  function-arg-declaration)  to  a
       previously created function declaration will not change the order.

   Render time state execution
       When  Salt  processes  a salt formula file, the file is rendered to salt’s high state data
       representation by a renderer before the states can be executed.  In the case of the  pydsl
       renderer, the .sls file is executed as a python module as it is being rendered which makes
       it easy to execute a state at render time.  In pydsl, executing  one  or  more  states  at
       render time can be done by calling a configured ID-declaration object.

          #!pydsl

          s = state() # save for later invocation

          # configure it
          s.cmd.run('echo at render time', cwd='/')
          s.file.managed('target.txt', source='salt://source.txt')

          s() # execute the two states now

       Once an ID-declaration is called at render time, it is detached from the sls module as if
       it had never been defined.

       NOTE:
           If implicit ordering is enabled (i.e., via __pydsl__.set(ordered=True)), then the first
           invocation of an ID-declaration object must be done before a new function-declaration
           is created.

   Integration with the stateconf renderer
       The salt.renderers.stateconf renderer offers a few interesting features that can be
       leveraged by the pydsl renderer. In particular, when used with the pydsl renderer, we are
       interested in stateconf’s sls namespacing feature (via dot-prefixed id declarations), as
       well as its automatic generation of start and goal states.

       Now you can use pydsl with stateconf like this:

          #!pydsl|stateconf -ps

          include('xxx', 'yyy')

          # ensure that states in xxx run BEFORE states in this file.
          extend(state('.start').stateconf.require(stateconf='xxx::goal'))

          # ensure that states in yyy run AFTER states in this file.
          extend(state('.goal').stateconf.require_in(stateconf='yyy::start'))

          __pydsl__.set(ordered=True)

          ...

       -s  enables the generation of a stateconf start state, and -p lets us pipe high state data
       rendered by pydsl to stateconf. This example shows that by require-ing  or  require_in-ing
       the  included  sls’  start  or  goal states, it’s possible to ensure that the included sls
       files can be made to execute before or after a state in the including sls file.

   Importing custom Python modules
       To use a custom Python module inside a PyDSL state, place the module somewhere that it can
       be loaded by the Salt loader, such as _modules in the /srv/salt directory.

       Then, copy it to any minions as necessary by using saltutil.sync_modules.

       To  import into a PyDSL SLS, one must bypass the Python importer and insert it manually by
       getting a reference from Python’s sys.modules dictionary.

       For example:

          #!pydsl|stateconf -ps

          def main():
              my_mod = sys.modules['salt.loaded.ext.module.my_mod']

       salt.renderers.pydsl.render(template,     saltenv='base',      sls='',      tmplpath=None,
       rendered_sls=None, **kws)

   salt.renderers.pyobjects
       Python renderer that includes a Pythonic Object based interface

       maintainer
              Evan Borgstrom <evan@borgstrom.ca>

       Let’s  take  a  look at how you use pyobjects in a state file. Here’s a quick example that
       ensures the /tmp directory is in the correct state.

           #!pyobjects

           File.managed("/tmp", user='root', group='root', mode='1777')

       Nice and Pythonic!

       By using the “shebang” syntax to switch to the pyobjects renderer we  can  now  write  our
       state  data using an object based interface that should feel at home to python developers.
       You can import any module and  do  anything  that  you’d  like  (with  caution,  importing
       sqlalchemy, django or other large frameworks has not been tested yet). Using the pyobjects
       renderer is exactly the same as using the built-in Python renderer with the exception that
       pyobjects provides you with an object based interface for generating state data.

   Creating state data
       Pyobjects takes care of creating an object for each of the available states on the minion.
       Each state is represented by an object that is the CamelCase version  of  its  name  (i.e.
       File, Service, User, etc), and these objects expose all of their available state functions
       (i.e. File.managed, Service.running, etc).

       The name of the state is split based upon underscores (_), then each part  is  capitalized
       and finally the parts are joined back together.

       Some examples:

       · postgres_user becomes PostgresUser

       · ssh_known_hosts becomes SshKnownHosts
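       The naming rule above can be sketched as a one-line conversion (a minimal illustration,
       not pyobjects’ actual code):

```python
def state_class_name(state_mod):
    """Convert a state module name to its pyobjects class name:
    split on underscores, capitalize each part, join the parts."""
    return ''.join(part.capitalize() for part in state_mod.split('_'))

print(state_class_name('postgres_user'))    # PostgresUser
print(state_class_name('ssh_known_hosts'))  # SshKnownHosts
```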

   Context Managers and requisites
       How about something a little more complex. Here we’re going to get into the core of how to
       use pyobjects to write states.

           #!pyobjects

           with Pkg.installed("nginx"):
               Service.running("nginx", enable=True)

               with Service("nginx", "watch_in"):
                   File.managed("/etc/nginx/conf.d/mysite.conf",
                                owner='root', group='root', mode='0444',
                                source='salt://nginx/mysite.conf')

       The objects that are returned from each of the magic method calls are set up to be used as
       Python context managers (with), and when you use them as such, all declarations made
       within the scope will automatically use the enclosing state as a requisite!

       The above could also have been written using direct requisite statements:

           #!pyobjects

           Pkg.installed("nginx")
           Service.running("nginx", enable=True, require=Pkg("nginx"))
           File.managed("/etc/nginx/conf.d/mysite.conf",
                        owner='root', group='root', mode='0444',
                        source='salt://nginx/mysite.conf',
                        watch_in=Service("nginx"))

       You can use the direct requisite statement  for  referencing  states  that  are  generated
       outside of the current file.

           #!pyobjects

           # some-other-package is defined in some other state file
           Pkg.installed("nginx", require=Pkg("some-other-package"))

       The  last  thing  that  direct  requisites  provide  is the ability to select which of the
       SaltStack requisites you want to use (require, require_in, watch, watch_in, use &  use_in)
       when using the requisite as a context manager.

           #!pyobjects

           with Service("my-service", "watch_in"):
               ...

       The  above example would cause all declarations inside the scope of the context manager to
       automatically have their watch_in set to Service("my-service").

   Including and Extending
       To include other states use the include()  function.  It  takes  one  name  per  state  to
       include.

       To extend another state use the extend() function on the name when creating a state.

           #!pyobjects

           include('http', 'ssh')

           Service.running(extend('apache'),
                           watch=[File('/etc/httpd/extra/httpd-vhosts.conf')])

   Importing from other state files
       As with any Python project that grows, you will likely reach a point where you want to
       build reusable pieces in your state tree and share objects between state files. Map Data
       (described below) is a perfect example of this.

       To  facilitate  this  Python’s  import statement has been augmented to allow for a special
       case when working with a Salt state tree. If you specify a Salt URL (salt://...) as the
       target to import from, then the pyobjects renderer will take care of fetching the file
       for you, parsing it with all of the  pyobjects  features  available  and  then  place  the
       requested objects in the global scope of the template being rendered.

       This works for all types of import statements: import X, from X import Y, and from X
       import Y as Z.

           #!pyobjects

           import salt://myfile.sls
           from salt://something/data.sls import Object
           from salt://something/data.sls import Object as Other

       See the Map Data section for a more practical use.

       Caveats:

       · Imported objects are ALWAYS put into the global scope of your  template,  regardless  of
         where your import statement is.

   Salt object
       In  the  spirit  of the object interface for creating state data pyobjects also provides a
       simple object interface to the __salt__ object.

       A function named salt exists in scope for your sls files and will dispatch its  attributes
       to the __salt__ dictionary.

       The following lines are functionally equivalent:

           #!pyobjects

           ret = salt.cmd.run(bar)
           ret = __salt__['cmd.run'](bar)

   Pillar, grain, mine & config data
       Pyobjects  provides  shortcut  functions  for  calling  pillar.get, grains.get, mine.get &
       config.get on the __salt__ object. This helps  maintain  the  readability  of  your  state
       files.

       Each type of data can be accessed by a function of the same name: pillar(), grains(),
       mine() and config().

       The following pairs of lines are functionally equivalent:

           #!pyobjects

           value = pillar('foo:bar:baz', 'qux')
           value = __salt__['pillar.get']('foo:bar:baz', 'qux')

           value = grains('pkg:apache')
           value = __salt__['grains.get']('pkg:apache')

           value = mine('os:Fedora', 'network.interfaces', 'grain')
           value = __salt__['mine.get']('os:Fedora', 'network.interfaces', 'grain')

           value = config('foo:bar:baz', 'qux')
           value = __salt__['config.get']('foo:bar:baz', 'qux')
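
       Under the hood, each of these get functions walks a nested dictionary using the
       colon-delimited key and falls back to the default when a step is missing. A rough,
       self-contained sketch of that traversal (illustrative only, not Salt’s implementation):

```python
def traverse(data, key, default=None, delimiter=':'):
    """Walk a nested dict with a delimited key path, in the spirit of
    pillar.get/config.get; return default if any step is missing."""
    for part in key.split(delimiter):
        if isinstance(data, dict) and part in data:
            data = data[part]
        else:
            return default
    return data

pillar_data = {'foo': {'bar': {'baz': 'quux'}}}
print(traverse(pillar_data, 'foo:bar:baz', 'qux'))  # quux
print(traverse(pillar_data, 'foo:missing', 'qux'))  # qux
```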

   Map Data
       When building complex states or formulas you often need a way of building up a map of data
       based  on grain data. The most common use of this is tracking the package and service name
       differences between distributions.

       To build map data using pyobjects we provide a class named Map that you use to build  your
       own classes with inner classes for each set of values for the different grain matches.

           #!pyobjects

           class Samba(Map):
               merge = 'samba:lookup'
               # NOTE: priority is new to 2017.7.0
               priority = ('os_family', 'os')

               class Ubuntu:
                   __grain__ = 'os'
                   service = 'smbd'

               class Debian:
                   server = 'samba'
                   client = 'samba-client'
                   service = 'samba'

               class RHEL:
                   __match__ = 'RedHat'
                   server = 'samba'
                   client = 'samba'
                   service = 'smb'

       NOTE:
          By  default,  the  os_family grain will be used as the target for matching. This can be
          overridden by specifying a __grain__ attribute.

           If a __match__ attribute is defined for a given class, then that value will be matched
           against the targeted grain; otherwise, the class name’s value will be matched.

          Given the above example, the following is true:

          1. Minions  with  an os_family of Debian will be assigned the attributes defined in the
             Debian class.

          2. Minions with an os grain of Ubuntu will be assigned the attributes  defined  in  the
             Ubuntu class.

          3. Minions with an os_family grain of RedHat will be assigned the attributes defined in
             the RHEL class.

          That said, sometimes a minion may match more than one class. For instance, in the above
          example, Ubuntu minions will match both the Debian and Ubuntu classes, since Ubuntu has
           an os_family grain of Debian and an os grain of Ubuntu. As of the 2017.7.0 release, the
          order  is  dictated  by the order of declaration, with classes defined later overriding
          earlier ones. Additionally, 2017.7.0 adds support for explicitly defining the  ordering
          using an optional attribute called priority.

          Given  the  above  example,  os_family matches will be processed first, with os matches
          processed after. This would have the effect of assigning smbd as the service  attribute
          on  Ubuntu  minions. If the priority item was not defined, or if the order of the items
          in the priority tuple were reversed, Ubuntu minions would have a service  attribute  of
          samba, since os_family matches would have been processed second.

       To  use  this  new  data  you  can  import  it  into  your state file and then access your
       attributes. To access the data in the map you simply access the attribute name on the base
       class  that  is  extending  Map. Assuming the above Map was in the file samba/map.sls, you
       could do the following.

           #!pyobjects

           from salt://samba/map.sls import Samba

           with Pkg.installed("samba", names=[Samba.server, Samba.client]):
               Service.running("samba", name=Samba.service)
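
       The matching and priority behavior described in the note above can be simulated in plain
       Python (a sketch under the stated rules; resolve_map and the tuple layout are
       illustrative, not the Map class’s real implementation):

```python
def resolve_map(classes, grains, priority=('os_family', 'os')):
    """Merge attributes from every matching class, processing grains
    in priority order so later matches override earlier ones.
    Each entry is (grain_to_match, value_to_match, attributes)."""
    result = {}
    for grain in priority:
        for match_grain, match_value, attrs in classes:
            if match_grain == grain and grains.get(grain) == match_value:
                result.update(attrs)
    return result

samba_classes = [
    ('os', 'Ubuntu', {'service': 'smbd'}),
    ('os_family', 'Debian',
     {'server': 'samba', 'client': 'samba-client', 'service': 'samba'}),
    ('os_family', 'RedHat',
     {'server': 'samba', 'client': 'samba', 'service': 'smb'}),
]

ubuntu = {'os_family': 'Debian', 'os': 'Ubuntu'}
print(resolve_map(samba_classes, ubuntu)['service'])  # smbd
```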

       class salt.renderers.pyobjects.PyobjectsModule(name, attrs)
              This provides a wrapper for bare imports.

       salt.renderers.pyobjects.load_states()
              This loads our states into the salt __context__

       salt.renderers.pyobjects.render(template,    saltenv='base',    sls='',    salt_data=True,
       **kwargs)

   salt.renderers.stateconf
       maintainer
              Jack Kuan <kjkuan@gmail.com>

       maturity
              new

       platform
              all

       This  module  provides  a  custom  renderer  that  processes  a salt file with a specified
       templating engine (e.g. Jinja) and a chosen data renderer (e.g. YAML), extracts  arguments
       for any stateconf.set state, and provides the extracted arguments (including Salt-specific
       args,  such  as  require,  etc)  as  template  context.  The  goal  is  to  make   writing
       reusable/configurable/parameterized salt files easier and cleaner.

       To  use  this  renderer,  either set it as the default renderer via the renderer option in
       master/minion’s config, or use the shebang line in each  individual  sls  file,  like  so:
       #!stateconf.  Note,  due to the way this renderer works, it must be specified as the first
       renderer in a render pipeline. That is,  you  cannot  specify  #!mako|yaml|stateconf,  for
       example.  Instead, you specify them as renderer arguments: #!stateconf mako . yaml.

       Here’s a list of features enabled by this renderer.

       · Prefixes  any  state  id (declaration or reference) that starts with a dot (.)  to avoid
         duplicated state ids when the salt file is included by other salt files.

         For example, in the salt://some/file.sls, a state id such as .sls_params will be  turned
         into some.file::sls_params. Example:

            #!stateconf yaml . jinja

            .vim:
              pkg.installed

         Above will be translated into:

            some.file::vim:
              pkg.installed:
                - name: vim

          Notice that if a state under a dot-prefixed state id has no name argument, one will be
          added automatically by using the state id with the leading dot stripped off.
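
          The renaming rule can be expressed compactly (an illustrative sketch, not stateconf’s
          actual code):

```python
def expand_state_id(state_id, sls):
    """Prefix a dot-prefixed state id with its sls namespace:
    '.vim' in salt://some/file.sls becomes 'some.file::vim'."""
    if state_id.startswith('.'):
        return '{}::{}'.format(sls, state_id[1:])
    return state_id

print(expand_state_id('.vim', 'some.file'))  # some.file::vim
print(expand_state_id('vim', 'some.file'))   # vim (unchanged)
```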

         The leading dot trick can be used with extending state ids as well, so you  can  include
         relatively   and   extend   relatively.   For   example,   when  extending  a  state  in
         salt://some/other_file.sls, e.g.:

            #!stateconf yaml . jinja

            include:
              - .file

            extend:
              .file::sls_params:
                stateconf.set:
                  - name1: something

         Above will be pre-processed into:

            include:
              - some.file

            extend:
              some.file::sls_params:
                stateconf.set:
                  - name1: something

       · Adds a sls_dir context variable that expands to the directory containing the rendering
         salt file. So, you can write salt://{{sls_dir}}/... to reference template files used by
         your salt file.

       · Recognizes the special state function, stateconf.set, that configures a default list  of
         named arguments usable within the template context of the salt file. Example:

            #!stateconf yaml . jinja

            .sls_params:
              stateconf.set:
                - name1: value1
                - name2: value2
                - name3:
                  - value1
                  - value2
                  - value3
                - require_in:
                  - cmd: output

            # --- end of state config ---

            .output:
              cmd.run:
                - name: |
                    echo 'name1={{sls_params.name1}}
                          name2={{sls_params.name2}}
                          name3[1]={{sls_params.name3[1]}}
                    '

         This  even  works  with include + extend so that you can override the default configured
         arguments by including the salt file and then extend the stateconf.set states that  come
         from  the  included salt file. (IMPORTANT: Both the included and the extending sls files
         must use the stateconf renderer for this ``extend`` to work!)

          Notice that the end-of-configuration marker (# --- end of state config ---) is needed
          to separate the use of ‘stateconf.set’ from the rest of your salt file. The regex that
          matches such a marker can be configured via the stateconf_end_marker option in your
          master or minion config file.

         Sometimes,  it  is  desirable  to  set  a default argument value that’s based on earlier
         arguments in the same stateconf.set. For example, it may be  tempting  to  do  something
         like this:

            #!stateconf yaml . jinja

            .apache:
              stateconf.set:
                - host: localhost
                - port: 1234
                - url: 'http://{{host}}:{{port}}/'

            # --- end of state config ---

            .test:
              cmd.run:
                - name: echo '{{apache.url}}'
                - cwd: /

         However, this won’t work. It can however be worked around like so:

            #!stateconf yaml . jinja

            .apache:
              stateconf.set:
                - host: localhost
                - port: 1234
            {#  - url: 'http://{{host}}:{{port}}/' #}

            # --- end of state config ---
            # {{ apache.setdefault('url', "http://%(host)s:%(port)s/" % apache) }}

            .test:
              cmd.run:
                - name: echo '{{apache.url}}'
                - cwd: /

       · Adds support for relative include and exclude of .sls files. Example:

            #!stateconf yaml . jinja

            include:
              - .apache
              - .db.mysql
              - ..app.django

            exclude:
              - sls: .users

          If the above is written in a salt file at salt://some/where.sls, then it will include
          salt://some/apache.sls, salt://some/db/mysql.sls and salt://app/django.sls, and exclude
          salt://some/users.sls. It actually does this by rewriting the above include and exclude
         into:

            include:
              - some.apache
              - some.db.mysql
              - app.django

            exclude:
              - sls: some.users
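
          The resolution rule (one leading dot for the current directory, one extra dot per
          parent level) can be sketched as follows (illustrative only, not stateconf’s real
          code):

```python
def resolve_relative(ref, sls):
    """Resolve a dot-relative sls reference against the including sls:
    one leading dot means the same directory, each extra dot goes up
    one more level."""
    if not ref.startswith('.'):
        return ref
    dots = len(ref) - len(ref.lstrip('.'))
    base = sls.split('.')[:-1]  # directory part of the including sls
    if dots > 1:
        base = base[:len(base) - (dots - 1)]
    return '.'.join(base + [ref.lstrip('.')])

print(resolve_relative('.apache', 'some.where'))       # some.apache
print(resolve_relative('.db.mysql', 'some.where'))     # some.db.mysql
print(resolve_relative('..app.django', 'some.where'))  # app.django
```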

       · Optionally (enabled by default, disable via the -G renderer option, e.g. in the  shebang
         line:  #!stateconf -G), generates a stateconf.set goal state (state id named as .goal by
         default, configurable via the master/minion config  option,  stateconf_goal_state)  that
         requires  all  other  states  in  the  salt file. Note, the .goal state id is subject to
         dot-prefix rename rule mentioned earlier.

          Such a goal state is intended to be required by some state in an including salt file.
          For example, in your webapp salt file, if you include an sls file that is supposed to
          set up Tomcat, you might want to make sure that all states in the Tomcat sls file will
          be executed before some state in the webapp sls file.

       · Optionally  (enable  via  the  -o renderer option, e.g. in the shebang line: #!stateconf
         -o), orders the states in a sls file by adding a require requisite to  each  state  such
         that every state requires the state defined just before it. The order of the states here
          is the order in which they are defined in the sls file. (Note: this feature is only
          available if your minions are using Python >= 2.7. For Python 2.6, it should also work
          if you install the ordereddict module from PyPI.)

          By enabling this feature, you are basically agreeing to author your sls files in a way
          that gives up the explicit ordering imposed by the use of require, watch, require_in or
          watch_in requisites, and instead rely on the order in which states are defined in the
          sls files. This may or may not be a better way for you. However, if there are many
          states defined in an sls file, this feature tends to make it easier to see the order in
          which they will be executed.

          You are still allowed to use all the requisites, with a few restrictions. You cannot
          require or watch a state defined after the current state. Similarly, in a state, you
          cannot require_in or watch_in a state defined before it. Breaking either of these
          restrictions will result in a state loop. The renderer checks for such incorrect uses
          when this feature is enabled.

          Additionally, names declarations cannot be used with this feature, because the way they
          are compiled into low states makes it impossible to guarantee the order in which they
          will be executed. This is also checked by the renderer. As a workaround for not being
          able to use names, you can achieve the same effect by generating your states with the
          template engine available within your sls file.

          Finally, with this feature, it becomes possible to easily make an included sls file
          execute all its states after some state (say, with id X) in the including sls file. All
          you have to do is make state X require_in the first state defined in the included sls
          file.

       When writing sls files with this renderer, avoid using, as a state’s id, something that
       could instead be the state’s name argument. That is, avoid writing states like this:

          /path/to/some/file:
            file.managed:
              - source: salt://some/file

          cp /path/to/some/file file2:
            cmd.run:
              - cwd: /
              - require:
                - file: /path/to/some/file

       Instead, define the state id and the name argument separately for each state. The id
       should be something meaningful and easy to reference within a requisite (which is a good
       habit anyway, and such extra indirection also makes the sls file easier to modify later).
       Thus, the above states should be written like this:

          add-some-file:
            file.managed:
              - name: /path/to/some/file
              - source: salt://some/file

          copy-files:
            cmd.run:
              - name: cp /path/to/some/file file2
              - cwd: /
              - require:
                - file: add-some-file

       Moreover, when referencing a state from a requisite, you should reference the state’s id
       plus the state name, rather than the state name plus its name argument. (Yes, in the above
       example, you can actually require the file: /path/to/some/file instead of the file:
       add-some-file.) The reason is that this renderer will rewrite or rename state ids and
       their references for state ids prefixed with a dot (.). So, if you reference a name,
       there is no way to reliably rewrite such a reference.

   salt.renderers.wempy
       salt.renderers.wempy.render(template_file,     saltenv='base',     sls='',     argline='',
       context=None, **kws)
              Render the data passing the functions and grains into the rendering system

              Return type
                     string

   salt.renderers.yaml
   Understanding YAML
       The default renderer for SLS files is the YAML renderer. YAML is a human-readable data
       serialization format with many powerful features. However, Salt uses only a small subset
       of YAML that maps onto very commonly used data structures, like lists and dictionaries.
       It is the job of the YAML renderer to take the YAML data structure and compile it into a
       Python data structure for use by Salt.

       Though YAML syntax may seem daunting and terse at first, there are only three very  simple
       rules to remember when writing YAML for SLS files.

   Rule One: Indentation
       YAML  uses a fixed indentation scheme to represent relationships between data layers. Salt
       requires that the indentation for each level consists of exactly two spaces.  Do  not  use
       tabs.

   Rule Two: Colons
       Python dictionaries are, of course, simply key-value pairs. Users from other languages may
       recognize this data type as hashes or associative arrays.

       Dictionary keys are represented in YAML as strings terminated by a trailing colon. Values
       are represented by a string following the colon, separated by a space:

          my_key: my_value

       In Python, the above maps to:

          {'my_key': 'my_value'}

       Dictionaries can be nested:

          first_level_dict_key:
            second_level_dict_key: value_in_second_level_dict

       And in Python:

           {'first_level_dict_key': {'second_level_dict_key': 'value_in_second_level_dict'}}

   Rule Three: Dashes
       To represent lists of items, a single dash followed by a space is used. Multiple items
       belong to the same list when they share the same level of indentation.

          - list_value_one
          - list_value_two
          - list_value_three

       Lists can be the value of a key-value pair. This is quite common in Salt:

          my_dictionary:
            - list_value_one
            - list_value_two
            - list_value_three
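
       The three rules above can be verified directly with PyYAML (the yaml library Salt itself
       depends on):

```python
import yaml  # PyYAML, a dependency of Salt

doc = """
my_dictionary:
  - list_value_one
  - list_value_two
  - list_value_three
"""
print(yaml.safe_load(doc))
# {'my_dictionary': ['list_value_one', 'list_value_two', 'list_value_three']}
```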

   Reference
       YAML Renderer for Salt

       For YAML usage information see Understanding YAML.

       salt.renderers.yaml.get_yaml_loader(argline)
              Return the ordered dict yaml loader

       salt.renderers.yaml.render(yaml_data, saltenv='base', sls='', argline='', **kws)
              Accepts YAML as a string or as a file object and runs it through the YAML parser.

              Return type
                     A Python data structure

   salt.renderers.yamlex
       The YAMLEX renderer is a replacement for the YAML renderer. It’s 100% YAML with a pinch
       of Salt magic:

       · All mappings are automatically OrderedDict

       · All strings are automatically str objects

       · data aggregation with the !aggregate yaml tag, based on the salt.utils.aggregation
         module.

       · data aggregation over documents for pillar

       Aggregation is instructed with the !aggregate and !reset tags:

          #!yamlex
          foo: !aggregate first
          foo: !aggregate second
          bar: !aggregate {first: foo}
          bar: !aggregate {second: bar}
          baz: !aggregate 42
          qux: !aggregate default
          !reset qux: !aggregate my custom data

       is roughly equivalent to

          foo: [first, second]
          bar: {first: foo, second: bar}
          baz: [42]
          qux: [my custom data]
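
       The combining behavior can be approximated in plain Python (a rough sketch of the
       semantics shown above, not the salt.utils.aggregation implementation):

```python
def aggregate(existing, new):
    """Roughly how !aggregate combines values for a repeated key:
    two mappings merge; anything else accumulates into a list."""
    if isinstance(existing, dict) and isinstance(new, dict):
        merged = dict(existing)
        merged.update(new)
        return merged
    as_list = existing if isinstance(existing, list) else [existing]
    return as_list + (new if isinstance(new, list) else [new])

print(aggregate('first', 'second'))                    # ['first', 'second']
print(aggregate({'first': 'foo'}, {'second': 'bar'}))  # {'first': 'foo', 'second': 'bar'}
```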

   Reference
       salt.renderers.yamlex.render(sls_data, saltenv='base', sls='', **kws)
              Accepts  YAML_EX  as  a  string or as a file object and runs it through the YAML_EX
              parser.

              Return type
                     A Python data structure

USING SALT

       This section describes the fundamental components and concepts that you need to understand
       to use Salt.

   Grains
       Salt  comes  with an interface to derive information about the underlying system.  This is
       called the grains interface, because it presents salt with grains of  information.  Grains
       are  collected for the operating system, domain name, IP address, kernel, OS type, memory,
       and many other system properties.

       The grains interface is made available to Salt modules and components so  that  the  right
       salt minion commands are automatically available on the right systems.

       Grain  data  is  relatively  static, though if system information changes (for example, if
       network settings are changed), or if a new value is assigned to a custom grain, grain data
       is refreshed.

       NOTE:
           Grains resolve to lowercase letters. For example, FOO and foo target the same grain.

   Listing Grains
       Available grains can be listed by using the ‘grains.ls’ module:

          salt '*' grains.ls

       Grains data can be listed by using the ‘grains.items’ module:

          salt '*' grains.items

   Grains in the Minion Config
       Grains can also be statically assigned within the minion configuration file.  Just add the
       option grains and pass options to it:

          grains:
            roles:
              - webserver
              - memcache
            deployment: datacenter4
            cabinet: 13
            cab_u: 14-15

       Then status data specific to your servers can be retrieved via Salt, or used inside of the
       State system for matching. In the case of the example above, it also makes it possible to
       target minions simply based on specific data about your deployment.

   Grains in /etc/salt/grains
       If you do not want to place your custom static grains in the minion config file,  you  can
       also put them in /etc/salt/grains on the minion. They are configured in the same way as in
       the above example, only without a top-level grains: key:

          roles:
            - webserver
            - memcache
          deployment: datacenter4
          cabinet: 13
          cab_u: 14-15

       NOTE:
          Grains in /etc/salt/grains are ignored if you specify the same  grains  in  the  minion
          config.

       NOTE:
          Grains  are  static,  and  since  they  are  not often changed, they will need a grains
          refresh  when  they  are  updated.  You  can  do   this   by   calling:   salt   minion
          saltutil.refresh_modules

       NOTE:
          You  can  equally  configure static grains for Proxy Minions.  As multiple Proxy Minion
          processes can run on the same machine, you need to index the files using the Minion ID,
          under  /etc/salt/proxy.d/<minion  ID>/grains.   For  example,  the grains for the Proxy
          Minion router1 can be defined under /etc/salt/proxy.d/router1/grains, while the  grains
          for the Proxy Minion switch7 can be put in /etc/salt/proxy.d/switch7/grains.

   Matching Grains in the Top File
       With  correctly  configured  grains  on  the Minion, the top file used in Pillar or during
       Highstate can be made very efficient. For example, consider the following configuration:

          'roles:webserver':
            - match: grain
            - state0

          'roles:memcache':
            - match: grain
            - state1
            - state2

       For this example to work, you would need to have defined the roles grain for the  minions
       you wish to match.

   Writing Grains
       The grains are derived by executing all of the “public” functions (i.e. those which do not
       begin with an underscore) found in the modules located in Salt’s core grains code,
       followed by those in any custom grains modules. The functions in a grains module must
       return a Python dictionary, where the dictionary keys are the names of grains, and each
       key’s value is the value of that grain.

       Custom  grains  modules should be placed in a subdirectory named _grains located under the
       file_roots  specified  by  the  master  config   file.   The   default   path   would   be
       /srv/salt/_grains.  Custom  grains  modules  will  be  distributed  to  the  minions  when
       state.highstate is run, or by  executing  the  saltutil.sync_grains  or  saltutil.sync_all
       functions.

       Grains  modules  are easy to write, and (as noted above) only need to return a dictionary.
       For example:

          def yourfunction():
              # initialize a grains dictionary
              grains = {}
              # Some code for logic that sets grains like
              grains['yourcustomgrain'] = True
              grains['anothergrain'] = 'somevalue'
              return grains

       The name of the function does not matter and will not factor into the grains data at  all;
       only the keys/values returned become part of the grains.
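       As a sketch of the above, a custom grains module can derive grain values using only the
       Python standard library. The module path and function name here (hostinfo.py,
       host_grains) are hypothetical, not part of Salt:

```python
# _grains/hostinfo.py -- hypothetical custom grains module (a sketch)
import platform


def host_grains():
    # The function name is arbitrary; only the returned keys/values
    # become grains data.
    grains = {}
    grains['python_impl'] = platform.python_implementation()
    grains['machine_arch'] = platform.machine()
    return grains
```

       Once synced to a minion, the returned keys would appear alongside the core grains.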

   When to Use a Custom Grain
       Before  adding  new grains, consider what the data is and remember that grains should (for
       the most part) be static data.

       If the data is something that is likely to change, consider using Pillar or  an  execution
       module  instead.  If  it’s  a  simple  set  of key/value pairs, pillar is a good match. If
       compiling the information  requires  that  system  commands  be  run,  then  putting  this
       information in an execution module is likely a better idea.

       Good  candidates  for grains are data that is useful for targeting minions in the top file
       or the Salt CLI. The name and data structure of the grain should be  designed  to  support
       many  platforms,  operating  systems  or  applications.  Also,  keep  in  mind  that Jinja
       templating in Salt supports referencing pillar data as well  as  invoking  functions  from
       execution  modules, so there’s no need to place information in grains to make it available
       to Jinja templates. For example:

          ...
          ...
          {{ salt['module.function_name']('argument_1', 'argument_2') }}
          {{ pillar['my_pillar_key'] }}
          ...
          ...

       WARNING:
          Custom grains will not be available in the top file until after the first highstate. To
          make  custom  grains  available on a minion’s first highstate, it is recommended to use
          this example to ensure that the custom grains are synced when the minion starts.

   Loading Custom Grains
       If you have multiple functions specifying grains that are called from a main function,  be
       sure to prepend grain function names with an underscore. This prevents Salt from including
       the loaded grains from the grain functions in the final grain data structure. For example,
       consider this custom grain file:

          #!/usr/bin/env python
          def _my_custom_grain():
              my_grain = {'foo': 'bar', 'hello': 'world'}
              return my_grain

          def main():
              # initialize a grains dictionary
              grains = {}
              grains['my_grains'] = _my_custom_grain()
              return grains

       The output of this example renders like so:

          # salt-call --local grains.items
          local:
              ----------
              <Snipped for brevity>
              my_grains:
                  ----------
                  foo:
                      bar
                  hello:
                      world

       However,  if  you  don’t  prepend  the  my_custom_grain  function  with an underscore, the
       function will be rendered twice by Salt in the items output: once for the  my_custom_grain
       call itself, and again when it is called in the main function:

          # salt-call --local grains.items
          local:
          ----------
              <Snipped for brevity>
              foo:
                  bar
              <Snipped for brevity>
              hello:
                  world
              <Snipped for brevity>
              my_grains:
                  ----------
                  foo:
                      bar
                  hello:
                      world

   Precedence
       Core  grains  can  be  overridden  by custom grains. As there are several ways of defining
       custom grains, there is an order of precedence which should be kept in mind when  defining
       them. The order of evaluation is as follows:

       1. Core grains.

       2. Custom grains in /etc/salt/grains.

       3. Custom grains in /etc/salt/minion.

       4. Custom grain modules in _grains directory, synced to minions.

       Each successive evaluation overrides the previous ones, so any grains defined by custom
       grains modules synced to minions that have the same name as a core grain will override
       that core grain. Similarly, grains in /etc/salt/minion override both core grains and
       grains in /etc/salt/grains, and grains in _grains will override any grains of the same
       name.
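       The precedence order above can be sketched as successive dictionary updates, where each
       later layer wins for keys of the same name. The grain names and values here are purely
       illustrative:

```python
# Sketch of grain precedence: later layers override earlier ones.
core_grains = {'os': 'Debian', 'datacenter': 'dc1'}   # 1. core grains
etc_salt_grains = {'datacenter': 'dc2'}               # 2. /etc/salt/grains
minion_config_grains = {'datacenter': 'dc3'}          # 3. /etc/salt/minion
custom_module_grains = {'datacenter': 'dc4'}          # 4. synced _grains modules

grains = {}
for layer in (core_grains, etc_salt_grains,
              minion_config_grains, custom_module_grains):
    grains.update(layer)

# grains['datacenter'] is now 'dc4'; grains['os'] is untouched.
```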

   Examples of Grains
       The core module in the grains package is where the main grains  are  loaded  by  the  Salt
       minion and provides the principal example of how to write grains:

       https://github.com/saltstack/salt/blob/develop/salt/grains/core.py

   Syncing Grains
       Syncing grains can be done a number of ways: they are automatically  synced  when
       state.highstate is called, or (as noted above) the grains can be manually synced and
       reloaded by calling the saltutil.sync_grains or saltutil.sync_all functions.

       NOTE:
          When  the  grains_cache  is  set to False, the grains dictionary is built and stored in
          memory on the minion. Every time the minion restarts or saltutil.refresh_grains is run,
          the grain dictionary is rebuilt from scratch.

   Storing Static Data in the Pillar
       Pillar is an interface for Salt designed to offer global values that can be distributed to
       minions. Pillar data is managed in a similar way as the Salt State Tree.

       Pillar was added to Salt in version 0.9.8.

       NOTE:
          Storing sensitive data

          Pillar data is compiled on the master. Additionally, pillar data for a given minion  is
          only  accessible  by  the  minion for which it is targeted in the pillar configuration.
          This makes pillar useful for storing sensitive data specific to a particular minion.

   Declaring the Master Pillar
       The Salt Master server maintains a pillar_roots setup that matches the  structure  of  the
       file_roots  used  in  the  Salt file server. Like file_roots, the pillar_roots option maps
       environments to directories. The pillar data is then mapped to minions based  on  matchers
       in  a  top  file which is laid out in the same way as the state top file. Salt pillars can
       use the same matcher types as the standard top file.

       The pillar_roots option is configured just like file_roots.  For example:

          pillar_roots:
            base:
              - /srv/pillar

       This example configuration declares that the base  environment  will  be  located  in  the
       /srv/pillar directory. It must not be in a subdirectory of the state tree.

       The  top  file  used  matches  the  name of the top file used for States, and has the same
       structure:

       /srv/pillar/top.sls

          base:
            '*':
              - packages

       In the above top file, it is declared that in the base environment, all minions matched by
       the glob will have the pillar data found in the packages pillar available to them.
       Assuming the pillar_roots value of /srv/pillar taken from above, the packages pillar would
       be located at /srv/pillar/packages.sls.

       Any  number  of  matchers  can  be  added to the base environment. For example, here is an
       expanded version of the Pillar top file stated above:

       /srv/pillar/top.sls:

          base:
            '*':
              - packages
            'web*':
              - vim

       In  this  expanded  top  file,  minions  that  match  web*  will  have   access   to   the
       /srv/pillar/packages.sls file, as well as the /srv/pillar/vim.sls file.

       Another  example  shows  how  to use other standard top matching types to deliver specific
       salt pillar data to minions with different properties.

       Here is an example using the grains matcher to target  pillars  to  minions  by  their  os
       grain:

          dev:
            'os:Debian':
              - match: grain
              - servers

       /srv/pillar/packages.sls

          {% if grains['os'] == 'RedHat' %}
          apache: httpd
          git: git
          {% elif grains['os'] == 'Debian' %}
          apache: apache2
          git: git-core
          {% endif %}

          company: Foo Industries

       IMPORTANT:
          See Is Targeting using Grain Data Secure? for important security information.

       The  above pillar sets two key/value pairs. If a minion is running RedHat, then the apache
       key is set to httpd and the git key is set to the value of git. If the minion  is  running
       Debian,  those  values  are changed to apache2 and git-core respectively. All minions that
       have this pillar targeting to them via a top file will have the  key  of  company  with  a
       value of Foo Industries.

       Consequently  this  data  can be used from within modules, renderers, State SLS files, and
       more via the shared pillar dictionary:

          apache:
            pkg.installed:
              - name: {{ pillar['apache'] }}

          git:
            pkg.installed:
              - name: {{ pillar['git'] }}

       Finally, the above states can utilize the values provided to them via Pillar.  All  pillar
       values  targeted  to  a  minion  are available via the ‘pillar’ dictionary. As seen in the
       above example, Jinja substitution can then be utilized to access the keys  and  values  in
       the Pillar dictionary.

       Note that you cannot just list key/value information in top.sls. Instead, target a minion
       to a pillar file and then list the keys and values in the pillar. Here is an  example  top
       file that illustrates this point:

          base:
            '*':
               - common_pillar

       And the actual pillar file at ‘/srv/pillar/common_pillar.sls’:

          foo: bar
          boo: baz

       NOTE:
          When  working  with multiple pillar environments, assuming that each pillar environment
          has its own top file, the jinja placeholder {{ saltenv }} can be used in place  of  the
          environment name:

              {{ saltenv }}:
                '*':
                   - common_pillar

          Yes, this is {{ saltenv }}, and not {{ pillarenv }}. The reason for this is because the
          Pillar top files are parsed using some of the same code which  parses  top  files  when
          running states, so the pillar environment takes the place of {{ saltenv }} in the jinja
          context.

   Dynamic Pillar Environments
       If environment __env__ is  specified  in  pillar_roots,  all  environments  that  are  not
       explicitly specified in pillar_roots will map to the directories from __env__. This allows
       one to use dynamic git branch based environments for  state/pillar  files  with  the  same
       file-based pillar applying to all environments. For example:

          pillar_roots:
            __env__:
              - /srv/pillar

          ext_pillar:
            - git:
              - __env__ https://example.com/git-pillar.git

       New in version 2017.7.5, 2018.3.1.

   Pillar Namespace Flattening
       The separate pillar SLS files all merge down into a single dictionary of key-value pairs.
       When the same key is defined in multiple SLS files, this can result in unexpected behavior
       if care is not taken in how the pillar SLS files are laid out.

       For example, given a top.sls containing the following:

          base:
            '*':
              - packages
              - services

       with packages.sls containing:

          bind: bind9

       and services.sls containing:

          bind: named

       Then  a  request  for  the bind pillar key will only return named. The bind9 value will be
       lost, because services.sls was evaluated later.

       NOTE:
          Pillar files are applied in the order they are  listed  in  the  top  file.   Therefore
          conflicting  keys will be overwritten in a ‘last one wins’ manner!  For example, in the
          above scenario conflicting key values in services  will  overwrite  those  in  packages
          because it’s at the bottom of the list.
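       The ‘last one wins’ behavior for flat (non-dictionary) values can be sketched as plain
       dictionary updates applied in top-file order:

```python
# Sketch of 'last one wins' pillar flattening for flat values.
packages = {'bind': 'bind9'}   # from packages.sls
services = {'bind': 'named'}   # from services.sls

pillar = {}
for sls in (packages, services):   # order as listed in the top file
    pillar.update(sls)

# pillar['bind'] is 'named'; the 'bind9' value from packages.sls is lost.
```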

       It  can  be  better  to  structure  your pillar files with more hierarchy. For example the
       package.sls file could be configured like so:

          packages:
            bind: bind9

       This would make the packages pillar key a nested dictionary containing a bind key.

   Pillar Dictionary Merging
       If the same pillar key is defined in multiple pillar SLS files, and the keys in both files
       refer to nested dictionaries, then the content from these dictionaries will be recursively
       merged.

       For example, keeping the top.sls the same,  assume  the  following  modifications  to  the
       pillar SLS files:

       packages.sls:

          bind:
            package-name: bind9
            version: 9.9.5

       services.sls:

          bind:
            port: 53
            listen-on: any

       The resulting pillar dictionary will be:

          $ salt-call pillar.get bind
          local:
              ----------
              listen-on:
                  any
              package-name:
                  bind9
              port:
                  53
              version:
                  9.9.5

       Since  both pillar SLS files contained a bind key which contained a nested dictionary, the
       pillar dictionary’s bind key contains the combined contents of both SLS files’ bind keys.
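       A minimal sketch of this recursive merge behavior (not Salt’s actual implementation) looks
       like:

```python
def merge_recursive(dest, src):
    # Merge src into dest: nested dicts are combined key by key,
    # while scalar values simply overwrite. A sketch only.
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dest.get(key), dict):
            merge_recursive(dest[key], value)
        else:
            dest[key] = value
    return dest


packages = {'bind': {'package-name': 'bind9', 'version': '9.9.5'}}
services = {'bind': {'port': 53, 'listen-on': 'any'}}

pillar = {}
for sls in (packages, services):
    merge_recursive(pillar, sls)

# pillar['bind'] now holds the combined keys from both SLS files.
```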

   Including Other Pillars
       New in version 0.16.0.

       Pillar SLS files may include other pillar files, similar to State files. Two syntaxes  are
       available for this purpose. The simple form simply includes the additional pillar as if it
       were part of the same file:

          include:
            - users

       The full include form allows two additional  options  –  passing  default  values  to  the
       templating  engine  for the included pillar file as well as an optional key under which to
       nest the results of the included pillar:

          include:
            - users:
                defaults:
                    sudo: ['bob', 'paul']
                key: users

       With this form, the included file (users.sls) will be nested within the ‘users’ key of the
       compiled  pillar.  Additionally, the ‘sudo’ value will be available as a template variable
       to users.sls.

   In-Memory Pillar Data vs. On-Demand Pillar Data
       Since compiling pillar data is computationally expensive, the minion will maintain a  copy
       of the pillar data in memory to avoid needing to ask the master to recompile and send it a
       copy of the pillar data each time pillar data is requested. This in-memory pillar data  is
       what is returned by the pillar.item, pillar.get, and pillar.raw functions.

       Also,  for  those  writing  custom  execution  modules, or contributing to Salt’s existing
       execution modules, the in-memory  pillar  data  is  available  as  the  __pillar__  dunder
       dictionary.

       The  in-memory  pillar  data  is generated on minion start, and can be refreshed using the
       saltutil.refresh_pillar function:

          salt '*' saltutil.refresh_pillar

       This function triggers the minion to asynchronously refresh the in-memory pillar data  and
       will always return None.

       In  contrast  to in-memory pillar data, certain actions trigger pillar data to be compiled
       to ensure that the most up-to-date pillar data is available. These actions include:

       · Running states

       · Running pillar.items

       Performing these actions will not refresh the in-memory pillar data. So, if pillar data is
       modified,  and  then  states  are  run,  the  states will see the updated pillar data, but
       pillar.item, pillar.get, and pillar.raw will not see  this  data  unless  refreshed  using
       saltutil.refresh_pillar.

   How Pillar Environments Are Handled
       When  multiple  pillar  environments are used, the default behavior is for the pillar data
       from all environments to be merged together. The pillar dictionary will therefore  contain
       keys from all configured environments.

       The pillarenv minion config option can be used to force the minion to only consider pillar
       configuration from a single environment. This can be useful in cases where  one  needs  to
       run  states  with  alternate  pillar  data,  either in a testing/QA environment or to test
       changes to the pillar data before pushing them live.

       For example, assume that the following is set in the minion config file:

          pillarenv: base

       This would cause that minion to ignore all other pillar  environments  besides  base  when
       compiling the in-memory pillar data. Then, when running states, the pillarenv CLI argument
       can be used to override the minion’s pillarenv config value:

          salt '*' state.apply mystates pillarenv=testing

       The above command will run the states  with  pillar  data  sourced  exclusively  from  the
       testing environment, without modifying the in-memory pillar data.

       NOTE:
          When running states, the pillarenv CLI option does not require a pillarenv option to be
          set in the minion config file. When pillarenv is left unset,  as  mentioned  above  all
          configured environments will be combined. Running states with pillarenv=testing in this
          case would still restrict the states’ pillar data to just that of  the  testing  pillar
          environment.

       Starting  in  the  2017.7.0  release, it is possible to pin the pillarenv to the effective
       saltenv, using the pillarenv_from_saltenv minion config option. When this is set to  True,
       if  a  specific  saltenv is specified when running states, the pillarenv will be the same.
       This essentially makes the following two commands equivalent:

          salt '*' state.apply mystates saltenv=dev
          salt '*' state.apply mystates saltenv=dev pillarenv=dev

       However, if a pillarenv is specified, it will override this behavior.  So,  the  following
       command will use the qa pillar environment but source the SLS files from the dev saltenv:

          salt '*' state.apply mystates saltenv=dev pillarenv=qa

       So,  if  a  pillarenv  is  set  in  the minion config file, pillarenv_from_saltenv will be
       ignored,   and   passing   a   pillarenv   on   the   CLI   will   temporarily    override
       pillarenv_from_saltenv.
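       The resolution order described above can be sketched as a small function. The name
       effective_pillarenv is hypothetical; this is not Salt’s actual code:

```python
def effective_pillarenv(cli_pillarenv, config_pillarenv,
                        pillarenv_from_saltenv, saltenv):
    # Sketch of the resolution order described above (an assumption
    # based on the documented behavior, not Salt internals).
    if cli_pillarenv:
        return cli_pillarenv            # CLI override wins
    if config_pillarenv:
        return config_pillarenv         # pinned in minion config
    if pillarenv_from_saltenv and saltenv:
        return saltenv                  # follow the effective saltenv
    return None                         # all environments merged
```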

   Viewing Pillar Data
       To  view  pillar  data,  use  the  pillar  execution  module. This module includes several
       functions, each of them with their own use. These functions include:

       · pillar.item - Retrieves the value of one or more keys from the in-memory pillar data.

       · pillar.items - Compiles a fresh pillar dictionary and returns it, leaving the  in-memory
         pillar data untouched. If pillar keys are passed to this function however, this function
         acts like pillar.item and returns their values from the in-memory pillar data.

       · pillar.raw - Like pillar.items, it returns the entire pillar dictionary,  but  from  the
         in-memory pillar data instead of compiling fresh pillar data.

       · pillar.get - Described in detail below.

   The pillar.get Function
       New in version 0.14.0.

       The pillar.get function works much in the same way as the get method in a python dict, but
       with an enhancement: nested dictionaries can be traversed using a colon as a delimiter.

       If a structure like this is in pillar:

          foo:
            bar:
              baz: qux

       Extracting it from the raw pillar in an sls formula or file template is done this way:

          {{ pillar['foo']['bar']['baz'] }}

       Now, with the new pillar.get function the data can be safely gathered and a default can be
       set, allowing the template to fall back if the value is not available:

          {{ salt['pillar.get']('foo:bar:baz', 'qux') }}

       This makes handling nested structures much easier.
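       A minimal sketch of this colon-delimited traversal with a default value (the function name
       pillar_get is hypothetical, not Salt’s implementation):

```python
def pillar_get(data, key, default=None, delimiter=':'):
    # Walk nested dictionaries using a delimited key, returning the
    # default if any path component is missing. A sketch only.
    for part in key.split(delimiter):
        if isinstance(data, dict) and part in data:
            data = data[part]
        else:
            return default
    return data


pillar = {'foo': {'bar': {'baz': 'qux'}}}
```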

       NOTE:
          pillar.get() vs salt['pillar.get']()

          It  should  be  noted that within templating, the pillar variable is just a dictionary.
          This means that calling pillar.get() inside of a template will  just  use  the  default
          dictionary  .get() function which does not include the extra : delimiter functionality.
          It must be called using the above syntax (salt['pillar.get']('foo:bar:baz', 'qux'))  to
          get the salt function, instead of the default dictionary behavior.

   Setting Pillar Data at the Command Line
       Pillar data can be set at the command line like the following example:

          salt '*' state.apply pillar='{"cheese": "spam"}'

       This will add a pillar key of cheese with its value set to spam.

       NOTE:
          Be  aware  that  when  sending  sensitive  data via pillar on the command-line that the
          publication containing that data will be received  by  all  minions  and  will  not  be
          restricted  to  the  targeted  minions.  This  may represent a security concern in some
          cases.

   Pillar Encryption
       Salt’s renderer system can be used to decrypt pillar data. This allows for pillar items to
       be stored in an encrypted state, and decrypted during pillar compilation.

   Encrypted Pillar SLS
       New in version 2017.7.0.

       Consider the following pillar SLS file:

          secrets:
            vault:
              foo: |

                -----BEGIN PGP MESSAGE-----
                hQEMAw2B674HRhwSAQgAhTrN8NizwUv/VunVrqa4/X8t6EUulrnhKcSeb8sZS4th
                W1Qz3K2NjL4lkUHCQHKZVx/VoZY7zsddBIFvvoGGfj8+2wjkEDwFmFjGE4DEsS74
                ZLRFIFJC1iB/O0AiQ+oU745skQkU6OEKxqavmKMrKo3rvJ8ZCXDC470+i2/Hqrp7
                +KWGmaDOO422JaSKRm5D9bQZr9oX7KqnrPG9I1+UbJyQSJdsdtquPWmeIpamEVHb
                VMDNQRjSezZ1yKC4kCWm3YQbBF76qTHzG1VlLF5qOzuGI9VkyvlMaLfMibriqY73
                zBbPzf6Bkp2+Y9qyzuveYMmwS4sEOuZL/PetqisWe9JGAWD/O+slQ2KRu9hNww06
                KMDPJRdyj5bRuBVE4hHkkP23KrYr7SuhW2vpe7O/MvWEJ9uDNegpMLhTWruGngJh
                iFndxegN9w==
                =bAuo
                -----END PGP MESSAGE-----
              bar: this was unencrypted already
              baz: |

                -----BEGIN PGP MESSAGE-----
                hQEMAw2B674HRhwSAQf+Ne+IfsP2IcPDrUWct8sTJrga47jQvlPCmO+7zJjOVcqz
                gLjUKvMajrbI/jorBWxyAbF+5E7WdG9WHHVnuoywsyTB9rbmzuPqYCJCe+ZVyqWf
                9qgJ+oUjcvYIFmH3h7H68ldqbxaAUkAOQbTRHdr253wwaTIC91ZeX0SCj64HfTg7
                Izwk383CRWonEktXJpientApQFSUWNeLUWagEr/YPNFA3vzpPF5/Ia9X8/z/6oO2
                q+D5W5mVsns3i2HHbg2A8Y+pm4TWnH6mTSh/gdxPqssi9qIrzGQ6H1tEoFFOEq1V
                kJBe0izlfudqMq62XswzuRB4CYT5Iqw1c97T+1RqENJCASG0Wz8AGhinTdlU5iQl
                JkLKqBxcBz4L70LYWyHhYwYROJWjHgKAywX5T67ftq0wi8APuZl9olnOkwSK+wrY
                1OZi
                =7epf
                -----END PGP MESSAGE-----
              qux:
                - foo
                - bar
                - |

                  -----BEGIN PGP MESSAGE-----
                  hQEMAw2B674HRhwSAQgAg1YCmokrweoOI1c9HO0BLamWBaFPTMblOaTo0WJLZoTS
                  ksbQ3OJAMkrkn3BnnM/djJc5C7vNs86ZfSJ+pvE8Sp1Rhtuxh25EKMqGOn/SBedI
                  gR6N5vGUNiIpG5Tf3DuYAMNFDUqw8uY0MyDJI+ZW3o3xrMUABzTH0ew+Piz85FDA
                  YrVgwZfqyL+9OQuu6T66jOIdwQNRX2NPFZqvon8liZUPus5VzD8E5cAL9OPxQ3sF
                  f7/zE91YIXUTimrv3L7eCgU1dSxKhhfvA2bEUi+AskMWFXFuETYVrIhFJAKnkFmE
                  uZx+O9R9hADW3hM5hWHKH9/CRtb0/cC84I9oCWIQPdI+AaPtICxtsD2N8Q98hhhd
                  4M7I0sLZhV+4ZJqzpUsOnSpaGyfh1Zy/1d3ijJi99/l+uVHuvmMllsNmgR+ZTj0=
                  =LrCQ
                  -----END PGP MESSAGE-----

       When the pillar data is compiled, the results will be decrypted:

          # salt myminion pillar.items
          myminion:
              ----------
              secrets:
                  ----------
                  vault:
                      ----------
                      bar:
                          this was unencrypted already
                      baz:
                          rosebud
                      foo:
                          supersecret
                      qux:
                          - foo
                          - bar
                          - baz

       Salt  must  be  told  what  portions of the pillar data to decrypt. This is done using the
       decrypt_pillar config option:

          decrypt_pillar:
            - 'secrets:vault': gpg

       The notation used to specify the pillar item(s) to be decrypted is the same as the one
       used in the pillar.get function.

       If a different delimiter is needed, it can be specified using the decrypt_pillar_delimiter
       config option:

          decrypt_pillar:
            - 'secrets|vault': gpg

          decrypt_pillar_delimiter: '|'

       The name of the renderer used to decrypt a given pillar item can be omitted, and if so  it
       will  fall  back to the value specified by the decrypt_pillar_default config option, which
       defaults to gpg.  So, the first example above could be rewritten as:

          decrypt_pillar:
            - 'secrets:vault'

   Encrypted Pillar Data on the CLI
       New in version 2016.3.0.

       The following functions support passing pillar data on the CLI via the pillar argument:

       · pillar.items

       · state.apply

       · state.highstate

       · state.sls

       Triggering decryption of this CLI pillar data can be done in one of two ways:

       1. Using the pillar_enc argument:

             # salt myminion pillar.items pillar_enc=gpg pillar='{foo: "-----BEGIN PGP MESSAGE-----\n\nhQEMAw2B674HRhwSAQf+OvPqEdDoA2fk15I5dYUTDoj1yf/pVolAma6iU4v8Zixn\nRDgWsaAnFz99FEiFACsAGDEFdZaVOxG80T0Lj+PnW4pVy0OXmXHnY2KjV9zx8FLS\nQxfvmhRR4t23WSFybozfMm0lsN8r1vfBBjbK+A72l0oxN78d1rybJ6PWNZiXi+aC\nmqIeunIbAKQ21w/OvZHhxH7cnIiGQIHc7N9nQH7ibyoKQzQMSZeilSMGr2abAHun\nmLzscr4wKMb+81Z0/fdBfP6g3bLWMJga3hSzSldU9ovu7KR8rDJI1qOlENj3Wm8C\nwTpDOB33kWIKMqiAjY3JFtb5MCHrafyggwQL7cX1+tI+AbSO6kZpbcDfzetb77LZ\nxc5NWnnGK4pGoqq4MAmZshw98RpecSHKMosto2gtiuWCuo9Zn5cV/FbjZ9CTWrQ=\n=0hO/\n-----END PGP MESSAGE-----"}'

          The newlines in this example are specified using a literal \n. Newlines can be replaced
          with a literal \n using sed:

             $ echo -n bar | gpg --armor --trust-model always --encrypt -r user@domain.tld | sed ':a;N;$!ba;s/\n/\\n/g'

          NOTE:
             Using  pillar_enc  will  perform  the decryption minion-side, so for this to work it
             will be necessary to set up the keyring in /etc/salt/gpgkeys on the minion  just  as
             one  would typically do on the master. The easiest way to do this is to first export
             the keys from the master:

                 # gpg --homedir /etc/salt/gpgkeys --export-secret-key -a user@domain.tld >/tmp/keypair.gpg

             Then, copy the file to the minion, setup the keyring, and import:

                 # mkdir -p /etc/salt/gpgkeys
                 # chmod 0700 /etc/salt/gpgkeys
                 # gpg --homedir /etc/salt/gpgkeys --list-keys
                 # gpg --homedir /etc/salt/gpgkeys --import --allow-secret-key-import keypair.gpg

              The --list-keys command is run to create a keyring in the newly-created directory.

          Pillar data which is decrypted minion-side will still be securely  transferred  to  the
          master,  since  the  data sent between minion and master is encrypted with the master’s
          public key.

       2. Use the decrypt_pillar option. This is less flexible in that the pillar key  passed  on
          the  CLI  must  be pre-configured on the master, but it doesn’t require a keyring to be
          setup on the minion. One other caveat to this method is that pillar decryption  on  the
          master  happens at the end of pillar compilation, so if the encrypted pillar data being
          passed on the CLI needs  to  be  referenced  by  pillar  or  ext_pillar  during  pillar
          compilation, it must be decrypted minion-side.

   Adding New Renderers for Decryption
       Those  looking  to add new renderers for decryption should look at the gpg renderer for an
       example of how to do so. The function that performs the decryption should be recursive and
       be able to traverse a mutable type such as a dictionary, and modify the values in-place.
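       A minimal sketch of such a recursive, in-place decryption traversal (the function names
       here are hypothetical; see the gpg renderer for the real implementation):

```python
def decrypt_in_place(data, decrypt):
    # Recursively walk dicts and lists, replacing each leaf string with
    # its decrypted value, mutating the structure in-place. A sketch.
    if isinstance(data, dict):
        for key, value in data.items():
            if isinstance(value, (dict, list)):
                decrypt_in_place(value, decrypt)
            elif isinstance(value, str):
                data[key] = decrypt(value)
    elif isinstance(data, list):
        for index, value in enumerate(data):
            if isinstance(value, (dict, list)):
                decrypt_in_place(value, decrypt)
            elif isinstance(value, str):
                data[index] = decrypt(value)
```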

       Once  the  renderer  has been written, decrypt_pillar_renderers should be modified so that
       Salt allows it to be used for decryption.

       If the renderer is being submitted upstream to the Salt project, the  renderer  should  be
       added in salt/renderers/. Additionally, the following should be done:

       · Both  occurrences  of  decrypt_pillar_renderers  in  salt/config/__init__.py  should  be
         updated to include the name of the new renderer so that it is included  in  the  default
         value for this config option.

       · The  documentation  for  the decrypt_pillar_renderers config option in the master config
         file and minion config file should be updated to show the correct new default value.

       · The commented example for the  decrypt_pillar_renderers  config  option  in  the  master
         config template should be updated to show the correct new default value.

   Master Config in Pillar
       For convenience the data stored in the master configuration file can be made available in
       all minions' pillars. This makes global configuration of services and systems very easy
       but may not be desired if sensitive data is stored in the master configuration. This
       option is disabled by default.

       To allow the master config to be added to the pillar, set pillar_opts to True in the
       minion config file:

          pillar_opts: True

   Minion Config in Pillar
       Minion configuration options can be set in pillar. Any option that you want to modify
       should be at the first level of the pillar, in the same way you would set it in the config
       file. For example, to configure the MySQL root password to be used by the MySQL Salt
       execution module, set the following pillar variable:

          mysql.pass: hardtoguesspassword

   Master Provided Pillar Error
       By default if there is an error rendering a pillar,  the  detailed  error  is  hidden  and
       replaced with:

          Rendering SLS 'my.sls' failed. Please see master log for details.

       The detailed error is hidden because it may contain templating data which would give
       that minion information it shouldn't know, like a password!

       To have the master provide the detailed error that could potentially carry protected  data
       set pillar_safe_render_error to False:

          pillar_safe_render_error: False

   Pillar Walkthrough
       NOTE:
          This  walkthrough  assumes  that  the  reader  has  already  completed the initial Salt
          walkthrough.

       Pillars are tree-like structures of data defined on the Salt Master and passed through  to
       minions.  They  allow confidential, targeted data to be securely sent only to the relevant
       minion.

       NOTE:
          Grains and Pillar are sometimes confused; just remember that Grains are data about
          a minion which is stored or generated from the minion. This is why information like
          the OS and CPU type is found in Grains. Pillar is information about a minion or
          many minions, stored or generated on the Salt Master.

       Pillar data is useful for:

       Highly Sensitive Data:
              Information  transferred  via  pillar  is  guaranteed  to  only be presented to the
              minions  that  are  targeted,  making  Pillar  suitable   for   managing   security
              information, such as cryptographic keys and passwords.

       Minion Configuration:
              Minion  modules  such  as the execution modules, states, and returners can often be
              configured via data stored in pillar.

       Variables:
              Variables which need to be assigned to specific minions or groups of minions can be
              defined in pillar and then accessed inside sls formulas and template files.

       Arbitrary Data:
              Pillar  can  contain  any basic data structure in dictionary format, so a key/value
              store can be defined making it easy to iterate  over  a  group  of  values  in  sls
              formulas.

       Pillar is therefore one of the most important systems when using Salt. This walkthrough is
       designed to get a simple Pillar up and running in a few minutes and then to dive into  the
       capabilities of Pillar and where the data is available.

   Setting Up Pillar
       The pillar is already running in Salt by default. To see the minion’s pillar data:

          salt '*' pillar.items

       NOTE:
          Prior to version 0.16.2, this function was named pillar.data. That function name is
          still supported for backwards compatibility.

       By default, the contents of the master configuration file are not loaded into  pillar  for
       all minions. This default is stored in the pillar_opts setting, which defaults to False.

       The  contents  of  the  master  configuration  file can be made available to minion pillar
       files. This makes global configuration of services and systems very easy,  but  note  that
       this  may  not  be  desired  or  appropriate  if  sensitive data is stored in the master’s
       configuration file. To enable the master configuration file to be available to a  minion’s
       pillar files, set pillar_opts to True in the minion configuration file.

       Similar  to  the state tree, the pillar is comprised of sls files and has a top file.  The
       default location for the pillar is in /srv/pillar.

       NOTE:
          The pillar location can be configured via the pillar_roots  option  inside  the  master
          configuration  file.  It must not be in a subdirectory of the state tree or file_roots.
          If the pillar is under file_roots, any pillar targeting can be bypassed by minions.

       To start setting up the pillar, the /srv/pillar directory needs to be present:

          mkdir /srv/pillar

       Now create a simple top file, following the same format as the top file used for states:

       /srv/pillar/top.sls:

          base:
            '*':
              - data

       This top file associates the data.sls file to all minions.  Now  the  /srv/pillar/data.sls
       file needs to be populated:

       /srv/pillar/data.sls:

          info: some data

       To  ensure  that the minions have the new pillar data, issue a command to them asking that
       they fetch their pillars from the master:

          salt '*' saltutil.refresh_pillar

       Now that the minions have the new pillar, it can be retrieved:

          salt '*' pillar.items

       The key info should now appear in the returned pillar data.

   More Complex Data
       Unlike states, pillar files do not need to define formulas.  This  example  sets  up  user
       data with a UID:

       /srv/pillar/users/init.sls:

          users:
            thatch: 1000
            shouse: 1001
            utahdave: 1002
            redbeard: 1003

       NOTE:
          The  same  directory  lookups  that  exist  in  states  exist  in  pillar,  so the file
          users/init.sls can be referenced with users in the top file.

       The top file will need to be updated to include this sls file:

       /srv/pillar/top.sls:

          base:
            '*':
              - data
              - users

       Now the data will be available to the minions. To use the pillar data in a state, you  can
       use Jinja:

       /srv/salt/users/init.sls

          {% for user, uid in pillar.get('users', {}).items() %}
          {{user}}:
            user.present:
              - uid: {{uid}}
          {% endfor %}

       This approach allows users to be safely defined in a pillar, with the user data then
       applied in an sls file.
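
       The Jinja loop above expands into one user.present state per pillar entry. As an
       illustration only (plain Python operating on the example pillar data, not Salt itself),
       the expansion works like this:

```python
# Example pillar data from users/init.sls.
users = {"thatch": 1000, "shouse": 1001, "utahdave": 1002, "redbeard": 1003}

# Build the text the Jinja loop would render: one state stanza per user.
rendered = "".join(
    f"{user}:\n  user.present:\n    - uid: {uid}\n"
    for user, uid in users.items()
)
print(rendered)
```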

   Parameterizing States With Pillar
       Pillar data can be accessed in state files to customise  behavior  for  each  minion.  All
       pillar  (and  grain)  data  applicable  to each minion is substituted into the state files
       through templating before being run. Typical uses include setting directories  appropriate
       for the minion and skipping states that don’t apply.

       A  simple  example  is  to  set up a mapping of package names in pillar for separate Linux
       distributions:

       /srv/pillar/pkg/init.sls:

          pkgs:
            {% if grains['os_family'] == 'RedHat' %}
            apache: httpd
            vim: vim-enhanced
            {% elif grains['os_family'] == 'Debian' %}
            apache: apache2
            vim: vim
            {% elif grains['os'] == 'Arch' %}
            apache: apache
            vim: vim
            {% endif %}
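
       The Jinja conditionals above amount to a lookup table keyed on the OS family. A rough
       Python sketch of the same logic (the mapping mirrors the example values and is not
       exhaustive):

```python
# Hypothetical mapping mirroring the Jinja conditionals in pkg/init.sls.
PKG_MAP = {
    "RedHat": {"apache": "httpd", "vim": "vim-enhanced"},
    "Debian": {"apache": "apache2", "vim": "vim"},
    "Arch":   {"apache": "apache", "vim": "vim"},
}

def pkgs_for(os_family):
    """Return the package-name map for an OS family (empty if unknown)."""
    return PKG_MAP.get(os_family, {})

print(pkgs_for("RedHat")["apache"])  # httpd
```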

       The new pkg sls needs to be added to the top file:

       /srv/pillar/top.sls:

          base:
            '*':
              - data
              - users
              - pkg

       Now the minions will automatically map values based on their respective operating
       systems inside of the pillar, so sls files can be safely parameterized:

       /srv/salt/apache/init.sls:

          apache:
            pkg.installed:
              - name: {{ pillar['pkgs']['apache'] }}

       Or, if no pillar is available a default can be set as well:

       NOTE:
          The function pillar.get used in this example was added to Salt in version 0.14.0

       /srv/salt/apache/init.sls:

          apache:
            pkg.installed:
              - name: {{ salt['pillar.get']('pkgs:apache', 'httpd') }}

       In  the  above  example,  if  the  pillar value pillar['pkgs']['apache'] is not set in the
       minion’s pillar, then the default of httpd will be used.

       NOTE:
          Under the hood, pillar is just a Python dict, so Python dict methods such  as  get  and
          items can be used.
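
       Since pillar is just a dict, the colon-delimited lookup that pillar.get performs can
       be sketched in a few lines of Python. This illustrates the semantics only; it is not
       Salt's actual implementation:

```python
def pillar_get(pillar, key, default=None, delimiter=":"):
    """Walk a nested dict using a delimited key, as salt['pillar.get'] does."""
    node = pillar
    for part in key.split(delimiter):
        if isinstance(node, dict) and part in node:
            node = node[part]
        else:
            return default
    return node

pillar = {"pkgs": {"apache": "apache2", "vim": "vim"}}
print(pillar_get(pillar, "pkgs:apache", "httpd"))  # apache2
print(pillar_get(pillar, "pkgs:nginx", "nginx"))   # nginx (the default)
```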

   Pillar Makes Simple States Grow Easily
       One  of  the  design  goals of pillar is to make simple sls formulas easily grow into more
       flexible formulas without refactoring or complicating the states.

       A simple formula:

       /srv/salt/edit/vim.sls:

          vim:
            pkg.installed: []

          /etc/vimrc:
            file.managed:
              - source: salt://edit/vimrc
              - mode: 644
              - user: root
              - group: root
              - require:
                - pkg: vim

       Can be easily transformed into a powerful, parameterized formula:

       /srv/salt/edit/vim.sls:

          vim:
            pkg.installed:
              - name: {{ pillar['pkgs']['vim'] }}

          /etc/vimrc:
            file.managed:
              - source: {{ pillar['vimrc'] }}
              - mode: 644
              - user: root
              - group: root
              - require:
                - pkg: vim

       Where the vimrc source location can now be changed via pillar:

       /srv/pillar/edit/vim.sls:

          {% if grains['id'].startswith('dev') %}
          vimrc: salt://edit/dev_vimrc
          {% elif grains['id'].startswith('qa') %}
          vimrc: salt://edit/qa_vimrc
          {% else %}
          vimrc: salt://edit/vimrc
          {% endif %}

       This ensures that the right vimrc is sent out to the correct minions.

       The pillar top file must include a reference to the new sls pillar file:

       /srv/pillar/top.sls:

          base:
            '*':
              - pkg
              - edit.vim

   Setting Pillar Data on the Command Line
       Pillar data can be set on the command line when running state.apply like so:

          salt '*' state.apply pillar='{"foo": "bar"}'
          salt '*' state.apply my_sls_file pillar='{"hello": "world"}'

       Nested pillar values can also be set via the command line:

          salt '*' state.sls my_sls_file pillar='{"foo": {"bar": "baz"}}'

       Lists can be passed via command line pillar data as follows:

          salt '*' state.sls my_sls_file pillar='{"some_list": ["foo", "bar", "baz"]}'

       NOTE:
          If  a key is passed on the command line that already exists on the minion, the key that
          is passed in will overwrite the entire value of that key, rather than merging only  the
          specified value set via the command line.

       The example below will swap the value for vim with telnet in the previously defined
       pillar; notice the nested pillar dict:

          salt '*' state.apply edit.vim pillar='{"pkgs": {"vim": "telnet"}}'

       This will attempt to install telnet on your minions; feel free to uninstall the package
       afterwards, or to replace the telnet value with anything else.
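
       The overwrite (rather than merge) behavior described in the note above follows from
       how a plain dict update works, since pillar data is ultimately a dict. A minimal
       illustration with made-up data:

```python
# Pillar as defined on the master (example data).
pillar = {"pkgs": {"vim": "vim-enhanced", "apache": "httpd"}}

# A key passed on the command line replaces the entire value of that key.
cli_pillar = {"pkgs": {"vim": "telnet"}}
pillar.update(cli_pillar)

print(pillar["pkgs"])  # 'apache' is gone, not merged
```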

       NOTE:
          Be  aware  that  when  sending  sensitive  data via pillar on the command-line that the
          publication containing that data will be received  by  all  minions  and  will  not  be
          restricted  to  the  targeted  minions.  This  may represent a security concern in some
          cases.

   More On Pillar
       Pillar data is generated on the Salt master and securely distributed to minions.  Salt  is
       not restricted to the pillar sls files when defining the pillar but can retrieve data from
       external sources. This can be useful when information about an infrastructure is stored in
       a separate location.

       Reference information on pillar and the external pillar interface can be found in the Salt
       documentation:

       Pillar

   Minion Config in Pillar
       Minion configuration options can be set on pillars. Any option that you  want  to  modify,
       should  be  in  the first level of the pillars, in the same way you set the options in the
       config file. For example, to configure the MySQL root password to be used  by  MySQL  Salt
       execution module:

          mysql.pass: hardtoguesspassword

       This  is  very convenient when you need some dynamic configuration change that you want to
       be applied on the fly. For example, there is a chicken and the egg problem if you do this:

          mysql-admin-passwd:
            mysql_user.present:
              - name: root
              - password: somepasswd

          mydb:
            mysql_db.present

       The second state will fail because you changed the root password and the minion didn't
       notice it. Setting mysql.pass in the pillar helps sort out the issue. But always
       change the root admin password in the first place.

       This is very helpful for any module that needs credentials to apply state changes:  mysql,
       keystone, etc.

   Targeting Minions
       Targeting minions means specifying which minions should run a command or execute a
       state, by matching against hostnames, system information, defined groups, or even
       combinations thereof.

       For example, the command salt web1 apache.signal restart to restart the Apache httpd
       server specifies the machine web1 as the target, and the command will only be run on
       that one minion.

       Similarly  when  using  States, the following top file specifies that only the web1 minion
       should execute the contents of webserver.sls:

          base:
            'web1':
              - webserver

       The simple target specifications, glob, regex, and list will cover many use cases, and for
       some will cover all use cases, but more powerful options exist.

   Targeting with Grains
       The Grains interface was built into Salt to allow minions to be targeted by system
       properties, so that, for example, all minions running a particular operating system or
       kernel can be called to execute a function.

       Calling  via  a  grain  is done by passing the -G option to salt, specifying a grain and a
       glob expression to match the value of the grain. The syntax for the target  is  the  grain
       key followed by a glob expression: “os:Arch*”.

          salt -G 'os:Fedora' test.ping

       Will return True from all of the minions running Fedora.

       To discover what grains are available and what their values are, execute the
       grains.items salt function:

          salt '*' grains.items

       More info on using targeting with grains can be found here.

   Compound Targeting
       New in version 0.9.5.

       Multiple target interfaces can be used in conjunction to determine  the  command  targets.
       These  targets can then be combined using and or or statements.  This is well defined with
       an example:

          salt -C 'G@os:Debian and webser* or E@db.*' test.ping

       In this example, any minion whose id starts with webser and is running Debian, or any
       minion whose id starts with db, will be matched.

       The  type  of matcher defaults to glob, but can be specified with the corresponding letter
       followed by the @ symbol. In the above example a grain is  used  with  G@  as  well  as  a
       regular  expression with E@. The webser* target does not need to be prefaced with a target
       type specifier because it is a glob.

       More info on using compound targeting can be found here.

   Node Group Targeting
       New in version 0.9.5.

       For certain cases, it can be convenient to have a predefined group of minions on which  to
       execute  commands.  This  can be accomplished using what are called nodegroups. Nodegroups
       allow for predefined compound targets to be declared in the master configuration file,  as
       a sort of shorthand for having to type out complicated compound expressions.

          nodegroups:
            group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
            group2: 'G@os:Debian and foo.domain.com'
            group3: 'G@os:Debian and N@group1'

   Advanced Targeting Methods
       There are many ways to target individual minions or groups of minions in Salt:

   Matching the minion id
       Each  minion needs a unique identifier. By default when a minion starts for the first time
       it chooses its FQDN as that identifier. The minion id can be overridden via  the  minion’s
       id configuration setting.

       TIP:
          minion id and minion keys

          The  minion  id  is  used  to  generate the minion’s public/private keys and if it ever
          changes the master must then accept the new key as though the minion was a new host.

   Globbing
       The default matching that Salt utilizes is shell-style globbing around the minion id. This
       also works for states in the top file.

       NOTE:
          You  must  wrap salt calls that use globbing in single-quotes to prevent the shell from
          expanding the globs before Salt is invoked.

       Match all minions:

          salt '*' test.ping

       Match all minions in the example.net domain or any of the example domains:

          salt '*.example.net' test.ping
          salt '*.example.*' test.ping

       Match all the webN minions in the example.net domain (web1.example.net,
       web2.example.net ... webN.example.net):

          salt 'web?.example.net' test.ping

       Match the web1 through web5 minions:

          salt 'web[1-5]' test.ping

       Match the web1 and web3 minions:

          salt 'web[1,3]' test.ping

       Match the web-x, web-y, and web-z minions:

          salt 'web-[x-z]' test.ping
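
       The glob patterns above follow shell-style fnmatch rules, so their behavior can be
       checked locally with Python's standard fnmatch module (the minion ids below are
       invented):

```python
from fnmatch import fnmatch

minions = ["web1.example.net", "web2.example.net", "db1.example.net",
           "web-x", "web-y", "web7"]

# '?' matches exactly one character, '[...]' matches a character class.
assert fnmatch("web1.example.net", "web?.example.net")
assert not fnmatch("db1.example.net", "web?.example.net")

matched = [m for m in minions if fnmatch(m, "web-[x-z]")]
print(matched)  # ['web-x', 'web-y']
```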

       NOTE:
          For additional targeting methods please review the compound matchers documentation.

   Regular Expressions
       Minions  can  be  matched  using Perl-compatible regular expressions (which is globbing on
       steroids and a ton of caffeine).

       Match both web1-prod and web1-devel minions:

          salt -E 'web1-(prod|devel)' test.ping

       When using regular expressions in a State’s top file, you must specify the matcher as  the
       first  option.  The  following  example  executes  the  contents  of  webserver.sls on the
       above-mentioned minions.

          base:
            'web1-(prod|devel)':
              - match: pcre
              - webserver
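
       The same pattern can be exercised locally with Python's re module, which handles this
       subset of PCRE identically (a sketch with invented minion ids):

```python
import re

# The pattern from the -E example; matching is anchored at the start of the id.
pattern = re.compile(r"web1-(prod|devel)")

for minion in ["web1-prod", "web1-devel", "web2-prod"]:
    print(minion, bool(pattern.match(minion)))
```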

   Lists
       At the most basic level, you can specify a flat list of minion IDs:

          salt -L 'web1,web2,web3' test.ping

   Targeting using Grains
       Grain data can be used when targeting minions.

       For example, the following matches all CentOS minions:

          salt -G 'os:CentOS' test.ping

       Match all minions with 64-bit CPUs, and return number  of  CPU  cores  for  each  matching
       minion:

          salt -G 'cpuarch:x86_64' grains.item num_cpus

       Additionally,  globs  can  be  used  in  grain  matches,  and  grains that are nested in a
       dictionary can be matched by adding a  colon  for  each  level  that  is  traversed.   For
       example, the following will match hosts that have a grain called ec2_tags, which itself is
       a dictionary with a key named environment, which  has  a  value  that  contains  the  word
       production:

          salt -G 'ec2_tags:environment:*production*' test.ping

       IMPORTANT:
          See Is Targeting using Grain Data Secure? for important security information.

   Targeting using Pillar
       Pillar  data  can  be  used  when  targeting minions. This allows for ultimate control and
       flexibility when targeting minions.

       NOTE:
          To start using Pillar targeting, the Pillar data cache on the Salt Master must
          first be populated for each Minion, via one of the following commands: salt '*'
          saltutil.refresh_pillar or salt '*' saltutil.sync_all. The Pillar data cache is
          also populated during a highstate run. Whenever Pillar data changes, you must
          refresh the cache by running the above commands for this targeting method to work
          correctly.

       Example:

          salt -I 'somekey:specialvalue' test.ping

       Like with Grains, it is possible to use globbing as well as match nested values in Pillar,
       by  adding  colons  for  each level that is being traversed. The below example would match
       minions with a pillar named foo, which is a dict  containing  a  key  bar,  with  a  value
       beginning with baz:

          salt -I 'foo:bar:baz*' test.ping

   Subnet/IP Address Matching
       Minions can easily be matched based on IP address, or by subnet (using CIDR notation).

          salt -S 192.168.40.20 test.ping
          salt -S 2001:db8::/64 test.ping
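
       The subnet test that -S performs can be reproduced with Python's standard ipaddress
       module (a sketch using the example addresses):

```python
import ipaddress

def in_subnet(addr, cidr):
    """Return True if addr falls inside the CIDR network."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(cidr)

print(in_subnet("192.168.40.20", "192.168.40.0/24"))  # True
print(in_subnet("2001:db8::1", "2001:db8::/64"))      # True
print(in_subnet("10.1.2.3", "192.168.40.0/24"))       # False
```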

       Ipcidr matching can also be used in compound matches:

          salt -C 'S@10.0.0.0/24 and G@os:Debian' test.ping

       It is also possible to use ipcidr matching in both pillar and state top files:

          '172.16.0.0/12':
             - match: ipcidr
             - internal

   Compound matchers
       Compound  matchers  allow very granular minion targeting using any of Salt’s matchers. The
       default matcher is a glob match, just as with CLI and top file matching.  To  match  using
       anything  other  than a glob, prefix the match string with the appropriate letter from the
       table below, followed by an @ sign.

        ┌───────┬───────────────────┬──────────────────────────────────────────┬────────────────┐
        │Letter │ Match Type        │ Example                                  │ Alt Delimiter? │
        ├───────┼───────────────────┼──────────────────────────────────────────┼────────────────┤
        │G      │ Grains glob       │ G@os:Ubuntu                              │ Yes            │
        ├───────┼───────────────────┼──────────────────────────────────────────┼────────────────┤
        │E      │ PCRE Minion ID    │ E@web\d+\.(dev|qa|prod)\.loc             │ No             │
        ├───────┼───────────────────┼──────────────────────────────────────────┼────────────────┤
        │P      │ Grains PCRE       │ P@os:(RedHat|Fedora|CentOS)              │ Yes            │
        ├───────┼───────────────────┼──────────────────────────────────────────┼────────────────┤
        │L      │ List of minions   │ L@minion1.example.com,minion3.domain.com │ No             │
        │       │                   │ or bl*.domain.com                        │                │
        ├───────┼───────────────────┼──────────────────────────────────────────┼────────────────┤
        │I      │ Pillar glob       │ I@pdata:foobar                           │ Yes            │
        ├───────┼───────────────────┼──────────────────────────────────────────┼────────────────┤
        │J      │ Pillar PCRE       │ J@pdata:^(foo|bar)$                      │ Yes            │
        ├───────┼───────────────────┼──────────────────────────────────────────┼────────────────┤
        │S      │ Subnet/IP address │ S@192.168.1.0/24 or S@192.168.1.100      │ No             │
        ├───────┼───────────────────┼──────────────────────────────────────────┼────────────────┤
        │R      │ Range cluster     │ R@%foo.bar                               │ No             │
        └───────┴───────────────────┴──────────────────────────────────────────┴────────────────┘

       Matchers can be joined using boolean and, or, and not operators.

       For example, the following string matches all Debian minions with a hostname  that  begins
       with  webserv,  as  well  as  any  minions  that have a hostname which matches the regular
       expression web-dc1-srv.*:

          salt -C 'webserv* and G@os:Debian or E@web-dc1-srv.*' test.ping

       That same example expressed in a top file looks like the following:

          base:
            'webserv* and G@os:Debian or E@web-dc1-srv.*':
              - match: compound
              - webserver

       New in version 2015.8.0.

       Excluding a minion based on its ID is also possible:

          salt -C 'not web-dc1-srv' test.ping

       In versions prior to 2015.8.0, a leading not was not supported in compound matches.
       Instead, something like the following was required:

          salt -C '* and not G@kernel:Darwin' test.ping

       Excluding a minion based on its ID was also possible:

          salt -C '* and not web-dc1-srv' test.ping

   Precedence Matching
       Matchers can be grouped together with parentheses to explicitly declare precedence amongst
       groups.

          salt -C '( ms-1 or G@id:ms-3 ) and G@id:ms-3' test.ping

       NOTE:
          Be certain to note that spaces  are  required  between  the  parentheses  and  targets.
          Failing to obey this rule may result in incorrect targeting!

   Alternate Delimiters
       New in version 2015.8.0.

       Matchers  that  target  based on a key value pair use a colon (:) as a delimiter. Matchers
       with a Yes in the Alt Delimiters column  in  the  previous  table  support  specifying  an
       alternate delimiter character.

       This  is  done  by specifying an alternate delimiter character between the leading matcher
       character and the @ pattern separator character. This avoids incorrect  interpretation  of
       the pattern in the case that : is part of the grain or pillar data structure traversal.

          salt -C 'J|@foo|bar|^foo:bar$ or J!@gitrepo!https://github.com:example/project.git' test.ping

   Node groups
       Nodegroups  are  declared  using  a  compound  target  specification.  The compound target
       documentation can be found here.

       The nodegroups master config file parameter  is  used  to  define  nodegroups.  Here’s  an
       example nodegroup configuration within /etc/salt/master:

          nodegroups:
            group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
            group2: 'G@os:Debian and foo.domain.com'
            group3: 'G@os:Debian and N@group1'
            group4:
              - 'G@foo:bar'
              - 'or'
              - 'G@foo:baz'

       NOTE:
          The  L  within  group1 is matching a list of minions, while the G in group2 is matching
          specific grains. See the compound matchers documentation for more details.

          As of the 2017.7.0 release of Salt, group names can also be prepended with a dash. This
          brings the usage in line with many other areas of Salt. For example:

              nodegroups:
                - group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'

       New in version 2015.8.0.

       NOTE:
          Nodegroups  can  reference  other nodegroups as seen in group3.  Ensure that you do not
          have circular references.  Circular references  will  be  detected  and  cause  partial
          expansion with a logged error message.

       New in version 2015.8.0.

       Compound nodegroups can be either string values or lists of string values. When the
       nodegroup is a string value, it will be tokenized by splitting on whitespace. This may
       be a problem if whitespace is necessary as part of a pattern. When a nodegroup is a
       list of strings, tokenization happens for each list element as a whole.
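
       The whitespace tokenization described above is easy to see with str.split(): a
       pattern that itself contains whitespace is broken apart in the string form, but
       survives intact in the list form. An illustration only, not Salt's parser:

```python
# String form: tokenized by splitting on whitespace.
string_form = "G@os:Debian and N@group1"
print(string_form.split())  # ['G@os:Debian', 'and', 'N@group1']

# A (hypothetical) pattern containing a space is split into two bogus tokens...
broken = "G@role:web server and N@group1".split()

# ...whereas the list form keeps each element as one whole token.
list_form = ["G@role:web server", "and", "N@group1"]
```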

       To match a nodegroup on the CLI, use the -N command-line option:

          salt -N group1 test.ping

       NOTE:
          The N@ classifier cannot be used in compound matches within the CLI or top file, it  is
          only recognized in the nodegroups master config file parameter.

       To  match  a  nodegroup  in your top file, make sure to put - match: nodegroup on the line
       directly following the nodegroup name.

          base:
            group1:
              - match: nodegroup
              - webserver

       NOTE:
          When adding or modifying nodegroups to a master configuration file, the master must  be
          restarted for those changes to be fully recognized.

          A  limited amount of functionality, such as targeting with -N from the command-line may
          be available without a restart.

   Defining Nodegroups as Lists of Minion IDs
       A simple list of minion IDs would traditionally be defined like this:

          nodegroups:
            group1: L@host1,host2,host3

       They can now also be defined as a YAML list, like this:

          nodegroups:
            group1:
              - host1
              - host2
              - host3

       New in version 2016.11.0.

   Batch Size
       The -b (or --batch-size) option allows commands to be executed on only a specified  number
       of minions at a time. Both percentages and finite numbers are supported.

          salt '*' -b 10 test.ping

          salt -G 'os:RedHat' --batch-size 25% apache.signal restart

       This will only run test.ping on 10 of the targeted minions at a time, and then restart
       apache on 25% of the minions matching os:RedHat at a time, working through them all
       until the task is complete. This makes jobs like rolling web server restarts behind a
       load balancer, or doing maintenance on BSD firewalls using carp, much easier with
       Salt.

       The batch system maintains a window of running minions, so, if there are a  total  of  150
       minions  targeted  and  the batch size is 10, then the command is sent to 10 minions, when
       one minion returns then the command is sent to one additional minion, so that the  job  is
       constantly running on 10 minions.
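
       The sliding window described above can be sketched with a thread pool: at most
       batch-size minions are busy at once, and a new one is started as soon as one returns.
       This is a simulation with a stand-in function, not Salt's scheduler:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
import time

def run_on_minion(minion):
    """Stand-in for sending a command to one minion and waiting for its return."""
    time.sleep(random.uniform(0.0, 0.01))
    return minion

minions = [f"web{i}" for i in range(150)]
batch_size = 10
results = []

# A pool of batch_size workers keeps exactly that many minions busy:
# as soon as one returns, the next queued minion starts.
with ThreadPoolExecutor(max_workers=batch_size) as pool:
    futures = [pool.submit(run_on_minion, m) for m in minions]
    for fut in as_completed(futures):
        results.append(fut.result())

print(len(results))  # 150
```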

       New in version 2016.3.

       The  --batch-wait  argument  can  be  used  to specify a number of seconds to wait after a
       minion returns, before sending the command to a new minion.

   SECO Range
       SECO range is a cluster-based metadata store developed and maintained by Yahoo!

       The Range project is hosted here:

       https://github.com/ytoolshed/range

       Learn more about range here:

       https://github.com/ytoolshed/range/wiki/

   Prerequisites
       To utilize range support in Salt, a range server is required. Setting up a range server is
       outside the scope of this document. Apache modules are included in the range distribution.

       With  a  working  range  server, cluster files must be defined. These files are written in
       YAML and define hosts contained inside a cluster. Full documentation on writing YAML range
       files is here:

       https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec

       Additionally,  the  Python  seco range libraries must be installed on the salt master. One
       can verify that they have been installed correctly via the following command:

          python -c 'import seco.range'

       If no errors are returned, range is installed successfully on the salt master.

   Preparing Salt
       Range support must be enabled on the salt master by setting the hostname and port  of  the
       range server inside the master configuration file:

          range_server: my.range.server.com:80

       Following this, the master must be restarted for the change to have an effect.

   Targeting with Range
       Once a cluster has been defined, it can be targeted with a salt command by using the -R or
       --range flags.

       For example, given the following range YAML file being served from a range server:

          $ cat /etc/range/test.yaml
          CLUSTER: host1..100.test.com
          APPS:
            - frontend
            - backend
            - mysql

       One might target host1 through host100 in the test.com domain with Salt as follows:

          salt --range %test:CLUSTER test.ping

       The following salt command would target three hosts: frontend, backend, and mysql:

          salt --range %test:APPS test.ping

   The Salt Mine
       The Salt Mine is used to collect arbitrary data from Minions and store it on  the  Master.
       This data is then made available to all Minions via the salt.modules.mine module.

       Mine data is gathered on the Minion and sent back to the Master where only the most recent
       data is maintained (if long term data is  required  use  returners  or  the  external  job
       cache).

   Mine vs Grains
       Mine  data is designed to be much more up-to-date than grain data. Grains are refreshed on
       a very limited basis and are largely static data. Mines are designed to replace slow  peer
       publishing  calls  when  Minions need data from other Minions. Rather than having a Minion
       reach out to all the other Minions for a piece of data, the  Salt  Mine,  running  on  the
       Master, can collect it from all the Minions every Mine Interval, resulting in almost fresh
       data at any given time, with much less overhead.

   Mine Functions
       To enable the Salt Mine the mine_functions option needs to be applied to  a  Minion.  This
       option  can  be  applied  via the Minion’s configuration file, or the Minion’s Pillar. The
       mine_functions option dictates which functions are executed and allows arguments to be
       passed in. The list of available functions can be found in the salt.modules
       documentation. If no arguments are passed, an empty list must be added, as with the
       test.ping function in the example below:

          mine_functions:
            test.ping: []
            network.ip_addrs:
              interface: eth0
              cidr: '10.0.0.0/8'

       In the example above, salt.modules.network.ip_addrs has additional filters to help narrow
       down the results. IP addresses are returned only if they are on the eth0 interface and
       within the 10.0.0.0/8 IP range.

   Mine Functions Aliases
       Function aliases can be used to provide friendly names, to document usage intentions, or
       to allow multiple calls of the same function with different arguments. There is a
       different syntax for passing positional and key-value arguments. Mixing positional and
       key-value arguments is not supported.

       New in version 2014.7.0.

          mine_functions:
            network.ip_addrs: [eth0]
            networkplus.internal_ip_addrs: []
            internal_ip_addrs:
              mine_function: network.ip_addrs
              cidr: 192.168.0.0/16
            ip_list:
              - mine_function: grains.get
              - ip_interfaces
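
       The distinction above comes down to YAML types: a list supplies positional arguments, a
       mapping supplies key-value arguments, and the optional mine_function key names the real
       function behind an alias. The resolution logic can be sketched roughly as follows (an
       illustration only, not Salt's actual parser):

```python
def parse_mine_entry(alias, spec):
    """Resolve (function, args, kwargs) from a mine_functions entry.
    Illustrative sketch only -- not Salt's actual implementation."""
    args, kwargs = [], {}
    if isinstance(spec, list):
        for item in spec:
            if isinstance(item, dict):
                kwargs.update(item)   # e.g. {'mine_function': 'grains.get'}
            else:
                args.append(item)     # positional argument
    elif isinstance(spec, dict):
        kwargs.update(spec)           # key-value arguments
    # an alias entry names the real function via 'mine_function'
    func = kwargs.pop('mine_function', alias)
    return func, args, kwargs

# the entries from the example above resolve as:
print(parse_mine_entry('network.ip_addrs', ['eth0']))
print(parse_mine_entry('internal_ip_addrs',
                       {'mine_function': 'network.ip_addrs',
                        'cidr': '192.168.0.0/16'}))
print(parse_mine_entry('ip_list',
                       [{'mine_function': 'grains.get'}, 'ip_interfaces']))
```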

   Mine Interval
       The Salt Mine functions are executed when the Minion starts and at a given interval by the
       scheduler. The default interval is every 60 minutes and can be adjusted for the Minion via
       the mine_interval option:

          mine_interval: 60

   Mine in Salt-SSH
       As of the 2015.5.0 release of salt, salt-ssh supports mine.get.

       Because the Minions cannot provide their own mine_functions configuration, we retrieve the
       args for specified mine functions in one of three places, searched in the following order:

       1. Roster data

       2. Pillar

       3. Master config

       The  mine_functions  are  formatted  exactly  the same as in normal salt, just stored in a
       different location. Here is an example of a flat roster containing mine_functions:

          test:
            host: 104.237.131.248
            user: root
            mine_functions:
              cmd.run: ['echo "hello!"']
              network.ip_addrs:
                interface: eth0

       NOTE:
          Because of the differences in the architecture of salt-ssh, mine.get calls are somewhat
          inefficient.  Salt  must make a new salt-ssh call to each of the Minions in question to
          retrieve the requested data, much like a publish call. However, unlike publish, it must
          run  the requested function as a wrapper function, so we can retrieve the function args
          from the pillar of the Minion in question. This  results  in  a  non-trivial  delay  in
          retrieving the requested data.

   Minions Targeting with Mine
       The mine.get function supports various methods of Minion targeting to fetch Mine data
       from particular hosts, such as glob or regular expression matching on Minion id (name),
       grains,  pillars  and compound matches. See the salt.modules.mine module documentation for
       the reference.

       NOTE:
          Pillar data needs to be cached on the Master for pillar targeting to work with Mine.
          Read the note in the relevant section.

   Example
       One  way  to  use data from Salt Mine is in a State. The values can be retrieved via Jinja
       and used in the SLS file. The following example is a partial  HAProxy  configuration  file
       and  pulls  IP  addresses from all Minions with the “web” grain to add them to the pool of
       load balanced servers.

       /srv/pillar/top.sls:

          base:
            'G@roles:web':
              - web

       /srv/pillar/web.sls:

          mine_functions:
            network.ip_addrs: [eth0]

       Then trigger the minions to refresh their pillar data by running:

          salt '*' saltutil.refresh_pillar

       Verify that the results are showing up in the pillar  on  the  minions  by  executing  the
       following and checking for network.ip_addrs in the output:

          salt '*' pillar.items

       Which should show that the function is present on the minion, but not include the output:

          minion1.example.com:
              ----------
              mine_functions:
                  ----------
                  network.ip_addrs:
                      - eth0

       Mine data is typically only updated on the master every 60 minutes; this can be modified
       by setting:

       /etc/salt/minion.d/mine.conf:

          mine_interval: 5

       To force the mine data to update immediately run:

          salt '*' mine.update

       Setup the salt.states.file.managed state in /srv/salt/haproxy.sls:

          haproxy_config:
            file.managed:
              - name: /etc/haproxy/config
              - source: salt://haproxy_config
              - template: jinja

       Create the Jinja template in /srv/salt/haproxy_config:

          <...file contents snipped...>

          {% for server, addrs in salt['mine.get']('roles:web', 'network.ip_addrs', tgt_type='grain') | dictsort() %}
          server {{ server }} {{ addrs[0] }}:80 check
          {% endfor %}

          <...file contents snipped...>

       In the above example, server will be expanded to the minion_id.

       NOTE:
          The expr_form argument will be renamed to tgt_type in the 2017.7.0 release of Salt.

   Runners
       Salt runners are convenience applications executed with the salt-run command.

       Salt runners work similarly to Salt execution modules; however, they execute on the Salt
       master itself instead of on remote Salt minions.

       A Salt runner can be a simple client call or a complex application.

       SEE ALSO:
          The full list of runners

   Writing Salt Runners
       A  Salt runner is written in a similar manner to a Salt execution module.  Both are Python
       modules which contain functions and each public function is a runner which may be executed
       via the salt-run command.

       For  example,  if  a  Python  module named test.py is created in the runners directory and
       contains a function called foo, the test  runner  could  be  invoked  with  the  following
       command:

          # salt-run test.foo

       Runners have several options for controlling output.

       Any print statement in a runner is automatically also fired onto the master event bus.
       For example:

          def a_runner(outputter=None, display_progress=False):
              print('Hello world')
              ...

       The above would result in an event fired as follows:

          Event fired at Tue Jan 13 15:26:45 2015
          *************************
          Tag: salt/run/20150113152644070246/print
          Data:
          {'_stamp': '2015-01-13T15:26:45.078707',
           'data': 'Hello world',
           'outputter': 'pprint'}

       A runner may also send a progress event, which is displayed  to  the  user  during  runner
       execution  and  is  also passed across the event bus if the display_progress argument to a
       runner is set to True.

       A custom runner may send its own progress event  by  using  the  __jid_event_.fire_event()
       method as shown here:

          if display_progress:
              __jid_event__.fire_event({'message': 'A progress message'}, 'progress')

       The above would produce output on the console reading: A progress message as well as an
       event on the event bus similar to:

          Event fired at Tue Jan 13 15:21:20 2015
          *************************
          Tag: salt/run/20150113152118341421/progress
          Data:
          {'_stamp': '2015-01-13T15:21:20.390053',
           'message': "A progress message"}

       A runner could use the same approach to send an event with a customized tag onto the event
       bus  by  replacing  the  second argument (progress) with whatever tag is desired. However,
       this will not be shown on the command-line and will only be fired onto the event bus.
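
       The pattern above can be sketched in plain Python; here `fire` stands in for Salt's
       injected __jid_event__.fire_event, and the tag my/custom/tag is hypothetical:

```python
def a_runner(display_progress=False, fire=None):
    # `fire` stands in for __jid_event__.fire_event, which Salt injects at runtime
    fire = fire or (lambda data, tag: None)
    if display_progress:
        # shown on the CLI *and* fired on the event bus
        fire({'message': 'A progress message'}, 'progress')
    # a custom tag goes to the event bus only, with no CLI output
    fire({'found': 3}, 'my/custom/tag')
    return True

# collect the tags a run would fire
fired = []
a_runner(display_progress=True, fire=lambda data, tag: fired.append(tag))
print(fired)  # ['progress', 'my/custom/tag']
```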

   Synchronous vs. Asynchronous
       A runner may be fired asynchronously which will immediately return control. In this  case,
       no output will be displayed to the user if salt-run is being used from the command-line. If
       used programmatically, no results will be returned.  If results are desired, they must  be
       gathered  either by firing events on the bus from the runner and then watching for them or
       by some other means.

       NOTE:
          When running a runner in asynchronous mode, the --progress flag will not deliver output
          to the salt-run CLI. However, progress events will still be fired on the bus.

       In  synchronous  mode, which is the default, control will not be returned until the runner
       has finished executing.

       To add custom runners, put them in a directory and add it to  runner_dirs  in  the  master
       configuration file.

   Examples
       Examples of runners can be found in the Salt distribution:

       https://github.com/saltstack/salt/blob/develop/salt/runners

       A  simple  runner that returns a well-formatted list of the minions that are responding to
       Salt calls could look like this:

          # Import salt modules
          import salt.client

          def up():
              '''
              Print a list of all of the minions that are up
              '''
              client = salt.client.LocalClient(__opts__['conf_file'])
              minions = client.cmd('*', 'test.ping', timeout=1)
              for minion in sorted(minions):
                   print(minion)

   Salt Engines
       New in version 2015.8.0.

       Salt Engines are long-running, external system processes that leverage Salt.

       · Engines have access to Salt configuration, execution  modules,  and  runners  (__opts__,
         __salt__, and __runners__).

       · Engines  are  executed in a separate process that is monitored by Salt. If a Salt engine
         stops, it is restarted automatically.

       · Engines can run on the Salt master and on Salt minions.

       Salt engines enhance and replace the external processes functionality.

   Configuration
       Salt engines are configured under an engines top-level section in your Salt master or Salt
       minion configuration. Provide a list of engines and parameters under this section.

          engines:
            - logstash:
                host: log.my_network.com
                port: 5959
                proto: tcp

       Salt engines must be in the Salt path, or you can add the engines_dirs option in your Salt
       master configuration with a list of directories under which Salt  attempts  to  find  Salt
       engines. This option should be formatted as a list of directories to search, such as:

          engines_dirs:
            - /home/bob/engines

   Writing an Engine
       An example Salt engine, https://github.com/saltstack/salt/blob/develop/salt/engines/test.py,
       is available in the Salt source. To develop an engine, the only requirement is that your
       module implement the start() function.
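
       As a sketch, a minimal custom engine might look like the following; the myco/heartbeat
       tag and interval are hypothetical, and __salt__ is injected by Salt at runtime:

```python
import time

def build_heartbeat(now=None):
    # pure helper building the event payload (testable without Salt)
    return {'alive': True, 'time': time.time() if now is None else now}

def start(interval=30):
    '''Fire a heartbeat event every `interval` seconds.'''
    while True:
        # __salt__ is injected by the Salt loader; not defined in this file
        __salt__['event.send']('myco/heartbeat', build_heartbeat())  # noqa: F821
        time.sleep(interval)
```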

   Understanding YAML
       The default renderer for SLS files is the YAML renderer. YAML is a  markup  language  with
       many  powerful  features.  However,  Salt  uses a small subset of YAML that maps over very
       commonly used data structures, like lists and dictionaries. It is  the  job  of  the  YAML
       renderer  to  take the YAML data structure and compile it into a Python data structure for
       use by Salt.

       Though YAML syntax may seem daunting and terse at first, there are only three very  simple
       rules to remember when writing YAML for SLS files.

   Rule One: Indentation
       YAML  uses a fixed indentation scheme to represent relationships between data layers. Salt
       requires that the indentation for each level consists of exactly two spaces.  Do  not  use
       tabs.

   Rule Two: Colons
       Python dictionaries are, of course, simply key-value pairs. Users from other languages may
       recognize this data type as hashes or associative arrays.

       Dictionary keys are represented in YAML as strings terminated by a trailing colon. A
       value can be represented by a string following the colon, separated by a space:

          my_key: my_value

       In Python, the above maps to:

          {'my_key': 'my_value'}

       Alternatively, a value can be associated with a key through indentation.

          my_key:
            my_value

       NOTE:
          The  above  syntax  is  valid YAML but is uncommon in SLS files because most often, the
          value for a key is not singular but instead is a list of values.

       In Python, the above maps to:

          {'my_key': 'my_value'}

       Dictionaries can be nested:

          first_level_dict_key:
            second_level_dict_key: value_in_second_level_dict

       And in Python:

          {
              'first_level_dict_key': {
                  'second_level_dict_key': 'value_in_second_level_dict'
              }
          }

   Rule Three: Dashes
       To represent lists of items, a single dash followed by a space is used. Items are part of
       the same list when they share the same level of indentation.

          - list_value_one
          - list_value_two
          - list_value_three

       Lists can be the value of a key-value pair. This is quite common in Salt:

          my_dictionary:
            - list_value_one
            - list_value_two
            - list_value_three

       In Python, the above maps to:

          {'my_dictionary': ['list_value_one', 'list_value_two', 'list_value_three']}

   Learning More
       One  easy way to learn more about how YAML gets rendered into Python data structures is to
       use an online YAML parser to see the Python output.

       One    excellent    choice     for     experimenting     with     YAML     parsing     is:
       http://yaml-online-parser.appspot.com/

   Templating
       Jinja  statements  and  expressions are allowed by default in SLS files. See Understanding
       Jinja.

   Understanding Jinja
       Jinja is the default templating language in SLS files.

   Jinja in States
       Jinja is evaluated before YAML, which means it is evaluated before the States are run.

       The most basic usage of  Jinja  in  state  files  is  using  control  structures  to  wrap
       conditional or redundant state elements:

          {% if grains['os'] != 'FreeBSD' %}
          tcsh:
              pkg:
                  - installed
          {% endif %}

          motd:
            file.managed:
              {% if grains['os'] == 'FreeBSD' %}
              - name: /etc/motd
              {% elif grains['os'] == 'Debian' %}
              - name: /etc/motd.tail
              {% endif %}
              - source: salt://motd

       In  this example, the first if block will only be evaluated on minions that aren’t running
       FreeBSD, and the second block changes the file name based on the os grain.

       Writing if-else blocks can lead to very redundant state files however. In this case, using
       pillars, or using a previously defined variable might be easier:

          {% set motd = ['/etc/motd'] %}
          {% if grains['os'] == 'Debian' %}
            {% set motd = ['/etc/motd.tail', '/var/run/motd'] %}
          {% endif %}

          {% for motdfile in motd %}
          {{ motdfile }}:
            file.managed:
              - source: salt://motd
          {% endfor %}

       Using  a  variable  set  by  the template, the for loop will iterate over the list of MOTD
       files to update, adding a state block for each file.

       The filter_by function can also be used to set variables based on grains:

          {% set auditd = salt['grains.filter_by']({
          'RedHat': { 'package': 'audit' },
          'Debian': { 'package': 'auditd' },
          }) %}
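
       Conceptually, filter_by looks up an entry in the mapping keyed by a grain value (by
       default the os_family grain) and falls back to a default key when no entry matches. A
       much-simplified sketch of the lookup (the real grains.filter_by also supports merging
       and overrides):

```python
def filter_by(lookup, grain_value, default=None):
    # pick the dict entry matching the grain, else the entry named by `default`
    return lookup.get(grain_value, lookup.get(default))

auditd = filter_by({'RedHat': {'package': 'audit'},
                    'Debian': {'package': 'auditd'}}, 'Debian')
print(auditd['package'])  # auditd
```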

   Include and Import
       Includes and imports can be used to share common,  reusable  state  configuration  between
       state files and between files.

          {% from 'lib.sls' import test %}

       This  would  import  the test template variable or macro, not the test state element, from
       the file lib.sls. In the case that the included file performs checks  against  grains,  or
       something  else  that  requires  context,  passing  the  context into the included file is
       required:

          {% from 'lib.sls' import test with context %}

       Includes must use full paths, like so: spam/eggs.jinja

           {% include 'spam/foobar.jinja' %}

   Including Context During Include/Import
       By adding with context to the include/import directive, the current context can be  passed
       to an included/imported template.

          {% import 'openssl/vars.sls' as ssl with context %}

   Macros
       Macros   are   helpful   for  eliminating  redundant  code.  Macros  are  most  useful  as
       mini-templates to repeat blocks of strings with a few parameterized variables.   Be  aware
       that  stripping  whitespace  from  the template block, as well as contained blocks, may be
       necessary to emulate a variable return from the macro.

          # init.sls
          {% from 'lib.sls' import pythonpkg with context %}

          python-virtualenv:
            pkg.installed:
              - name: {{ pythonpkg('virtualenv') }}

          python-fabric:
            pkg.installed:
              - name: {{ pythonpkg('fabric') }}

          # lib.sls
          {% macro pythonpkg(pkg) -%}
            {%- if grains['os'] == 'FreeBSD' -%}
              py27-{{ pkg }}
            {%- elif grains['os'] == 'Debian' -%}
              python-{{ pkg }}
            {%- endif -%}
          {%- endmacro %}

       This would define a macro that would return a string of the full package  name,  depending
       on  the  packaging system’s naming convention. The whitespace of the macro was eliminated,
       so that the macro would return a string without line breaks, using whitespace control.

   Template Inheritance
       Template inheritance works fine from state files and files. The search path starts at  the
       root of the state tree or pillar.

   Errors
       Saltstack allows raising custom errors using the raise jinja function.

          {{ raise('Custom Error') }}

       When  rendering  the template containing the above statement, a TemplateError exception is
       raised, causing the rendering to fail with the following message:

          TemplateError: Custom Error

   Filters
       Saltstack extends builtin filters with these custom filters:

   strftime
       Converts any time related object into a time based  string.  It  requires  valid  strftime
       directives. An exhaustive list can be found here in the Python documentation.

          {% set curtime = None | strftime() %}

       Fuzzy dates require that the timelib Python module be installed.

          {{ "2002/12/25"|strftime("%y") }}
          {{ "1040814000"|strftime("%Y-%m-%d") }}
          {{ datetime|strftime("%u") }}
          {{ "tomorrow"|strftime }}

   sequence
       Ensure that parsed data is a sequence.

   yaml_encode
       Serializes  a  single  object  into a YAML scalar with any necessary handling for escaping
       special characters.  This  will  work  for  any  scalar  YAML  data  type:  ints,  floats,
       timestamps,  booleans,  strings,  unicode.   It  will  not  work for multi-objects such as
       sequences or maps.

          {%- set bar = 7 %}
          {%- set baz = none %}
          {%- set zip = true %}
          {%- set zap = 'The word of the day is "salty"' %}

          {%- load_yaml as foo %}
          bar: {{ bar|yaml_encode }}
          baz: {{ baz|yaml_encode }}
          zip: {{ zip|yaml_encode }}
          zap: {{ zap|yaml_encode }}
          {%- endload %}

       In the above case {{ bar }} and {{ foo.bar }} should be identical and {{  baz  }}  and  {{
       foo.baz }} should be identical.

   yaml_dquote
       Serializes  a  string  into  a properly-escaped YAML double-quoted string.  This is useful
       when the contents of a string are unknown and may contain quotes or unicode that needs  to
       be  preserved.   The  resulting  string  will  be  emitted with opening and closing double
       quotes.

          {%- set bar = '"The quick brown fox . . ."' %}
          {%- set baz = 'The word of the day is "salty".' %}

          {%- load_yaml as foo %}
          bar: {{ bar|yaml_dquote }}
          baz: {{ baz|yaml_dquote }}
          {%- endload %}

       In the above case {{ bar }} and {{ foo.bar }} should be identical and {{  baz  }}  and  {{
       foo.baz }} should be identical. If a variable's contents are not guaranteed to be a
       string, then it is better to use yaml_encode, which handles all YAML scalar types.

   yaml_squote
       Similar to the yaml_dquote filter but with single quotes.   Note  that  YAML  only  allows
       special  escapes  inside  double  quotes  so yaml_squote is not nearly as useful (viz. you
       likely want to use yaml_encode or yaml_dquote).

   to_bool
       New in version 2017.7.0.

       Returns the logical value of an element.

       Example:

          {{ 'yes' | to_bool }}
          {{ 'true' | to_bool }}
          {{ 1 | to_bool }}
          {{ 'no' | to_bool }}

       Will be rendered as:

          True
          True
          True
          False

   exactly_n_true
       New in version 2017.7.0.

       Tests that exactly N items in an iterable are “truthy” (neither None, False, nor 0).

       Example:

          {{ ['yes', 0, False, 'True'] | exactly_n_true(2) }}

       Returns:

          True
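
       In plain Python the check amounts to counting truthy elements (a sketch, not Salt's
       implementation):

```python
def exactly_n_true(iterable, n):
    # count the truthy items and compare against n
    return sum(1 for item in iterable if item) == n

print(exactly_n_true(['yes', 0, False, 'True'], 2))  # True
```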

   exactly_one_true
       New in version 2017.7.0.

       Tests that exactly one item in an iterable is “truthy” (neither None, False, nor 0).

       Example:

          {{ ['yes', False, 0, None] | exactly_one_true }}

       Returns:

          True

   quote
       New in version 2017.7.0.

       Wraps the given text in quotes.

   regex_search
       New in version 2017.7.0.

       Scans through a string looking for a location where the regular expression produces a
       match. Returns None if no match is found.

       Example:

          {{ 'abcdefabcdef' | regex_search('BC(.*)', ignorecase=True) }}

       Returns:

          ('defabcdef',)

   regex_match
       New in version 2017.7.0.

       If zero or more characters at the beginning of the string match the regular expression,
       returns the matched groups; otherwise returns None.

       Example:

          {{ 'abcdefabcdef' | regex_match('BC(.*)', ignorecase=True) }}

       Returns:

          None

   uuid
       New in version 2017.7.0.

       Return a UUID.

       Example:

          {{ 'random' | uuid }}

       Returns:

          3652b285-26ad-588e-a5dc-c2ee65edc804

   is_list
       New in version 2017.7.0.

       Return True if an object is a list.

       Example:

          {{ [1, 2, 3] | is_list }}

       Returns:

          True

   is_iter
       New in version 2017.7.0.

       Return True if an object is iterable.

       Example:

          {{ [1, 2, 3] | is_iter }}

       Returns:

          True

   min
       New in version 2017.7.0.

       Return the minimum value from a list.

       Example:

          {{ [1, 2, 3] | min }}

       Returns:

          1

   max
       New in version 2017.7.0.

       Returns the maximum value from a list.

       Example:

          {{ [1, 2, 3] | max }}

       Returns:

          3

   avg
       New in version 2017.7.0.

       Returns the average value of the elements of a list.

       Example:

          {{ [1, 2, 3] | avg }}

       Returns:

          2

   union
       New in version 2017.7.0.

       Return the union of two lists.

       Example:

          {{ [1, 2, 3] | union([2, 3, 4]) | join(', ') }}

       Returns:

          1, 2, 3, 4

   intersect
       New in version 2017.7.0.

       Return the intersection of two lists.

       Example:

          {{ [1, 2, 3] | intersect([2, 3, 4]) | join(', ') }}

       Returns:

          2, 3

   difference
       New in version 2017.7.0.

       Return the difference of two lists.

       Example:

          {{ [1, 2, 3] | difference([2, 3, 4]) | join(', ') }}

       Returns:

          1

   symmetric_difference
       New in version 2017.7.0.

       Return the symmetric difference of two lists.

       Example:

          {{ [1, 2, 3] | symmetric_difference([2, 3, 4]) | join(', ') }}

       Returns:

          1, 4

   is_sorted
       New in version 2017.7.0.

       Return True if an iterable object is already sorted.

       Example:

          {{ [1, 2, 3] | is_sorted }}

       Returns:

          True

   compare_lists
       New in version 2017.7.0.

       Compare two lists and return a dictionary with the changes.

       Example:

          {{ [1, 2, 3] | compare_lists([1, 2, 4]) }}

       Returns:

          {'new': 4, 'old': 3}

   compare_dicts
       New in version 2017.7.0.

       Compare two dictionaries and return a dictionary with the changes.

       Example:

          {{ {'a': 'b'} | compare_dicts({'a': 'c'}) }}

       Returns:

          {'a': {'new': 'c', 'old': 'b'}}

   is_hex
       New in version 2017.7.0.

       Return True if the value is hexadecimal.

       Example:

          {{ '0xabcd' | is_hex }}
          {{ 'xyzt' | is_hex }}

       Returns:

          True
          False

   contains_whitespace
       New in version 2017.7.0.

       Return True if a text contains whitespace.

       Example:

          {{ 'abcd' | contains_whitespace }}
          {{ 'ab cd' | contains_whitespace }}

       Returns:

          False
          True

   substring_in_list
       New in version 2017.7.0.

       Return True if a substring is found in a list of string values.

       Example:

          {{ 'abcd' | substring_in_list(['this', 'is', 'an abcd example']) }}

       Returns:

          True

   check_whitelist_blacklist
       New in version 2017.7.0.

       Check a whitelist and/or blacklist to see if the value matches it.

       This filter can be used with  either  a  whitelist  or  a  blacklist  individually,  or  a
       whitelist and a blacklist can be passed simultaneously.

       If whitelist is used alone, value membership is checked against the whitelist only. If the
       value is found, the function returns True.  Otherwise, it returns False.

       If blacklist is used alone, value membership is checked against the blacklist only. If the
       value is found, the function returns False.  Otherwise, it returns True.

       If  both  a whitelist and a blacklist are provided, value membership in the blacklist will
       be examined first. If the value is not found in  the  blacklist,  then  the  whitelist  is
       checked. If the value isn’t found in the whitelist, the function returns False.

       Whitelist Example:

          {{ 5 | check_whitelist_blacklist(whitelist=[5, 6, 7]) }}

       Returns:

          True

       Blacklist Example:

          {{ 5 | check_whitelist_blacklist(blacklist=[5, 6, 7]) }}

       Returns:

          False
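
       The decision order described above can be sketched as plain Python (membership matching
       only; the real filter also supports glob and regex patterns):

```python
def check_whitelist_blacklist(value, whitelist=None, blacklist=None):
    # the blacklist is examined first; a blacklisted value always fails
    if blacklist and value in blacklist:
        return False
    # with a whitelist present, the value must appear in it
    if whitelist:
        return value in whitelist
    # blacklist-only (or no lists): not blacklisted means allowed
    return True

print(check_whitelist_blacklist(5, whitelist=[5, 6, 7]))   # True
print(check_whitelist_blacklist(5, blacklist=[5, 6, 7]))   # False
```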

   date_format
       New in version 2017.7.0.

       Converts a Unix timestamp into a human-readable string.

       Example:

          {{ 1457456400 | date_format }}
          {{ 1457456400 | date_format('%d.%m.%Y %H:%M') }}

       Returns:

          2016-03-08
          08.03.2016 17:00

   to_num
       New in version 2017.7.0.

       New in version 2018.3.0: Renamed from str_to_num to to_num.

       Converts a string to its numerical value.

       Example:

          {{ '5' | to_num }}

       Returns:

          5

   to_bytes
       New in version 2017.7.0.

       Converts string-type object to bytes.

       Example:

          {{ 'wall of text' | to_bytes }}

       NOTE:
          This  option may have adverse effects when using the default renderer, yaml_jinja. This
          is due to the fact that YAML requires proper handling in regard to special  characters.
          Please  see  the section on YAML ASCII support in the YAML Idiosyncracies documentation
          for more information.

   json_encode_list
       New in version 2017.7.0.

       New in version 2018.3.0: Renamed  from  json_decode_list  to  json_encode_list.  When  you
       encode  something  you  get  bytes,  and  when  you decode, you get your locale’s encoding
       (usually  a  unicode  type).  This  filter  was  incorrectly-named  when  it  was   added.
       json_decode_list will be supported until the Neon release.

       Deprecated  since  version  2018.3.3,Fluorine:  The  tojson  filter accomplishes what this
       filter was designed to do, making this filter redundant.

       Recursively encodes all string elements of the list to bytes.

       Example:

          {{ [1, 2, 3] | json_encode_list }}

       Returns:

          [1, 2, 3]

   json_encode_dict
       New in version 2017.7.0.

       New in version 2018.3.0: Renamed  from  json_decode_dict  to  json_encode_dict.  When  you
       encode  something  you  get  bytes,  and  when  you decode, you get your locale’s encoding
       (usually  a  unicode  type).  This  filter  was  incorrectly-named  when  it  was   added.
       json_decode_dict will be supported until the Neon release.

       Deprecated  since  version  2018.3.3,Fluorine:  The  tojson  filter accomplishes what this
       filter was designed to do, making this filter redundant.

       Recursively encodes all string items in the dictionary to bytes.

       Example:

       Assuming that pillar['foo'] contains {u'a': u'\u0414'}, and your locale is en_US.UTF-8:

          {{ pillar['foo'] | json_encode_dict }}

       Returns:

          {'a': '\xd0\x94'}

   tojson
       New in version 2018.3.3,Fluorine.

       Dumps a data structure to JSON.

       This filter was added to provide this functionality to hosts which have  a  Jinja  release
       older  than  version  2.9 installed. If Jinja 2.9 or newer is installed, then the upstream
       version of the filter will be used. See the upstream docs for more information.

   random_hash
       New in version 2017.7.0.

       New in version 2018.3.0: Renamed from rand_str to random_hash to more accurately  describe
       what the filter does. rand_str will be supported until the Neon release.

       Generates  a  random number between 1 and the number passed to the filter, and then hashes
       it. The default hash type is the one specified by the minion’s  hash_type  config  option,
       but an alternate hash type can be passed to the filter as an argument.

       Example:

          {% set num_range = 99999999 %}
          {{ num_range | random_hash }}
          {{ num_range | random_hash('sha512') }}

       Returns:

          43ec517d68b6edd3015b3edc9a11367b
          d94a45acd81f8e3107d237dbc0d5d195f6a52a0d188bc0284c0763ece1eac9f9496fb6a531a296074c87b3540398dace1222b42e150e67c9301383fde3d66ae5
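
       The filter’s behavior can be sketched in Python (an illustrative stdlib equivalent,
       not Salt’s actual implementation; md5 stands in for the minion’s hash_type default):

```python
import hashlib
import random

def random_hash(num_range, hash_type="md5"):
    # Pick a random integer in [1, num_range] and hash its string
    # representation, mirroring the filter described above.
    num = random.randint(1, num_range)
    return hashlib.new(hash_type, str(num).encode()).hexdigest()

print(random_hash(99999999))            # 32 hex characters (md5)
print(random_hash(99999999, "sha512"))  # 128 hex characters
```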

   md5
       New in version 2017.7.0.

       Return the md5 digest of a string.

       Example:

          {{ 'random' | md5 }}

       Returns:

          7ddf32e17a6ac5ce04a8ecbf782ca509

   sha256
       New in version 2017.7.0.

       Return the sha256 digest of a string.

       Example:

          {{ 'random' | sha256 }}

       Returns:

          a441b15fe9a3cf56661190a0b93b9dec7d04127288cc87250967cf3b52894d11

   sha512
       New in version 2017.7.0.

       Return the sha512 digest of a string.

       Example:

          {{ 'random' | sha512 }}

       Returns:

          811a90e1c8e86c7b4c0eef5b2c0bf0ec1b19c4b1b5a242e6455be93787cb473cb7bc9b0fdeb960d00d5c6881c2094dd63c5c900ce9057255e2a4e271fc25fef1

   base64_encode
       New in version 2017.7.0.

       Encode a string as base64.

       Example:

          {{ 'random' | base64_encode }}

       Returns:

          cmFuZG9t

   base64_decode
       New in version 2017.7.0.

       Decode a base64-encoded string.

          {{ 'Z2V0IHNhbHRlZA==' | base64_decode }}

       Returns:

          get salted

   hmac
       New in version 2017.7.0.

       Verify an HMAC signature for a string against a shared secret. Returns a boolean
       value.

       Example:

          {{ 'get salted' | hmac('shared secret', 'eBWf9bstXg+NiP5AOwppB5HMvZiYMPzEM9W5YMm/AmQ=') }}

       Returns:

          True
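
       Assuming the filter computes an HMAC-SHA256 digest and compares its base64 encoding to
       the supplied signature, the verification can be sketched with the stdlib hmac module
       (an illustrative equivalent, not Salt’s code):

```python
import base64
import hashlib
import hmac

def verify_hmac(message, shared_secret, signature):
    # Compute HMAC-SHA256 over the message with the shared secret and
    # compare the base64-encoded digest to the supplied signature.
    digest = hmac.new(shared_secret.encode(), message.encode(),
                      hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, signature)

# Self-consistent example: sign the message, then verify it.
sig = base64.b64encode(
    hmac.new(b"shared secret", b"get salted", hashlib.sha256).digest()
).decode()
print(verify_hmac("get salted", "shared secret", sig))  # True
```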

   http_query
       New in version 2017.7.0.

       Return the HTTP reply object from a URL.

       Example:

          {{ 'http://jsonplaceholder.typicode.com/posts/1' | http_query }}

       Returns:

          {
            'body': '{
              "userId": 1,
              "id": 1,
              "title": "sunt aut facere repellat provident occaecati excepturi option reprehenderit",
              "body": "quia et suscipit\\nsuscipit recusandae consequuntur expedita et cum\\nreprehenderit molestiae ut ut quas totam\\nnostrum rerum est autem sunt rem eveniet architecto"
            }'
          }

   traverse
       New in version 2018.3.3.

       Traverse a dict or list using a colon-delimited target string. The target 'foo:bar:0'
       will return data['foo']['bar'][0] if this value exists, and will otherwise return the
       provided default value.

       Example:

          {{ {'a1': {'b1': {'c1': 'foo'}}, 'a2': 'bar'} | traverse('a1:b1', 'default') }}

       Returns:

          {'c1': 'foo'}

          {{ {'a1': {'b1': {'c1': 'foo'}}, 'a2': 'bar'} | traverse('a2:b2', 'default') }}

       Returns:

          'default'
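
       The traversal logic can be sketched in Python (an illustrative equivalent, not Salt’s
       actual code; numeric path segments fall back to list indexing):

```python
def traverse(data, target, default=None, delimiter=":"):
    # Walk a nested dict/list along a colon-delimited key path,
    # returning the default when any step is missing.
    for key in target.split(delimiter):
        try:
            data = data[key]
        except (KeyError, TypeError):
            try:
                data = data[int(key)]
            except (ValueError, IndexError, KeyError, TypeError):
                return default
    return data

d = {"a1": {"b1": {"c1": "foo"}}, "a2": "bar"}
print(traverse(d, "a1:b1", "default"))  # {'c1': 'foo'}
print(traverse(d, "a2:b2", "default"))  # default
```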

   Networking Filters
       The following networking-related filters are supported:

   is_ip
       New in version 2017.7.0.

       Return whether a string is a valid IP address.

          {{ '192.168.0.1' | is_ip }}

       Additionally accepts the following options:

       · global

       · link-local

       · loopback

       · multicast

       · private

       · public

       · reserved

       · site-local

       · unspecified

       Example - test if a string is a valid loopback IP address.

          {{ '192.168.0.1' | is_ip(options='loopback') }}

   is_ipv4
       New in version 2017.7.0.

       Return whether a string is a valid IPv4 address. Supports the same options as is_ip.

          {{ '192.168.0.1' | is_ipv4 }}

   is_ipv6
       New in version 2017.7.0.

       Return whether a string is a valid IPv6 address. Supports the same options as is_ip.

          {{ 'fe80::' | is_ipv6 }}

   ipaddr
       New in version 2017.7.0.

       From a list, returns only valid IP entries. Supports the same options as is_ip. The
       list may also contain IP interfaces and networks.

       Example:

          {{ ['192.168.0.1', 'foo', 'bar', 'fe80::'] | ipaddr }}

       Returns:

          ['192.168.0.1', 'fe80::']

   ipv4
       New in version 2017.7.0.

       From a list, returns only valid IPv4 entries. Supports the same options as is_ip. The
       list may also contain IP interfaces and networks.

       Example:

          {{ ['192.168.0.1', 'foo', 'bar', 'fe80::'] | ipv4 }}

       Returns:

          ['192.168.0.1']

   ipv6
       New in version 2017.7.0.

       From a list, returns only valid IPv6 entries. Supports the same options as is_ip. The
       list may also contain IP interfaces and networks.

       Example:

          {{ ['192.168.0.1', 'foo', 'bar', 'fe80::'] | ipv6 }}

       Returns:

          ['fe80::']

   network_hosts
       New in version 2017.7.0.

       Return the list of hosts within a network. This utility works for both IPv4 and IPv6.

       NOTE:
          When  running this command with a large IPv6 network, the command will take a long time
          to gather all of the hosts.

       Example:

          {{ '192.168.0.1/30' | network_hosts }}

       Returns:

          ['192.168.0.1', '192.168.0.2']

   network_size
       New in version 2017.7.0.

       Return the size of the network. This utility works for both IPv4 and IPv6.

       Example:

          {{ '192.168.0.1/8' | network_size }}

       Returns:

          16777216
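
       Both network_hosts and network_size map naturally onto Python’s stdlib ipaddress
       module. A sketch reproducing the two example results above (an illustrative
       equivalent, not Salt’s implementation):

```python
import ipaddress

# Hosts in a /30: the network and broadcast addresses are excluded.
net = ipaddress.ip_network("192.168.0.1/30", strict=False)
print([str(ip) for ip in net.hosts()])  # ['192.168.0.1', '192.168.0.2']

# Size of a /8 network: 2 ** (32 - 8) addresses.
big = ipaddress.ip_network("192.168.0.1/8", strict=False)
print(big.num_addresses)  # 16777216
```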

   gen_mac
       New in version 2017.7.0.

       Generates a MAC address with the defined OUI prefix.

       Common prefixes:

       · 00:16:3E – Xen

       · 00:18:51 – OpenVZ

       · 00:50:56 – VMware (manually generated)

       · 52:54:00 – QEMU/KVM

       · AC:DE:48 – PRIVATE

       Example:

          {{ '00:50' | gen_mac }}

       Returns:

          00:50:71:52:1C:CC
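
       The idea can be sketched in Python (illustrative, not Salt’s implementation): pad the
       OUI prefix with random octets until the address has six:

```python
import random

def gen_mac(prefix="AC:DE:48"):
    # A MAC address has six octets; keep the supplied OUI prefix and
    # fill the remaining octets with random values.
    octets = prefix.split(":")
    while len(octets) < 6:
        octets.append("{:02X}".format(random.randint(0, 255)))
    return ":".join(octets)

print(gen_mac("00:50"))  # e.g. 00:50:3A:7F:12:C4
```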

   mac_str_to_bytes
       New in version 2017.7.0.

       Converts a string representing a valid MAC address to bytes.

       Example:

          {{ '00:11:22:33:44:55' | mac_str_to_bytes }}

       NOTE:
           This filter may have adverse effects when used with the default renderer,
           yaml_jinja. This is due to the fact that YAML requires proper handling of special
           characters. Please see the section on YAML ASCII support in the YAML Idiosyncrasies
           documentation for more information.

   dns_check
       New in version 2017.7.0.

       Return the IP resolved by DNS. This does not exit on failure; instead, it raises an
       exception. Obeys the system preference for IPv4/IPv6 address resolution.

       Example:

          {{ 'www.google.com' | dns_check(port=443) }}

       Returns:

          '172.217.3.196'

   File filters
   is_text_file
       New in version 2017.7.0.

       Return whether a file is text.

       Uses heuristics to guess whether the given file is text or binary by reading a single
       block of bytes from the file. If more than 30% of the characters in the block are
       non-text, or if there are NUL ('\x00') bytes in the block, the file is assumed to be
       binary.

       Example:

          {{ '/etc/salt/master' | is_text_file }}

       Returns:

          True
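
       The heuristic can be sketched in Python (an approximation of the rule above, not
       Salt’s exact code):

```python
def looks_like_text(block):
    # A block of bytes is considered binary if it contains NUL bytes
    # or if more than 30% of its characters are non-text.
    if b"\x00" in block:
        return False
    text_chars = bytes(range(32, 127)) + b"\n\r\t\f\b"
    nontext = block.translate(None, text_chars)
    return len(nontext) / max(len(block), 1) <= 0.30

print(looks_like_text(b"interface: eth0\n"))  # True
print(looks_like_text(b"\x00\x01\x02\xff"))   # False
```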

   is_binary_file
       New in version 2017.7.0.

       Return whether a file is binary.

       Returns True if the file is binary, False if it is not, and None if the file is not
       available.

       Example:

          {{ '/etc/salt/master' | is_binary_file }}

       Returns:

          False

   is_empty_file
       New in version 2017.7.0.

       Return whether a file is empty.

       Example:

          {{ '/etc/salt/master' | is_empty_file }}

       Returns:

          False

   file_hashsum
       New in version 2017.7.0.

       Return the hashsum of a file.

       Example:

          {{ '/etc/salt/master' | file_hashsum }}

       Returns:

          02d4ef135514934759634f10079653252c7ad594ea97bd385480c532bca0fdda

   list_files
       New in version 2017.7.0.

       Return a recursive list of files under a specific path.

       Example:

          {{ '/etc/salt/' | list_files | join('\n') }}

       Returns:

          /etc/salt/master
          /etc/salt/proxy
          /etc/salt/minion
          /etc/salt/pillar/top.sls
          /etc/salt/pillar/device1.sls

   path_join
       New in version 2017.7.0.

       Joins absolute paths.

       Example:

          {{ '/etc/salt/' | path_join('pillar', 'device1.sls') }}

       Returns:

          /etc/salt/pillar/device1.sls

   which
       New in version 2017.7.0.

       Python clone of /usr/bin/which.

       Example:

          {{ 'salt-master' | which }}

       Returns:

          /usr/local/salt/virtualenv/bin/salt-master

   Tests
       SaltStack extends the builtin Jinja tests with these custom tests:

   equalto
       Tests the equality between two values.

       Can be used in an if statement directly:

          {% if 1 is equalto(1) %}
              < statements >
          {% endif %}

       The if clause evaluates to True.

       or with the selectattr filter:

          {{ [{'value': 1}, {'value': 2} , {'value': 3}] | selectattr('value', 'equalto', 3) | list }}

       Returns:

          [{'value': 3}]

   match
       Tests that a string matches the regex passed as an argument.

       Can be used in an if statement directly:

          {% if 'a' is match('[a-b]') %}
              < statements >
          {% endif %}

       The if clause evaluates to True.

       or with the selectattr filter:

          {{ [{'value': 'a'}, {'value': 'b'}, {'value': 'c'}] | selectattr('value', 'match', '[b-e]') | list }}

       Returns:

          [{'value': 'b'}, {'value': 'c'}]

       The test supports the additional optional arguments ignorecase and multiline.

   Escape filters
   regex_escape
       New in version 2017.7.0.

       Allows escaping of strings so they can be interpreted literally by another function.

       Example:

          regex_escape = {{ 'https://example.com?foo=bar%20baz' | regex_escape }}

       will be rendered as:

          regex_escape = https\:\/\/example\.com\?foo\=bar\%20baz
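
       The stdlib offers comparable behavior via re.escape. Note that re.escape in Python
       3.7+ escapes only regex-special characters, so its output may differ slightly from the
       rendering shown above:

```python
import re

# Escape a string so it matches itself literally when used as a regex.
url = "https://example.com?foo=bar%20baz"
pattern = re.escape(url)
print(bool(re.fullmatch(pattern, url)))  # True
```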

   Set Theory Filters
   unique
       New in version 2017.7.0.

       Performs set math using Jinja filters.

       Example:

          unique = {{ ['foo', 'foo', 'bar'] | unique }}

       will be rendered as:

          unique = ['foo', 'bar']

   Jinja in Files
       Jinja can be used in the same way in managed files:

          # redis.sls
          /etc/redis/redis.conf:
              file.managed:
                  - source: salt://redis.conf
                  - template: jinja
                  - context:
                      bind: 127.0.0.1

          # lib.sls
          {% set port = 6379 %}

          # redis.conf
          {% from 'lib.sls' import port with context %}
          port {{ port }}
          bind {{ bind }}

       In this example, configuration was pulled both from the file context and from an
       external template file.

       NOTE:
           Macros and variables can be shared across templates. Their names must not start
           with one or more underscores, and they should be managed by one of the following
           tags: macro, set, load_yaml, load_json, import_yaml and import_json.

   Escaping Jinja
       Occasionally, it may be necessary to escape Jinja syntax. There are two ways to do this
       in Jinja. One is to escape individual variables or strings; the other is to escape
       entire blocks.

       To escape a string commonly used in Jinja syntax such as {{, you  can  use  the  following
       syntax:

          {{ '{{' }}

       For  larger  blocks  that  contain  Jinja syntax that needs to be escaped, you can use raw
       blocks:

          {% raw %}
              some text that contains jinja characters that need to be escaped
          {% endraw %}

       See the Escaping section of Jinja’s documentation to learn more.

       A real-world example of needing to use raw tags to escape a larger block of code is when
       using  file.managed with the contents_pillar option to manage files that contain something
       like consul-template, which shares a syntax subset with Jinja. Raw  blocks  are  necessary
       here  because  the  Jinja  in the pillar would be rendered before the file.managed is ever
       called, so the Jinja syntax must be escaped:

          {% raw %}
          - contents_pillar: |
              job "example-job" {
                <snipped>
                task "example" {
                    driver = "docker"

                    config {
                        image = "docker-registry.service.consul:5000/example-job:{{key "nomad/jobs/example-job/version"}}"
                <snipped>
          {% endraw %}

   Calling Salt Functions
       The Jinja renderer provides a shorthand lookup syntax for the salt dictionary of
       execution functions.

       New in version 2014.7.0.

          # The following two function calls are equivalent.
          {{ salt['cmd.run']('whoami') }}
          {{ salt.cmd.run('whoami') }}

   Debugging
       The  show_full_context function can be used to output all variables present in the current
       Jinja context.

       New in version 2014.7.0.

          Context is: {{ show_full_context()|yaml(False) }}

   Logs
       New in version 2017.7.0.

       In Salt, a complex Jinja template can be debugged using the logs. For example, making
       the call:

          {%- do salt.log.error('testing jinja logging') -%}

       Will insert the following message in the minion logs:

          2017-02-01 01:24:40,728 [salt.module.logmod][ERROR   ][3779] testing jinja logging

   Python Methods
       A powerful feature of Jinja that is only hinted at in the official Jinja documentation
       is that you can use the native Python methods of the variable type. Here is the Python
       documentation for string methods.

          {% set hostname,domain = grains.id.partition('.')[::2] %}{{ hostname }}

          {% set strings = grains.id.split('-') %}{{ strings[0] }}
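
       For instance, splitting a minion ID into hostname and domain (the values below are
       hypothetical examples):

```python
# grains.id is the minion ID string; "web01.example.com" is a
# hypothetical value used for illustration.
minion_id = "web01.example.com"

# partition('.') gives (head, sep, tail); [::2] drops the separator.
hostname, domain = minion_id.partition(".")[::2]
print(hostname)  # web01
print(domain)    # example.com

# split('-') breaks an ID like "db-primary-eu" into its components.
parts = "db-primary-eu".split("-")
print(parts[0])  # db
```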

   Custom Execution Modules
       Custom  execution  modules  can be used to supplement or replace complex Jinja. Many tasks
       that require complex looping and logic are trivial when using Python in a  Salt  execution
       module. Salt execution modules are easy to write and distribute to Salt minions.

       Functions  in  custom  execution  modules  are  available  in  the  Salt  execution module
       dictionary just like the built-in execution modules:

          {{ salt['my_custom_module.my_custom_function']() }}

       · How to Convert Jinja Logic to an Execution Module

       · Writing Execution Modules

   Custom Jinja filters
       Given that all execution modules are available in  the  Jinja  template,  one  can  easily
       define  a  custom  module  as  in  the  previous  paragraph  and use it as a Jinja filter.
       However, please note that it will not be accessible through the pipe.

       For example, instead of:

          {{ my_variable | my_jinja_filter }}

       The user will need to define the my_jinja_filter function in an extension module, say
       my_filters, and use it as:

          {{ salt.my_filters.my_jinja_filter(my_variable) }}

       The greatest benefit is that you are able to access thousands of existing functions, e.g.:

       · get the DNS AAAA records for a specific address using the dnsutil:

            {{ salt.dnsutil.AAAA('www.google.com') }}

       · retrieve a specific field value from a Redis hash:

            {{ salt.redis.hget('foo_hash', 'bar_field') }}

       · get the routes to 0.0.0.0/0 using the NAPALM route:

            {{ salt.route.show('0.0.0.0/0') }}

   Tutorials Index
   Autoaccept minions from Grains
       New in version 2018.3.0.

       To automatically accept minions based on certain characteristics (e.g. the uuid), you
       can specify certain grain values on the Salt master. Minions with matching grains will
       have their keys automatically accepted.

       1. Configure the autosign_grains_dir in the master config file:

          autosign_grains_dir: /etc/salt/autosign_grains

       2. Configure the grain values to be accepted

       Place a file named after the grain in the autosign_grains_dir and write the values
       that should be accepted automatically inside that file. For example, to automatically
       accept minions based on their uuid, create a file named /etc/salt/autosign_grains/uuid:

          8f7d68e2-30c5-40c6-b84a-df7e978a03ee
          1d3c5473-1fbc-479e-b0c7-877705a0730f

       The master is now set up to accept minions with either of the two specified uuids.
       Multiple values must always be written on separate lines. Lines starting with a # are
       ignored.

       3. Configure  the  minion  to  send the specific grains to the master in the minion config
          file:

          autosign_grains:
            - uuid

       Now you should be able to start salt-minion and run salt-call  state.apply  or  any  other
       salt commands that require master authentication.

   Salt as a Cloud Controller
       In Salt 0.14.0, an advanced cloud control system was introduced, allowing private
       cloud VMs to be managed directly with Salt. This system is generally referred to as
       Salt Virt.

       The Salt Virt system already exists and is installed within Salt itself; this means
       that besides setting up Salt, no additional Salt code needs to be deployed.

       NOTE:
          The libvirt python module and the certtool binary are required.

       The main goal of Salt Virt is to facilitate a very fast and simple cloud that can
       scale and is fully featured. Salt Virt comes with the ability to set up and manage
       complex virtual machine networking, powerful image and disk management, as well as
       virtual machine migration with and without shared storage.

       This means that Salt Virt can be used to create a cloud from a blade center and a SAN, but
       can  also  create a cloud out of a swarm of Linux Desktops without a single shared storage
       system. Salt Virt can make clouds from truly commodity hardware, but can also stand up the
       power of specialized hardware as well.

   Setting up Hypervisors
       The  first  step to set up the hypervisors involves getting the correct software installed
       and setting up the hypervisor network interfaces.

   Installing Hypervisor Software
       Salt Virt is made to be hypervisor agnostic  but  currently  the  only  fully  implemented
       hypervisor is KVM via libvirt.

       The  required  software for a hypervisor is libvirt and kvm. For advanced features install
       libguestfs or qemu-nbd.

       NOTE:
           Libguestfs and qemu-nbd allow virtual machine images to be mounted before startup
           and pre-seeded with configurations and a Salt minion.

       This SLS will set up the needed software for a hypervisor and run the routines to set
       up the libvirt PKI keys.

       NOTE:
           The package names and setup used are Red Hat specific; different package names
           will be required on other platforms.

          libvirt:
            pkg.installed: []
            file.managed:
              - name: /etc/sysconfig/libvirtd
              - contents: 'LIBVIRTD_ARGS="--listen"'
              - require:
                - pkg: libvirt
            virt.keys:
              - require:
                - pkg: libvirt
            service.running:
              - name: libvirtd
              - require:
                - pkg: libvirt
                - network: br0
                - libvirt: libvirt
              - watch:
                - file: libvirt

          libvirt-python:
            pkg.installed: []

          libguestfs:
            pkg.installed:
              - pkgs:
                - libguestfs
                - libguestfs-tools

   Hypervisor Network Setup
       The hypervisors will need to be running a network bridge to serve up network devices
       for virtual machines. This formula will set up a standard bridge on a hypervisor,
       connecting the bridge to eth0:

          eth0:
            network.managed:
              - enabled: True
              - type: eth
              - bridge: br0

          br0:
            network.managed:
              - enabled: True
              - type: bridge
              - proto: dhcp
              - require:
                - network: eth0

   Virtual Machine Network Setup
       Salt Virt comes with a system to model the network interfaces used by the deployed virtual
       machines; by default a single interface is created for the deployed virtual machine and is
       bridged  to  br0.  To  get going with the default networking setup, ensure that the bridge
       interface named br0 exists on the hypervisor and is bridged to an active network device.

       NOTE:
          To use more advanced networking in Salt Virt, read the Salt Virt Networking document:

          Salt Virt Networking

   Libvirt State
       One of the challenges of deploying a libvirt based cloud is the  distribution  of  libvirt
       certificates.  These  certificates  allow for virtual machine migration. Salt comes with a
       system used to auto deploy these certificates.  Salt manages the signing authority key and
       generates  keys  for  libvirt  clients  on  the  master,  signs  them with the certificate
       authority and uses pillar to distribute them. This  is  managed  via  the  libvirt  state.
       Simply  execute  this formula on the minion to ensure that the certificate is in place and
       up to date:

       NOTE:
          The above formula includes the calls needed to set up libvirt keys.

          libvirt_keys:
            virt.keys

   Getting Virtual Machine Images Ready
       Salt Virt requires that virtual machine images be provided, as these are not generated
       on the fly. Generating these virtual machine images differs greatly based on the
       underlying platform.

       Virtual machine images  can  be  manually  created  using  KVM  and  running  through  the
       installer,  but  this  process  is  not  recommended  since it is very manual and prone to
       errors.

       Virtual Machine generation applications are available for many platforms:

       kiwi: (openSUSE, SLES, RHEL, CentOS)
              https://suse.github.io/kiwi/

       vm-builder:
              https://wiki.debian.org/VMBuilder

              SEE ALSO:
                 vmbuilder-formula

       Once virtual machine images are available, the easiest way to make them available to  Salt
       Virt  is  to  place them in the Salt file server. Just copy an image into /srv/salt and it
       can now be used by Salt Virt.

       For purposes of this demo, the file name centos.img will be used.

   Existing Virtual Machine Images
       Many existing Linux distributions distribute virtual machine images which can be used with
       Salt Virt. Please be advised that NONE OF THESE IMAGES ARE SUPPORTED BY SALTSTACK.

   CentOS
       These  images  have  been  prepared for OpenNebula but should work without issue with Salt
       Virt, only the raw qcow image file is needed: http://wiki.centos.org/Cloud/OpenNebula

   Fedora Linux
       Images for Fedora Linux can be found here: http://fedoraproject.org/en/get-fedora#clouds

   openSUSE
       http://download.opensuse.org/repositories/openSUSE:/Leap:/42.1:/Images/images

       (look for JeOS-for-kvm-and-xen variant)

   SUSE
       https://www.suse.com/products/server/jeos

   Ubuntu Linux
       Images for Ubuntu Linux can be found here: http://cloud-images.ubuntu.com/

   Using Salt Virt
       With hypervisors set up and virtual machine images ready, Salt  can  start  issuing  cloud
       commands using the virt runner.

       Start by running a Salt Virt hypervisor info command:

          salt-run virt.host_info

       This will query the running hypervisor(s) for stats and display useful information
       such as the number of CPUs and the amount of memory.

       You can also list all VMs and their current states on all hypervisor nodes:

          salt-run virt.list

       Now that hypervisors are available, a virtual machine can be provisioned. The
       virt.init routine will create a new virtual machine:

          salt-run virt.init centos1 2 512 salt://centos.img

       The  Salt Virt runner will now automatically select a hypervisor to deploy the new virtual
       machine on. Using salt:// assumes that the CentOS virtual machine image is located in  the
       root of the file-server on the master.  When images are cloned (i.e. copied locally after
       retrieval from the file server) the destination directory  on  the  hypervisor  minion  is
       determined by the virt.images config option; by default this is /srv/salt/salt-images/.

       When  a  VM  is  initialized  using  virt.init the image is copied to the hypervisor using
       cp.cache_file and will be mounted and seeded  with  a  minion.  Seeding  includes  setting
       pre-authenticated  keys on the new machine. A minion will only be installed if one can not
       be found on the image using the default arguments to seed.apply.

       NOTE:
          The biggest bottleneck in starting VMs is when the Salt Minion needs to  be  installed.
          Making sure that the source VM images already have Salt installed will GREATLY speed up
          virtual machine deployment.

       You can also deploy an image on a particular minion by directly calling the virt execution
       module with an absolute image path. This can be quite handy for testing:

          salt 'hypervisor*' virt.init centos1 2 512 image=/var/lib/libvirt/images/centos.img

       Now that the new VM has been prepared, it can be seen via the virt.query command:

          salt-run virt.query

       This  command  will  return  data  about  all  of  the  hypervisors and respective virtual
       machines.

       Now that the new VM is booted, it should have contacted the Salt Master; a test.ping
       will reveal whether the new VM is running.

   QEMU copy on write support
       For  fast image cloning you can use the qcow disk image format.  Pass the enable_qcow flag
       and a .qcow2 image path to virt.init:

          salt 'hypervisor*' virt.init centos1 2 512 image=/var/lib/libvirt/images/centos.qcow2 enable_qcow=True start=False

       NOTE:
          Beware that attempting to boot a qcow image too quickly after cloning can result  in  a
          race  condition  where  libvirt  may  try  to boot the machine before image seeding has
          completed. For that reason it is recommended to also pass start=False to virt.init.

          Also know that you must not modify the original base image without first making a  copy
          and then rebasing all overlay images onto it.  See the qemu-img rebase usage docs.

   Migrating Virtual Machines
       Salt  Virt  comes  with  full support for virtual machine migration, and using the libvirt
       state in the above formula makes migration possible.

       A few things need to be available to support migration. Many operating systems turn
       on firewalls when originally set up; the firewall needs to be opened up to allow
       libvirt and kvm to communicate with each other and execute migration routines. On Red
       Hat based hypervisors in particular, port 16514 needs to be opened on the hypervisors:

          iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 16514 -j ACCEPT

       NOTE:
           More in-depth information regarding distribution-specific firewall settings can
           be found in:

          Opening the Firewall up for Salt

       Salt also needs the virt.tunnel option to be turned on.   This  flag  tells  Salt  to  run
       migrations  securely via the libvirt TLS tunnel and to use port 16514. Without virt.tunnel
       libvirt tries to bind to random ports when running migrations.

       To turn on virt.tunnel, simply add it to the master config file:

          virt.tunnel: True

       Once the master config has been updated, restart the master and send out  a  call  to  the
       minions to refresh the pillar to pick up on the change:

          salt \* saltutil.refresh_modules

       Now,  migration  routines  can  be  run! To migrate a VM, simply run the Salt Virt migrate
       routine:

          salt-run virt.migrate centos <new hypervisor>

   VNC Consoles
       Although not enabled by default, Salt Virt can also  set  up  VNC  consoles  allowing  for
       remote  visual  consoles  to be opened up. When creating a new VM using virt.init pass the
       enable_vnc=True parameter to have a console configured for the new VM.

       The information from a virt.query routine will display the VNC console port for each
       VM:

          centos
            CPU: 2
            Memory: 524288
            State: running
            Graphics: vnc - hyper6:5900
            Disk - vda:
              Size: 2.0G
              File: /srv/salt-images/ubuntu2/system.qcow2
              File Format: qcow2
            Nic - ac:de:48:98:08:77:
              Source: br0
              Type: bridge

       The line Graphics: vnc - hyper6:5900 holds the key. First, the named port, in this
       case 5900, will need to be available in the hypervisor’s firewall. Once the port is
       open, the console can be easily opened via vncviewer:

          vncviewer hyper6:5900

       By default there is no VNC security set up on these ports, so it is suggested to keep
       them firewalled and to mandate that SSH tunnels be used to access these VNC
       interfaces. Keep in mind that any activity on a VNC interface can be viewed by any
       other user accessing that same interface, and any other user logging in can also
       operate alongside the logged-in user on the virtual machine.

   Conclusion
       Now  with  Salt  Virt running, new hypervisors can be seamlessly added just by running the
       above states on new bare metal machines, and these machines will be instantly available to
       Salt Virt.

   Running Salt States and Commands in Docker Containers
       The  2016.11.0  release  of  Salt  introduces  the ability to execute Salt States and Salt
       remote execution commands directly inside of Docker containers.

       This addition makes it possible not only to deploy fresh containers using Salt
       States, but also to audit and modify running containers using Salt, without running a
       Salt Minion inside the container. Some of the applications include security audits of
       running containers as well as gathering operating data from containers.

       This new feature is simple and straightforward, and can be used via a running Salt
       Minion, the Salt Call command, or via Salt SSH. For this tutorial we will use the
       salt-call command, but like all Salt commands, these calls are directly translatable
       to salt and salt-ssh.

   Step 1 - Install Docker
       Since setting up Docker is well covered in the Docker documentation, we will not
       describe it here. Please see the Docker Installation Documentation for installing and
       setting up Docker: https://docs.docker.com/engine/installation/

       The Docker integration also requires that the docker-py library is  installed.   This  can
       easily be done using