SALT(7)                                        Salt                                       SALT(7)

NAME
   salt - Salt Documentation
INTRODUCTION TO SALT
We're not just talking about NaCl.

The 30 second summary
Salt is:

· a configuration management system, capable of maintaining remote nodes in defined states (for example, ensuring that specific packages are installed and specific services are running)

· a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria

It was developed in order to bring the best solutions found in the world of remote execution together and make them better, faster, and more malleable. Salt accomplishes this through its ability to handle large loads of information, and not just dozens but hundreds and even thousands of individual servers quickly through a simple and manageable interface.

Simplicity
Providing versatility between massive scale deployments and smaller systems may seem daunting, but Salt is very simple to set up and maintain, regardless of the size of the project. The architecture of Salt is designed to work with any number of servers, from a handful of local network systems to international deployments across different data centers. The topology is a simple server/client model with the needed functionality built into a single set of daemons. While the default configuration will work with little to no modification, Salt can be fine tuned to meet specific needs.

Parallel execution
The core functions of Salt:

· enable commands to remote systems to be called in parallel rather than serially
· use a secure and encrypted protocol
· use the smallest and fastest network payloads possible
· provide a simple programming interface

Salt also introduces more granular controls to the realm of remote execution, allowing systems to be targeted not just by hostname, but also by system properties.

Building on proven technology
Salt takes advantage of a number of technologies and techniques. The networking layer is built with the excellent ZeroMQ networking library, so the Salt daemon includes a viable and transparent AMQ broker. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication; authentication and encryption are integral to Salt. Salt takes advantage of communication via msgpack, enabling fast and light network traffic.

Python client interface
In order to allow for simple expansion, Salt execution routines can be written as plain Python modules. The data collected from Salt executions can be sent back to the master server, or to any arbitrary program. Salt can be called from a simple Python API, or from the command line, so that Salt can be used to execute one-off commands as well as operate as an integral part of a larger application.

Fast, flexible, scalable
The result is a system that can execute commands at high speed on target server groups ranging from one to very many servers. Salt is very fast, easy to set up, amazingly malleable and provides a single remote execution architecture that can manage the diverse requirements of any number of servers. The Salt infrastructure brings together the best of the remote execution world, amplifies its capabilities and expands its range, resulting in a system that is as versatile as it is practical, suitable for any network.

Open
Salt is developed under the Apache 2.0 license, and can be used for open and proprietary projects. Please submit your expansions back to the Salt project so that we can all benefit together as Salt grows.
Please feel free to sprinkle Salt around your systems and let the deliciousness come forth. Salt Community Join the Salt! There are many ways to participate in and communicate with the Salt community. Salt has an active IRC channel and a mailing list. Mailing List Join the salt-users mailing list. It is the best place to ask questions about Salt and see whats going on with Salt development! The Salt mailing list is hosted by Google Groups. It is open to new members. https://groups.google.com/forum/#!forum/salt-users There is also a low-traffic list used to announce new releases called salt-announce https://groups.google.com/forum/#!forum/salt-announce IRC The #salt IRC channel is hosted on the popular Freenode network. You can use the Freenode webchat client right from your browser. Logs of the IRC channel activity are being collected courtesy of Moritz Lenz. If you wish to discuss the development of Salt itself join us in #salt-devel. Follow on GitHub The Salt code is developed via GitHub. Follow Salt for constant updates on what is happen‐ ing in Salt development: https://github.com/saltstack/salt Blogs SaltStack Inc. keeps a blog with recent news and advancements: http://www.saltstack.com/blog/ Thomas Hatch also shares news and thoughts on Salt and related projects in his personal blog The Red45: http://red45.wordpress.com/ Example Salt States The official salt-states repository is: https://github.com/saltstack/salt-states A few examples of salt states from the community: · https://github.com/blast-hardcheese/blast-salt-states · https://github.com/kevingranade/kevingranade-salt-state · https://github.com/mattmcclean/salt-openstack/tree/master/salt · https://github.com/rentalita/ubuntu-setup/ · https://github.com/brutasse/states · https://github.com/bclermont/states · https://github.com/pcrews/salt-data Follow on ohloh https://www.ohloh.net/p/salt Other community links · Salt Stack Inc. · Subreddit · Google+ · YouTube · Facebook · Twitter · Wikipedia page Hack the Source If you want to get involved with the development of source code or the documentation efforts, please review the hacking section!
INSTALLATION

SEE ALSO: Installing Salt for development and contributing to the project.

Quick Install
On most distributions, you can set up a Salt Minion with the Salt Bootstrap.

Platform-specific Installation Instructions
These guides go into detail on how to install Salt on a given platform.

Arch Linux

Installation
Salt (stable) is currently available via the Arch Linux Official repositories. There are currently -git packages available in the Arch User Repository (AUR) as well.

Stable Release
Install Salt stable releases from the Arch Linux Official repositories as follows:

   pacman -S salt-zmq

To install Salt stable releases using the RAET protocol, use the following:

   pacman -S salt-raet

NOTE: transports
Unlike other Linux distributions, Arch Linux's package manager pacman defaults to RAET as the Salt transport. If you want to use ZeroMQ instead, make sure to enter the associated number for the salt-zmq repository when prompted.

Tracking develop
To install the bleeding-edge version of Salt (may include bugs!), use the -git package. Install the -git package as follows:

   wget https://aur.archlinux.org/packages/sa/salt-git/salt-git.tar.gz
   tar xf salt-git.tar.gz
   cd salt-git/
   makepkg -is

NOTE: yaourt
If a tool such as Yaourt is used, the dependencies will be gathered and built automatically. The command to install salt using the yaourt tool is:

   yaourt salt-git

Post-installation tasks

systemd
Activate the Salt Master and/or Minion via systemctl as follows:

   systemctl enable salt-master.service
   systemctl enable salt-minion.service

Start the Master
Once you've completed all of these steps you're ready to start your Salt Master. You should be able to start your Salt Master now using the command seen here:

   systemctl start salt-master

Now go to the Configuring Salt page.

Debian GNU/Linux / Raspbian
The Debian GNU/Linux distribution and some derivatives such as Raspbian already include Salt packages in their repositories. However, the current stable release, codenamed "Jessie", contains an outdated Salt release. It is recommended to use the SaltStack repository for Debian as described below. Installation from the official Debian and Raspbian repositories is described here.

Installation from the SaltStack Repository
2015.5 and later packages for Debian 8 ("Jessie") are available in the SaltStack repository.

NOTE: The SaltStack repository contains only packages suitable for i386 (32-bit Intel-compatible CPUs) and amd64 (64-bit) architectures. While Salt packages are built for all Debian ports (they have the all suffix in their package names), some of the dependencies are available only for amd64 systems.

IMPORTANT: The repository folder structure changed in the 2015.8.3 release, though the previous repository structure that was documented in 2015.8.1 can continue to be used.

To install using the SaltStack repository:

1. Run the following command to import the SaltStack repository key:

   wget -O - https://repo.saltstack.com/apt/debian/8/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -

2. Add the following line to /etc/apt/sources.list:

   deb http://repo.saltstack.com/apt/debian/8/amd64/latest jessie main

3. Run sudo apt-get update.

4. Now go to the packages installation section.

Installation from the Community Repository
The SaltStack community maintains a Debian repository at debian.saltstack.com. Packages for Debian Old Stable, Stable, and Unstable (Wheezy, Jessie, and Sid) for Salt 0.16 and later are published in this repository.
NOTE: Packages in this repository are community built, and it can take a little while until the latest SaltStack release is available in this repository.

Jessie (Stable)
For Jessie, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:

   deb http://debian.saltstack.com/debian jessie-saltstack main

Wheezy (Old Stable)
For Wheezy, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:

   deb http://debian.saltstack.com/debian wheezy-saltstack main

Squeeze (Old Old Stable)
For Squeeze, you will need to enable the Debian backports repository as well as the debian.saltstack.com repository. To do so, add the following to /etc/apt/sources.list or a file in /etc/apt/sources.list.d:

   deb http://debian.saltstack.com/debian squeeze-saltstack main
   deb http://backports.debian.org/debian-backports squeeze-backports main

Stretch (Testing)
For Stretch, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:

   deb http://debian.saltstack.com/debian stretch-saltstack main

Sid (Unstable)
For Sid, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:

   deb http://debian.saltstack.com/debian unstable main

Import the repository key
You will need to import the key used for signing.

   wget -q -O- "http://debian.saltstack.com/debian-salt-team-joehealy.gpg.key" | apt-key add -

NOTE: You can optionally verify the key integrity with sha512sum using the public key signature shown here, e.g.:

   echo "b702969447140d5553e31e9701be13ca11cc0a7ed5fe2b30acb8491567560ee62f834772b5095d735dfcecb2384a5c1a20045f52861c417f50b68dd5ff4660e6 debian-salt-team-joehealy.gpg.key" | sha512sum -c

Update the package database

   apt-get update

Installation from the Debian / Raspbian Official Repository
The Stretch (Testing) and Sid (Unstable) distributions already contain mostly up-to-date Salt packages built by the Debian Salt Team. You can install Salt components directly from Debian.

On Jessie (Stable) there is an option to install the Salt minion from Stretch, with the python-tornado dependency from the jessie-backports repository. To install a fresh release of the Salt minion on Jessie:

1. Add the jessie-backports and stretch repositories:

Debian:

   echo 'deb http://httpredir.debian.org/debian jessie-backports main' >> /etc/apt/sources.list
   echo 'deb http://httpredir.debian.org/debian stretch main' >> /etc/apt/sources.list

Raspbian:

   echo 'deb http://archive.raspbian.org/raspbian/ stretch main' >> /etc/apt/sources.list

2. Make Jessie the default release:

   echo 'APT::Default-Release "jessie";' > /etc/apt/apt.conf.d/10apt

3. Install the Salt dependencies:

Debian:

   apt-get update
   apt-get install python-zmq python-tornado/jessie-backports salt-common/stretch

Raspbian:

   apt-get update
   apt-get install python-zmq python-tornado/stretch salt-common/stretch

4. Install the Salt minion package from Stretch:

   apt-get install salt-minion/stretch

Install Packages
Install the Salt master, minion, or other packages from the repository with the apt-get command. These examples each install one of the Salt components, but more than one package name may be given at a time:

   · apt-get install salt-api
   · apt-get install salt-cloud
   · apt-get install salt-master
   · apt-get install salt-minion
   · apt-get install salt-ssh
   · apt-get install salt-syndic

Post-installation tasks
Now go to the Configuring Salt page.

Fedora
Beginning with version 0.9.4, Salt has been available in the primary Fedora repositories and EPEL.
It is installable using yum. Fedora will have more up to date versions of Salt than other members of the Red Hat family, which makes it a great place to help improve Salt! WARNING: Fedora 19 comes with systemd 204. Systemd has known bugs fixed in later revi‐ sions that prevent the salt-master from starting reliably or opening the network connec‐ tions that it needs to. It's not likely that a salt-master will start or run reliably on any distribution that uses systemd version 204 or earlier. Running salt-minions should be OK. Installation Salt can be installed using yum and is available in the standard Fedora repositories. Stable Release Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions. yum install salt-master yum install salt-minion Installing from updates-testing When a new Salt release is packaged, it is first admitted into the updates-testing reposi‐ tory, before being moved to the stable repo. To install from updates-testing, use the enablerepo argument for yum: yum --enablerepo=updates-testing install salt-master yum --enablerepo=updates-testing install salt-minion Installation Using pip Since Salt is on PyPI, it can be installed using pip, though most users prefer to install using a package manager. Installing from pip has a few additional requirements: · Install the group 'Development Tools', dnf groupinstall 'Development Tools' · Install the 'zeromq-devel' package if it fails on linking against that afterwards as well. A pip install does not make the init scripts or the /etc/salt directory, and you will need to provide your own systemd service unit. Installation from pip: pip install salt WARNING: If installing from pip (or from source using setup.py install), be advised that the yum-utils package is needed for Salt to manage packages. Also, if the Python dependen‐ cies are not already installed, then you will need additional libraries/tools installed to build some of them. More information on this can be found here. Post-installation tasks Master To have the Master start automatically at boot time: systemctl enable salt-master.service To start the Master: systemctl start salt-master.service Minion To have the Minion start automatically at boot time: systemctl enable salt-minion.service To start the Minion: systemctl start salt-minion.service Now go to the Configuring Salt page. FreeBSD Salt was added to the FreeBSD ports tree Dec 26th, 2011 by Christer Edwards <‐ @gmail.com>. It has been tested on FreeBSD 7.4, 8.2, 9.0, 9.1, 10.0 and later releases. Installation Salt is available in binary package form from both the FreeBSD pkgng repository or directly from SaltStack. The instructions below outline installation via both methods: FreeBSD repo The FreeBSD pkgng repository is preconfigured on systems 10.x and above. No configuration is needed to pull from these repositories. pkg install py27-salt These packages are usually available within a few days of upstream release. SaltStack repo SaltStack also hosts internal binary builds of the Salt package, available from https://repo.saltstack.com/freebsd/. 
To make use of this repository, add the following file to your system: /usr/local/etc/pkg/repos/saltstack.conf: saltstack: { url: "https://repo.saltstack.com/freebsd/${ABI}/", mirror_type: "http", enabled: yes priority: 10 } You should now be able to install Salt from this new repository: pkg install py27-salt These packages are usually available earlier than upstream FreeBSD. Also available are release candidates and development releases. Use these pre-release packages with caution. Post-installation tasks Master Copy the sample configuration file: cp /usr/local/etc/salt/master.sample /usr/local/etc/salt/master rc.conf Activate the Salt Master in /etc/rc.conf: sysrc salt_master_enable="YES" Start the Master Start the Salt Master as follows: service salt_master start Minion Copy the sample configuration file: cp /usr/local/etc/salt/minion.sample /usr/local/etc/salt/minion rc.conf Activate the Salt Minion in /etc/rc.conf: sysrc salt_minion_enable="YES" Start the Minion Start the Salt Minion as follows: service salt_minion start Now go to the Configuring Salt page. Gentoo Salt can be easily installed on Gentoo via Portage: emerge app-admin/salt Post-installation tasks Now go to the Configuring Salt page. OpenBSD Salt was added to the OpenBSD ports tree on Aug 10th 2013. It has been tested on OpenBSD 5.5 onwards. Salt is dependent on the following additional ports. These will be installed as dependen‐ cies of the sysutils/salt port: devel/py-futures devel/py-progressbar net/py-msgpack net/py-zmq security/py-crypto security/py-M2Crypto textproc/py-MarkupSafe textproc/py-yaml www/py-jinja2 www/py-requests www/py-tornado Installation To install Salt from the OpenBSD pkg repo, use the command: pkg_add salt Post-installation tasks Master To have the Master start automatically at boot time: rcctl enable salt_master To start the Master: rcctl start salt_master Minion To have the Minion start automatically at boot time: rcctl enable salt_minion To start the Minion: rcctl start salt_minion Now go to the Configuring Salt page. OS X Dependency Installation It should be noted that Homebrew explicitly discourages the use of sudo: Homebrew is designed to work without using sudo. You can decide to use it but we strongly recommend not to do so. If you have used sudo and run into a bug then it is likely to be the cause. Please don’t file a bug report unless you can reproduce it after reinstalling Homebrew from scratch without using sudo So when using Homebrew, if you want support from the Homebrew community, install this way: brew install saltstack When using MacPorts, install this way: sudo port install salt When only using the OS X system's pip, install this way: sudo pip install salt Salt-Master Customizations To run salt-master on OS X, the root user maxfiles limit must be increased: NOTE: On OS X 10.10 (Yosemite) and higher, maxfiles should not be adjusted. The default lim‐ its are sufficient in all but the most extreme scenarios. Overriding these values with the setting below will cause system instability! sudo launchctl limit maxfiles 4096 8192 And sudo add this configuration option to the /etc/salt/master file: max_open_files: 8192 Now the salt-master should run without errors: sudo salt-master --log-level=all Post-installation tasks Now go to the Configuring Salt page. RHEL / CentOS / Scientific Linux / Amazon Linux / Oracle Linux Salt should work properly with all mainstream derivatives of Red Hat Enterprise Linux, including CentOS, Scientific Linux, Oracle Linux, and Amazon Linux. 
Report any bugs or issues on the issue tracker. Installation from the SaltStack Repository 2015.5 and later packages for RHEL 5, 6, and 7 are available in the SaltStack repository. IMPORTANT: The repository folder structure changed in the 2015.8.3 release, though the previous repository structure that was documented in 2015.8.1 can continue to be used. To install using the SaltStack repository: 1. Run one of the following commands based on your version to import the SaltStack reposi‐ tory key: Version 7: rpm --import https://repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pu ↲ b Version 6: rpm --import https://repo.saltstack.com/yum/redhat/6/x86_64/latest/SALTSTACK-GPG-KEY.pu ↲ b Version 5: wget https://repo.saltstack.com/yum/redhat/5/x86_64/latest/SALTSTACK-EL5-GPG-KEY.pub rpm --import SALTSTACK-EL5-GPG-KEY.pub rm -f SALTSTACK-EL5-GPG-KEY.pub 2. Save the following file to /etc/yum.repos.d/saltstack.repo: Version 7 and 6: [saltstack-repo] name=SaltStack repo for RHEL/CentOS $releasever baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest enabled=1 gpgcheck=1 gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GPG ↲ -KEY.pub Version 5: [saltstack-repo] name=SaltStack repo for RHEL/CentOS $releasever baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest enabled=1 gpgcheck=1 gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-EL5 ↲ -GPG-KEY.pub 3. Run sudo yum clean expire-cache. 4. Run sudo yum update. 5. Install the salt-minion, salt-master, or other Salt components: · yum install salt-master · yum install salt-minion · yum install salt-ssh · yum install salt-syndic · yum install salt-cloud NOTE: As of 2015.8.0, EPEL repository is no longer required for installing on RHEL systems. SaltStack repository provides all needed dependencies. WARNING: If installing on Red Hat Enterprise Linux 7 with disabled (not subscribed on) 'RHEL Server Releases' or 'RHEL Server Optional Channel' repositories, append CentOS 7 GPG key URL to SaltStack yum repository configuration to install required base packages: [saltstack-repo] name=SaltStack repo for Red Hat Enterprise Linux $releasever baseurl=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest enabled=1 gpgcheck=1 gpgkey=https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/SALTSTACK-GP ↲ G-KEY.pub https://repo.saltstack.com/yum/redhat/$releasever/$basearch/latest/base/RPM-GPG ↲ -KEY-CentOS-7 NOTE: systemd and python-systemd are required by Salt, but are not installed by the Red Hat 7 @base installation or by the Salt installation. These dependencies might need to be installed before Salt. Installation from the Community Repository Beginning with version 0.9.4, Salt has been available in EPEL. For RHEL/CentOS 5, Fedora COPR is a single community repository that provides Salt packages due to the removal from EPEL5. NOTE: Packages in these repositories are built by community, and it can take a little while until the latest stable SaltStack release become available. RHEL/CentOS 6 and 7, Scientific Linux, etc. WARNING: Salt 2015.8 is currently not available in EPEL due to unsatisfied dependencies: python-crypto 2.6.1 or higher, and python-tornado version 4.2.1 or higher. These pack‐ ages are not currently available in EPEL for Red Hat Enterprise Linux 6 and 7. 
Enabling EPEL If the EPEL repository is not installed on your system, you can download the RPM for RHEL/CentOS 6 or for RHEL/CentOS 7 and install it using the following command: rpm -Uvh epel-release-X-Y.rpm Replace epel-release-X-Y.rpm with the appropriate filename. Installing Stable Release Salt is packaged separately for the minion and the master. It is necessary to install only the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions. · yum install salt-master · yum install salt-minion · yum install salt-ssh · yum install salt-syndic · yum install salt-cloud Installing from epel-testing When a new Salt release is packaged, it is first admitted into the epel-testing reposi‐ tory, before being moved to the stable EPEL repository. To install from epel-testing, use the enablerepo argument for yum: yum --enablerepo=epel-testing install salt-minion Installation Using pip Since Salt is on PyPI, it can be installed using pip, though most users prefer to install using RPM packages (which can be installed from EPEL). Installing from pip has a few additional requirements: · Install the group 'Development Tools', yum groupinstall 'Development Tools' · Install the 'zeromq-devel' package if it fails on linking against that afterwards as well. A pip install does not make the init scripts or the /etc/salt directory, and you will need to provide your own systemd service unit. Installation from pip: pip install salt WARNING: If installing from pip (or from source using setup.py install), be advised that the yum-utils package is needed for Salt to manage packages. Also, if the Python dependen‐ cies are not already installed, then you will need additional libraries/tools installed to build some of them. More information on this can be found here. ZeroMQ 4 We recommend using ZeroMQ 4 where available. SaltStack provides ZeroMQ 4.0.5 and pyzmq 14.5.0 in the SaltStack Repository as well as a separate zeromq4 COPR repository. If this repository is added before Salt is installed, then installing either salt-master or salt-minion will automatically pull in ZeroMQ 4.0.5, and additional steps to upgrade ZeroMQ and pyzmq are unnecessary. WARNING: RHEL/CentOS 5 Users Using COPR repos on RHEL/CentOS 5 requires that the python-hashlib package be installed. Not having it present will result in checksum errors because YUM will not be able to process the SHA256 checksums used by COPR. NOTE: For RHEL/CentOS 5 installations, if using the SaltStack repo or Fedora COPR to install Salt (as described above), then it is not necessary to enable the zeromq4 COPR, because those repositories already include ZeroMQ 4. Package Management Salt's interface to yum makes heavy use of the repoquery utility, from the yum-utils pack‐ age. This package will be installed as a dependency if salt is installed via EPEL. How‐ ever, if salt has been installed using pip, or a host is being managed using salt-ssh, then as of version 2014.7.0 yum-utils will be installed automatically to satisfy this dependency. 
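If Salt was installed with pip as described above, no init script or systemd unit file is provided. The following is a minimal sketch of a salt-minion unit for systemd-based releases such as RHEL/CentOS 7, assuming a pip-installed salt-minion executable at /usr/bin/salt-minion (adjust the path for your environment); it is not the unit file shipped with the distribution packages. Save it as /etc/systemd/system/salt-minion.service:

   [Unit]
   Description=The Salt Minion
   After=network.target

   [Service]
   Type=simple
   # salt-minion stays in the foreground when started without -d
   ExecStart=/usr/bin/salt-minion
   Restart=on-failure

   [Install]
   WantedBy=multi-user.target

After creating the file, run systemctl daemon-reload; the service can then be enabled and started as shown in the post-installation tasks below.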
Post-installation tasks Master To have the Master start automatically at boot time: RHEL/CentOS 5 and 6 chkconfig salt-master on RHEL/CentOS 7 systemctl enable salt-master.service To start the Master: RHEL/CentOS 5 and 6 service salt-master start RHEL/CentOS 7 systemctl start salt-master.service Minion To have the Minion start automatically at boot time: RHEL/CentOS 5 and 6 chkconfig salt-minion on RHEL/CentOS 7 systemctl enable salt-minion.service To start the Minion: RHEL/CentOS 5 and 6 service salt-minion start RHEL/CentOS 7 systemctl start salt-minion.service Now go to the Configuring Salt page. Solaris Salt was added to the OpenCSW package repository in September of 2012 by Romeo Theriault <@hawaii.edu> at version 0.10.2 of Salt. It has mainly been tested on Solaris 10 (sparc), though it is built for and has been tested minimally on Solaris 10 (x86), Solaris 9 (sparc/x86) and 11 (sparc/x86). (Please let me know if you're using it on these plat‐ forms!) Most of the testing has also just focused on the minion, though it has verified that the master starts up successfully on Solaris 10. Comments and patches for better support on these platforms is very welcome. As of version 0.10.4, Solaris is well supported under salt, with all of the following working well: 1. remote execution 2. grain detection 3. service control with SMF 4. 'pkg' states with 'pkgadd' and 'pkgutil' modules 5. cron modules/states 6. user and group modules/states 7. shadow password management modules/states Salt is dependent on the following additional packages. These will automatically be installed as dependencies of the py_salt package: · py_yaml · py_pyzmq · py_jinja2 · py_msgpack_python · py_m2crypto · py_crypto · python Installation To install Salt from the OpenCSW package repository you first need to install pkgutil assuming you don't already have it installed: On Solaris 10: pkgadd -d http://get.opencsw.org/now On Solaris 9: wget http://mirror.opencsw.org/opencsw/pkgutil.pkg pkgadd -d pkgutil.pkg all Once pkgutil is installed you'll need to edit it's config file /etc/opt/csw/pkgutil.conf to point it at the unstable catalog: - #mirror=http://mirror.opencsw.org/opencsw/testing + mirror=http://mirror.opencsw.org/opencsw/unstable OK, time to install salt. # Update the catalog root> /opt/csw/bin/pkgutil -U # Install salt root> /opt/csw/bin/pkgutil -i -y py_salt Minion Configuration Now that salt is installed you can find it's configuration files in /etc/opt/csw/salt/. You'll want to edit the minion config file to set the name of your salt master server: - #master: salt + master: your-salt-server If you would like to use pkgutil as the default package provider for your Solaris minions, you can do so using the providers option in the minion config file. You can now start the salt minion like so: On Solaris 10: svcadm enable salt-minion On Solaris 9: /etc/init.d/salt-minion start You should now be able to log onto the salt master and check to see if the salt-minion key is awaiting acceptance: salt-key -l un Accept the key: salt-key -a <your-salt-minion> Run a simple test against the minion: salt '<your-salt-minion>' test.ping Troubleshooting Logs are in /var/log/salt Ubuntu Installation from the SaltStack Repository 2015.5 and later packages for Ubuntu 14 (Trusty) and Ubuntu 12 (Precise) are available in the SaltStack repository. 
NOTE: While Salt packages are built for all Ubuntu supported CPU architectures (i386 and amd64), some of the dependencies available from the SaltStack corporate repository are only suitable for amd64 systems.

IMPORTANT: The repository folder structure changed in the 2015.8.3 release, though the previous repository structure that was documented in 2015.8.1 can continue to be used.

To install using the SaltStack repository:

1. Run the following command to import the SaltStack repository key:

Ubuntu 14:

   wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -

Ubuntu 12:

   wget -O - https://repo.saltstack.com/apt/ubuntu/12.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -

2. Add the following line to /etc/apt/sources.list:

Ubuntu 14:

   deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main

Ubuntu 12:

   deb http://repo.saltstack.com/apt/ubuntu/12.04/amd64/latest precise main

3. Run sudo apt-get update.

4. Now go to the packages installation section.

Installation from the Community Repository
Packages for Ubuntu are also published in the saltstack PPA. If you have the add-apt-repository utility, you can add the repository and import the key in one step:

   sudo add-apt-repository ppa:saltstack/salt

In addition to the main repository, there are secondary repositories for each individual major release. These repositories receive security and point releases but will not upgrade to any subsequent major release. There are currently several available repos: salt16, salt17, salt2014-1, salt2014-7, salt2015-5. For example, to follow 2015.5.x releases:

   sudo add-apt-repository ppa:saltstack/salt2015-5

add-apt-repository: command not found?
The add-apt-repository command is not always present on Ubuntu systems. This can be fixed by installing python-software-properties:

   sudo apt-get install python-software-properties

The following may be required as well:

   sudo apt-get install software-properties-common

Note that since Ubuntu 12.10 (Quantal Quetzal), add-apt-repository is found in the software-properties-common package and is part of the base install. Thus, add-apt-repository should be able to be used out-of-the-box to add the PPA.

Alternately, manually add the repository and import the PPA key with these commands:

   echo deb http://ppa.launchpad.net/saltstack/salt/ubuntu `lsb_release -sc` main | sudo tee /etc/apt/sources.list.d/saltstack.list
   wget -q -O- "http://keyserver.ubuntu.com:11371/pks/lookup?op=get&search=0x4759FA960E27C0A6" | sudo apt-key add -

After adding the repository, update the package management database:

   sudo apt-get update

Install Packages
Install the Salt master, minion, or other packages from the repository with the apt-get command. These examples each install one of the Salt components, but more than one package name may be given at a time:

   · apt-get install salt-api
   · apt-get install salt-cloud
   · apt-get install salt-master
   · apt-get install salt-minion
   · apt-get install salt-ssh
   · apt-get install salt-syndic

Post-installation tasks
Now go to the Configuring Salt page.

Windows
Salt has full support for running the Salt Minion on Windows. There are no plans for the foreseeable future to develop a Salt Master on Windows. For now you must run your Salt Master on a supported operating system to control your Salt Minions on Windows.

Many of the standard Salt modules have been ported to work on Windows and many of the Salt States currently work on Windows as well.
Windows Installer Salt Minion Windows installers can be found here. The output of md5sum <salt minion exe> should match the contents of the corresponding md5 file. Latest stable build from the selected branch: Earlier builds from supported branches Archived builds from unsupported branches NOTE: The installation executable installs dependencies that the Salt minion requires. The 64bit installer has been tested on Windows 7 64bit and Windows Server 2008R2 64bit. The 32bit installer has been tested on Windows 2003 Server 32bit. Please file a bug report on our GitHub repo if issues for other platforms are found. The installer asks for 2 bits of information; the master hostname and the minion name. The installer will update the minion config with these options and then start the minion. The salt-minion service will appear in the Windows Service Manager and can be started and stopped there or with the command line program sc like any other Windows service. If the minion won't start, try installing the Microsoft Visual C++ 2008 x64 SP1 redis‐ tributable. Allow all Windows updates to run salt-minion smoothly. Silent Installer Options The installer can be run silently by providing the /S option at the command line. The in‐ staller also accepts the following options for configuring the Salt Minion silently: · /master= A string value to set the IP address or host name of the master. Default value is 'salt' · /minion-name= A string value to set the minion name. Default is 'hostname' · /start-service= Either a 1 or 0. '1' will start the service, '0' will not. Default is to start the service after installation. Here's an example of using the silent installer: Salt-Minion-2015.5.6-Setup-amd64.exe /S /master=yoursaltmaster /minion-name=yourminionname ↲ /start-service=0 Running the Salt Minion on Windows as an Unprivileged User Notes: - These instructions were tested with Windows Server 2008 R2 - They are generaliz‐ able to any version of Windows that supports a salt-minion A. Create the Unprivileged User that the Salt Minion will Run As 1. Click Start > Control Panel > User Accounts. 2. Click Add or remove user accounts. 3. Click Create new account. 4. Enter salt-user (or a name of your preference) in the New account name field. 5. Select the Standard user radio button. 6. Click the Create Account button. 7. Click on the newly created user account. 8. Click the Create a password link. 9. In the New password and Confirm new password fields, provide a password (e.g "Super‐ SecretMinionPassword4Me!"). 10. In the Type a password hint field, provide appropriate text (e.g. "My Salt Password"). 11. Click the Create password button. 12. Close the Change an Account window. B. Add the New User to the Access Control List for the Salt Folder 1. In a File Explorer window, browse to the path where Salt is installed (the default path is C:\Salt). 2. Right-click on the Salt folder and select Properties. 3. Click on the Security tab. 4. Click the Edit button. 5. Click the Add button. 6. Type the name of your designated Salt user and click the OK button. 7. Check the box to Allow the Modify permission. 8. Click the OK button. 9. Click the OK button to close the Salt Properties window. C. Update the Windows Service User for the salt-minion Service 1. Click Start > Administrative Tools > Services. 2. In the Services list, right-click on salt-minion and select Properties. 3. Click the Log On tab. 4. Click the This account radio button. 5. Provide the account credentials created in section A. 6. Click the OK button. 7. 
Click the OK button to the prompt confirming that the user has been granted the Log On As A Service right. 8. Click the OK button to the prompt confirming that The new logon name will not take effect until you stop and restart the service. 9. Right-Click on salt-minion and select Stop. 10. Right-Click on salt-minion and select Start. Setting up a Windows build environment This document will explain how to set up a development environment for salt on Windows. The development environment allows you to work with the source code to customize or fix bugs. It will also allow you to build your own installation. The Easy Way Prerequisite Software To do this the easy way you only need to install Git for Windows. Create the Build Environment 1. Clone the Salt-Windows-Dev repo from github. Open a command line and type: git clone https://github.com/saltstack/salt-windows-dev 2. Build the Python Environment Go into the salt-windows-dev directory. Right-click the file named dev_env.ps1 and select Run with PowerShell If you get an error, you may need to change the execution policy. Open a powershell window and type the following: Set-ExecutionPolicy RemoteSigned This will download and install Python with all the dependencies needed to develop and build salt. 3. Build the Salt Environment Right-click on the file named dev_env_salt.ps1 and select Run with Powershell This will clone salt into C:\Salt-Dev\salt and set it to the 2015.5 branch. You could optionally run the command from a powershell window with a -Version switch to pull a different version. For example: dev_env_salt.ps1 -Version '2014.7' To view a list of available branches and tags, open a command prompt in your C:Salt-Devsalt directory and type: git branch -a git tag -n The Hard Way Prerequisite Software Install the following software: 1. Git for Windows 2. Nullsoft Installer Download the Prerequisite zip file for your CPU architecture from the SaltStack download site: · Salt32.zip · Salt64.zip These files contain all software required to build and develop salt. Unzip the contents of the file to C:\Salt-Dev\temp. Create the Build Environment 1. Build the Python Environment · Install Python: Browse to the C:\Salt-Dev\temp directory and find the Python installation file for your CPU Architecture under the corresponding subfolder. Double-click the file to install python. Make sure the following are in your PATH environment variable: C:\Python27 C:\Python27\Scripts · Install Pip Open a command prompt and navigate to C:\Salt-Dev\temp Run the following command: python get-pip.py · Easy Install compiled binaries. M2Crypto, PyCrypto, and PyWin32 need to be installed using Easy Install. Open a com‐ mand prompt and navigate to C:\Salt-Dev\temp\<cpuarch>. Run the following commands: easy_install -Z <M2Crypto file name> easy_install -Z <PyCrypto file name> easy_install -Z <PyWin32 file name> NOTE: You can type the first part of the file name and then press the tab key to auto-complete the name of the file. · Pip Install Additional Prerequisites All remaining prerequisites need to be pip installed. These prerequisites are as fol‐ low: · MarkupSafe · Jinja · MsgPack · PSUtil · PyYAML · PyZMQ · WMI · Requests · Certifi Open a command prompt and navigate to C:\Salt-Dev\temp. 
Run the following commands:

   pip install <cpuarch>\<MarkupSafe file name>
   pip install <Jinja file name>
   pip install <cpuarch>\<MsgPack file name>
   pip install <cpuarch>\<psutil file name>
   pip install <cpuarch>\<PyYAML file name>
   pip install <cpuarch>\<pyzmq file name>
   pip install <WMI file name>
   pip install <requests file name>
   pip install <certifi file name>

2. Build the Salt Environment

· Clone Salt
Open a command prompt and navigate to C:\Salt-Dev. Run the following command to clone salt:

   git clone https://github.com/saltstack/salt

· Checkout Branch
Check out the branch or tag of salt you want to work on or build. Open a command prompt and navigate to C:\Salt-Dev\salt. Get a list of available tags and branches by running the following commands:

   git fetch --all

To view a list of available branches:

   git branch -a

To view a list of available tags:

   git tag -n

Check out the branch or tag by typing the following command:

   git checkout <branch/tag name>

· Clean the Environment
When switching between branches, residual files can be left behind that will interfere with the functionality of salt. Therefore, after you check out the branch you want to work on, type the following commands to clean the salt environment:

Developing with Salt
There are two ways to develop with salt. You can run salt's setup.py each time you make a change to the source code, or you can use the setuptools develop mode.

Configure the Minion
Both methods require that the minion configuration be in the C:\salt directory. Copy the conf and var directories from C:\Salt-Dev\salt\pkg\windows\buildenv to C:\salt. Now go into the C:\salt\conf directory and edit the file named minion (no extension). You need to configure the master and id parameters in this file. Edit the following lines:

   master: <ip or name of your master>
   id: <name of your minion>

Setup.py Method
Go into the C:\Salt-Dev\salt directory from a cmd prompt and type:

   python setup.py install --force

This will install salt into your Python installation at C:\Python27. Every time you make an edit to your source code, you'll have to stop the minion, run the setup, and start the minion. To start the salt-minion, go into C:\Python27\Scripts from a cmd prompt and type:

   salt-minion

For debug mode type:

   salt-minion -l debug

To stop the minion press Ctrl+C.

Setup Tools Develop Mode (Preferred Method)
To use the Setup Tools Develop Mode, go into C:\Salt-Dev\salt from a cmd prompt and type:

   pip install -e .

This will install pointers to your source code that resides at C:\Salt-Dev\salt. When you edit your source code you only have to restart the minion.

Build the Windows installer
This is the method of building the installer as of version 2014.7.4.

Clean the Environment
Make sure you don't have any leftover salt files from previous versions of salt in your Python directory.

1. Remove all files that start with salt in the C:\Python27\Scripts directory
2. Remove all files and directories that start with salt in the C:\Python27\Lib\site-packages directory

Install Salt
Install salt using salt's setup.py. From the C:\Salt-Dev\salt directory type the following command:

   python setup.py install --force

Build the Installer
From a cmd prompt go into the C:\Salt-Dev\salt\pkg\windows directory. Type the following command for the branch or tag of salt you're building:

   BuildSalt.bat <branch or tag>

This will copy Python with salt installed to the buildenv\bin directory, make it portable, and then create the Windows installer. The .exe for the Windows installer will be placed in the installer directory.
Testing the Salt minion 1. Create the directory C:\salt (if it doesn't exist already) 2. Copy the example conf and var directories from pkg/windows/buildenv/ into C:\salt 3. Edit C:\salt\conf\minion master: ipaddress or hostname of your salt-master 4. Start the salt-minion cd C:\Python27\Scripts python salt-minion 5. On the salt-master accept the new minion's key sudo salt-key -A This accepts all unaccepted keys. If you're concerned about security just accept the key for this specific minion. 6. Test that your minion is responding On the salt-master run: sudo salt '*' test.ping You should get the following response: {'your minion hostname': True} Single command bootstrap script On a 64 bit Windows host the following script makes an unattended install of salt, includ‐ ing all dependencies: Not up to date. This script is not up to date. Please use the installer found above # (All in one line.) "PowerShell (New-Object System.Net.WebClient).DownloadFile('http://csa-net.dk/salt/bootstr ↲ ap64.bat','C:\bootstrap.bat');(New-Object -com Shell.Application).ShellExecute('C:\bootstrap.bat');" You can execute the above command remotely from a Linux host using winexe: winexe -U "administrator" //fqdn "PowerShell (New-Object ......);" For more info check http://csa-net.dk/salt Packages management under Windows 2003 On windows Server 2003, you need to install optional component "wmi windows installer provider" to have full list of installed packages. If you don't have this, salt-minion can't report some installed software. SUSE With openSUSE 13.2, Salt 2014.1.11 is available in the primary repositories. The devel:language:python repo will have more up to date versions of salt, all package devel‐ opment will be done there. Installation Salt can be installed using zypper and is available in the standard openSUSE repositories. Stable Release Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions. 
   zypper install salt-master
   zypper install salt-minion

Post-installation tasks

openSUSE

Master
To have the Master start automatically at boot time:

   systemctl enable salt-master.service

To start the Master:

   systemctl start salt-master.service

Minion
To have the Minion start automatically at boot time:

   systemctl enable salt-minion.service

To start the Minion:

   systemctl start salt-minion.service

Post-installation tasks

SLES

Master
To have the Master start automatically at boot time:

   chkconfig salt-master on

To start the Master:

   rcsalt-master start

Minion
To have the Minion start automatically at boot time:

   chkconfig salt-minion on

To start the Minion:

   rcsalt-minion start

Unstable Release

openSUSE
For openSUSE Factory run the following as root:

   zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_Factory/devel:languages:python.repo
   zypper refresh
   zypper install salt salt-minion salt-master

For openSUSE 13.2 run the following as root:

   zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.2/devel:languages:python.repo
   zypper refresh
   zypper install salt salt-minion salt-master

For openSUSE 13.1 run the following as root:

   zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.1/devel:languages:python.repo
   zypper refresh
   zypper install salt salt-minion salt-master

For the bleeding edge python Factory repository, run the following as root:

   zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/bleeding_edge_python_Factory/devel:languages:python.repo
   zypper refresh
   zypper install salt salt-minion salt-master

SUSE Linux Enterprise
For SLE 12 run the following as root:

   zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_12/devel:languages:python.repo
   zypper refresh
   zypper install salt salt-minion salt-master

For SLE 11 SP3 run the following as root:

   zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP3/devel:languages:python.repo
   zypper refresh
   zypper install salt salt-minion salt-master

For SLE 11 SP2 run the following as root:

   zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP2/devel:languages:python.repo
   zypper refresh
   zypper install salt salt-minion salt-master

Now go to the Configuring Salt page.

Dependencies
Salt should run on any Unix-like platform so long as the dependencies are met.

· Python >= 2.6, < 3.0
· msgpack-python - High-performance message interchange format
· YAML - Python YAML bindings
· Jinja2 - parsing Salt States (configurable in the master settings)
· MarkupSafe - Implements an XML/HTML/XHTML markup-safe string for Python
· apache-libcloud - Python lib for interacting with many of the popular cloud service providers using a unified API
· Requests - HTTP library

Depending on the chosen Salt transport, ZeroMQ or RAET, dependencies vary:

· ZeroMQ:
  · ZeroMQ >= 3.2.0
  · pyzmq >= 2.2.0 - ZeroMQ Python bindings
  · PyCrypto - The Python cryptography toolkit
· RAET:
  · libnacl - Python bindings to libsodium
  · ioflo - The flo programming interface that RAET and salt-raet are built on
  · RAET - The world's most awesome UDP protocol

Salt defaults to the ZeroMQ transport, and the choice can be made at install time, for example:

   python setup.py --salt-transport=raet install

This way, only the required dependencies are pulled by the setup script if need be.
If installing using pip, the --salt-transport install option can be provided like: pip install --install-option="--salt-transport=raet" salt NOTE: Salt does not bundle dependencies that are typically distributed as part of the base OS. If you have unmet dependencies and are using a custom or minimal installation, you might need to install some additional packages from your OS vendor. Optional Dependencies · mako - an optional parser for Salt States (configurable in the master settings) · gcc - dynamic Cython module compiling Upgrading Salt When upgrading Salt, the master(s) should always be upgraded first. Backward compatibil‐ ity for minions running newer versions of salt than their masters is not guaranteed. Whenever possible, backward compatibility between new masters and old minions will be pre‐ served. Generally, the only exception to this policy is in case of a security vulnerabil‐ ity.
TUTORIALS Introduction Salt Masterless Quickstart Running a masterless salt-minion lets you use Salt's configuration management for a single machine without calling out to a Salt master on another machine. Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things: · Stand up a master server via States (Salting a Salt Master) · Use salt-call commands on a system without connectivity to a master · Masterless States, run states entirely from files local to the minion It is also useful for testing out state trees before deploying to a production setup. Bootstrap Salt Minion The salt-bootstrap script makes bootstrapping a server with Salt simple for any OS with a Bourne shell: curl -L https://bootstrap.saltstack.com -o install_salt.sh sudo sh install_salt.sh See the salt-bootstrap documentation for other one liners. When using Vagrant to test out salt, the Vagrant salt provisioner will provision the VM for you. Telling Salt to Run Masterless To instruct the minion to not look for a master, the file_client configuration option needs to be set in the minion configuration file. By default the file_client is set to remote so that the minion gathers file server and pillar data from the salt master. When setting the file_client option to local the minion is configured to not gather this data from the master. file_client: local Now the salt minion will not look for a master and will assume that the local system has all of the file and pillar resources. NOTE: When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail. The salt-call command stands on its own and does not need the salt-minion daemon. Create State Tree Following the successful installation of a salt-minion, the next step is to create a state tree, which is where the SLS files that comprise the possible states of the minion are stored. The following example walks through the steps necessary to create a state tree that ensures that the server has the Apache webserver installed. NOTE: For a complete explanation on Salt States, see the tutorial. 1. Create the top.sls file: /srv/salt/top.sls: base: '*': - webserver 2. Create the webserver state tree: /srv/salt/webserver.sls: apache: # ID declaration pkg: # state declaration - installed # function declaration NOTE: The apache package has different names on different platforms, for instance on Debian/Ubuntu it is apache2, on Fedora/RHEL it is httpd and on Arch it is apache The only thing left is to provision our minion using salt-call and the highstate command. Salt-call The salt-call command is used to run module functions locally on a minion instead of exe‐ cuting them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data: salt-call --local state.highstate The --local flag tells the salt-minion to look for the state tree in the local file system and not to contact a Salt Master for instructions. To provide verbose output, use -l debug: salt-call --local state.highstate -l debug The minion first examines the top.sls file and determines that it is a part of the group matched by * glob and that the webserver SLS should be applied. It then examines the webserver.sls file and finds the apache state, which installs the Apache package. 
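As noted above, the Apache package name differs between platforms. A hedged sketch (not part of the original walkthrough) of the same state written with the dotted pkg.installed form and a grains lookup, so a single SLS works across the platforms mentioned:

   /srv/salt/webserver.sls:

   # Map the os_family grain to the platform's Apache package name
   {% set apache_pkg = {'Debian': 'apache2', 'RedHat': 'httpd', 'Arch': 'apache'}.get(grains['os_family'], 'apache2') %}

   apache:
     pkg.installed:
       - name: {{ apache_pkg }}

The short form used earlier (pkg: - installed) and the dotted pkg.installed form are interchangeable; the name argument simply overrides the ID declaration as the package to install.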
The minion should now have Apache installed, and the next step is to begin learning how to write more complex states. Basics Standalone Minion Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things: · Use salt-call commands on a system without connectivity to a master · Masterless States, run states entirely from files local to the minion NOTE: When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail. The salt-call command stands on its own and does not need the salt-minion daemon. Telling Salt Call to Run Masterless The salt-call command is used to run module functions locally on a minion instead of exe‐ cuting them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data. To instruct the minion to not look for a master when running salt-call the file_client configuration option needs to be set. By default the file_client is set to remote so that the minion knows that file server and pillar data are to be gathered from the master. When setting the file_client option to local the minion is configured to not gather this data from the master. file_client: local Now the salt-call command will not look for a master and will assume that the local system has all of the file and pillar resources. Running States Masterless The state system can be easily run without a Salt master, with all needed files local to the minion. To do this the minion configuration file needs to be set up to know how to return file_roots information like the master. The file_roots setting defaults to /srv/salt for the base environment just like on the master: file_roots: base: - /srv/salt Now set up the Salt State Tree, top file, and SLS modules in the same way that they would be set up on a master. Now, with the file_client option set to local and an available state tree then calls to functions in the state module will use the information in the file_roots on the minion instead of checking in with the master. Remember that when creating a state tree on a minion there are no syntax or path changes needed, SLS modules written to be used from a master do not need to be modified in any way to work with a minion. This makes it easy to "script" deployments with Salt states without having to set up a master, and allows for these SLS modules to be easily moved into a Salt master as the deployment grows. The declared state can now be executed with: salt-call state.highstate Or the salt-call command can be executed with the --local flag, this makes it unnecessary to change the configuration file: salt-call state.highstate --local External Pillars External pillars are supported when running in masterless mode. Opening the Firewall up for Salt The Salt master communicates with the minions using an AES-encrypted ZeroMQ connection. These communications are done over TCP ports 4505 and 4506, which need to be accessible on the master only. This document outlines suggested firewall rules for allowing these incom‐ ing connections to the master. NOTE: No firewall configuration needs to be done on Salt minions. These changes refer to the master only. Fedora 18 and beyond / RHEL 7 / CentOS 7 Starting with Fedora 18 FirewallD is the tool that is used to dynamically manage the fire‐ wall rules on a host. 
It has support for IPv4/6 settings and the separation of runtime and permanent configurations. To interact with FirewallD use the command line client fire‐ wall-cmd. firewall-cmd example: firewall-cmd --permanent --zone=<zone> --add-port=4505-4506/tcp Please choose the desired zone according to your setup. Don't forget to reload after you made your changes. firewall-cmd --reload RHEL 6 / CentOS 6 The lokkit command packaged with some Linux distributions makes opening iptables firewall ports very simple via the command line. Just be careful to not lock out access to the server by neglecting to open the ssh port. lokkit example: lokkit -p 22:tcp -p 4505:tcp -p 4506:tcp The system-config-firewall-tui command provides a text-based interface to modifying the firewall. system-config-firewall-tui: system-config-firewall-tui openSUSE Salt installs firewall rules in /etc/sysconfig/SuSEfirewall2.d/services/salt. Enable with: SuSEfirewall2 open SuSEfirewall2 start If you have an older package of Salt where the above configuration file is not included, the SuSEfirewall2 command makes opening iptables firewall ports very simple via the com‐ mand line. SuSEfirewall example: SuSEfirewall2 open EXT TCP 4505 SuSEfirewall2 open EXT TCP 4506 The firewall module in YaST2 provides a text-based interface to modifying the firewall. YaST2: yast2 firewall iptables Different Linux distributions store their iptables (also known as netfilter) rules in dif‐ ferent places, which makes it difficult to standardize firewall documentation. Included are some of the more common locations, but your mileage may vary. Fedora / RHEL / CentOS: /etc/sysconfig/iptables Arch Linux: /etc/iptables/iptables.rules Debian Follow these instructions: https://wiki.debian.org/iptables Once you've found your firewall rules, you'll need to add the two lines below to allow traffic on tcp/4505 and tcp/4506: -A INPUT -m state --state new -m tcp -p tcp --dport 4505 -j ACCEPT -A INPUT -m state --state new -m tcp -p tcp --dport 4506 -j ACCEPT Ubuntu Salt installs firewall rules in /etc/ufw/applications.d/salt.ufw. Enable with: ufw allow salt pf.conf The BSD-family of operating systems uses packet filter (pf). The following example describes the additions to pf.conf needed to access the Salt master. pass in on $int_if proto tcp from any to $int_if port 4505 pass in on $int_if proto tcp from any to $int_if port 4506 Once these additions have been made to the pf.conf the rules will need to be reloaded. This can be done using the pfctl command. pfctl -vf /etc/pf.conf Whitelist communication to Master There are situations where you want to selectively allow Minion traffic from specific hosts or networks into your Salt Master. The first scenario which comes to mind is to pre‐ vent unwanted traffic to your Master out of security concerns, but another scenario is to handle Minion upgrades when there are backwards incompatible changes between the installed Salt versions in your environment. 
Here is an example Linux iptables ruleset to be set on the Master: # Allow Minions from these networks -I INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT -I INPUT -s 10.1.3.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT # Allow Salt to communicate with Master on the loopback interface -A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT # Reject everything else -A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT NOTE: The important thing to note here is that the salt command needs to communicate with the listening network socket of salt-master on the loopback interface. Without this you will see no outgoing Salt traffic from the master, even for a simple salt '*' test.ping, because the salt client never reached the salt-master to tell it to carry out the execution. Using cron with Salt The Salt Minion can initiate its own highstate using the salt-call command. $ salt-call state.highstate This will cause the minion to check in with the master and ensure it is in the correct 'state'. Use cron to initiate a highstate If you would like the Salt Minion to regularly check in with the master you can use the venerable cron to run the salt-call command. # PATH=/bin:/sbin:/usr/bin:/usr/sbin 00 00 * * * salt-call state.highstate The above cron entry will run a highstate every day at midnight. NOTE: Be aware that you may need to ensure the PATH for cron includes any scripts or commands that need to be executed. Remote execution tutorial Before continuing make sure you have a working Salt installation by following the instal‐ lation and the configuration instructions. Stuck? There are many ways to get help from the Salt community including our mailing list and our IRC channel #salt. Order your minions around Now that you have a master and at least one minion communicating with each other you can perform commands on the minion via the salt command. Salt calls are comprised of three main components: salt '<target>' <function> [arguments] SEE ALSO: salt manpage target The target component allows you to filter which minions should run the following function. The default filter is a glob on the minion id. For example: salt '*' test.ping salt '*.example.org' test.ping Targets can be based on minion system information using the Grains system: salt -G 'os:Ubuntu' test.ping SEE ALSO: Grains system Targets can be filtered by regular expression: salt -E 'virtmach[0-9]' test.ping Targets can be explicitly specified in a list: salt -L 'foo,bar,baz,quo' test.ping Or Multiple target types can be combined in one command: salt -C 'G@os:Ubuntu and webser* or E@database.*' test.ping function A function is some functionality provided by a module. Salt ships with a large collection of available functions. List all available functions on your minions: salt '*' sys.doc Here are some examples: Show all currently available minions: salt '*' test.ping Run an arbitrary shell command: salt '*' cmd.run 'uname -a' SEE ALSO: the full list of modules arguments Space-delimited arguments to the function: salt '*' cmd.exec_code python 'import sys; print sys.version' Optional, keyword arguments are also supported: salt '*' pip.install salt timeout=5 upgrade=True They are always in the form of kwarg=argument. Pillar Walkthrough NOTE: This walkthrough assumes that the reader has already completed the initial Salt walk‐ through. Pillars are tree-like structures of data defined on the Salt Master and passed through to minions. 
They allow confidential, targeted data to be securely sent only to the relevant minion. NOTE: Grains and Pillar are sometimes confused, just remember that Grains are data about a minion which is stored or generated from the minion. This is why information like the OS and CPU type are found in Grains. Pillar is information about a minion or many min‐ ions stored or generated on the Salt Master. Pillar data is useful for: Highly Sensitive Data: Information transferred via pillar is guaranteed to only be presented to the min‐ ions that are targeted, making Pillar suitable for managing security information, such as cryptographic keys and passwords. Minion Configuration: Minion modules such as the execution modules, states, and returners can often be configured via data stored in pillar. Variables: Variables which need to be assigned to specific minions or groups of minions can be defined in pillar and then accessed inside sls formulas and template files. Arbitrary Data: Pillar can contain any basic data structure in dictionary format, so a key/value store can be defined making it easy to iterate over a group of values in sls formu‐ las. Pillar is therefore one of the most important systems when using Salt. This walkthrough is designed to get a simple Pillar up and running in a few minutes and then to dive into the capabilities of Pillar and where the data is available. Setting Up Pillar The pillar is already running in Salt by default. To see the minion's pillar data: salt '*' pillar.items NOTE: Prior to version 0.16.2, this function is named pillar.data. This function name is still supported for backwards compatibility. By default the contents of the master configuration file are loaded into pillar for all minions. This enables the master configuration file to be used for global configuration of minions. Similar to the state tree, the pillar is comprised of sls files and has a top file. The default location for the pillar is in /srv/pillar. NOTE: The pillar location can be configured via the pillar_roots option inside the master configuration file. It must not be in a subdirectory of the state tree or file_roots. If the pillar is under file_roots, any pillar targeting can be bypassed by minions. To start setting up the pillar, the /srv/pillar directory needs to be present: mkdir /srv/pillar Now create a simple top file, following the same format as the top file used for states: /srv/pillar/top.sls: base: '*': - data This top file associates the data.sls file to all minions. Now the /srv/pillar/data.sls file needs to be populated: /srv/pillar/data.sls: info: some data To ensure that the minions have the new pillar data, issue a command to them asking that they fetch their pillars from the master: salt '*' saltutil.refresh_pillar Now that the minions have the new pillar, it can be retrieved: salt '*' pillar.items The key info should now appear in the returned pillar data. More Complex Data Unlike states, pillar files do not need to define formulas. This example sets up user data with a UID: /srv/pillar/users/init.sls: users: thatch: 1000 shouse: 1001 utahdave: 1002 redbeard: 1003 NOTE: The same directory lookups that exist in states exist in pillar, so the file users/init.sls can be referenced with users in the top file. The top file will need to be updated to include this sls file: /srv/pillar/top.sls: base: '*': - data - users Now the data will be available to the minions. 
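To confirm that the new keys actually reached the minions before using them in states, the data can be queried directly from the master. This is just a quick check; pillar.get also accepts a colon-delimited path for nested values, and users:thatch below simply follows the example data above:

salt '*' pillar.get users
salt '*' pillar.get users:thatch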
To use the pillar data in a state, you can use Jinja: /srv/salt/users/init.sls {% for user, uid in pillar.get('users', {}).items() %} {{user}}: user.present: - uid: {{uid}} {% endfor %} This approach allows for users to be safely defined in a pillar and then the user data is applied in an sls file. Parameterizing States With Pillar Pillar data can be accessed in state files to customise behavior for each minion. All pil‐ lar (and grain) data applicable to each minion is substituted into the state files through templating before being run. Typical uses include setting directories appropriate for the minion and skipping states that don't apply. A simple example is to set up a mapping of package names in pillar for separate Linux dis‐ tributions: /srv/pillar/pkg/init.sls: pkgs: {% if grains['os_family'] == 'RedHat' %} apache: httpd vim: vim-enhanced {% elif grains['os_family'] == 'Debian' %} apache: apache2 vim: vim {% elif grains['os'] == 'Arch' %} apache: apache vim: vim {% endif %} The new pkg sls needs to be added to the top file: /srv/pillar/top.sls: base: '*': - data - users - pkg Now the minions will auto map values based on respective operating systems inside of the pillar, so sls files can be safely parameterized: /srv/salt/apache/init.sls: apache: pkg.installed: - name: {{ pillar['pkgs']['apache'] }} Or, if no pillar is available a default can be set as well: NOTE: The function pillar.get used in this example was added to Salt in version 0.14.0 /srv/salt/apache/init.sls: apache: pkg.installed: - name: {{ salt['pillar.get']('pkgs:apache', 'httpd') }} In the above example, if the pillar value pillar['pkgs']['apache'] is not set in the min‐ ion's pillar, then the default of httpd will be used. NOTE: Under the hood, pillar is just a Python dict, so Python dict methods such as get and items can be used. Pillar Makes Simple States Grow Easily One of the design goals of pillar is to make simple sls formulas easily grow into more flexible formulas without refactoring or complicating the states. A simple formula: /srv/salt/edit/vim.sls: vim: pkg.installed: [] /etc/vimrc: file.managed: - source: salt://edit/vimrc - mode: 644 - user: root - group: root - require: - pkg: vim Can be easily transformed into a powerful, parameterized formula: /srv/salt/edit/vim.sls: vim: pkg.installed: - name: {{ pillar['pkgs']['vim'] }} /etc/vimrc: file.managed: - source: {{ pillar['vimrc'] }} - mode: 644 - user: root - group: root - require: - pkg: vim Where the vimrc source location can now be changed via pillar: /srv/pillar/edit/vim.sls: {% if grains['id'].startswith('dev') %} vimrc: salt://edit/dev_vimrc {% elif grains['id'].startswith('qa') %} vimrc: salt://edit/qa_vimrc {% else %} vimrc: salt://edit/vimrc {% endif %} Ensuring that the right vimrc is sent out to the correct minions. Setting Pillar Data on the Command Line Pillar data can be set on the command line like so: salt '*' state.highstate pillar='{"foo": "bar"}' The state.sls command can also be used to set pillar values via the command line: salt '*' state.sls my_sls_file pillar='{"hello": "world"}' NOTE: If a key is passed on the command line that already exists on the minion, the key that is passed in will overwrite the entire value of that key, rather than merging only the specified value set via the command line. 
The example below will swap the value for vim with telnet in the previously specified list, notice the nested pillar dict: salt '*' state.sls edit.vim pillar='{"pkgs": {"vim": "telnet"}}' NOTE: This will attempt to install telnet on your minions, feel free to uninstall the package or replace telnet value with anything else. More On Pillar Pillar data is generated on the Salt master and securely distributed to minions. Salt is not restricted to the pillar sls files when defining the pillar but can retrieve data from external sources. This can be useful when information about an infrastructure is stored in a separate location. Reference information on pillar and the external pillar interface can be found in the Salt documentation: Pillar Minion Config in Pillar Minion configuration options can be set on pillars. Any option that you want to modify, should be in the first level of the pillars, in the same way you set the options in the config file. For example, to configure the MySQL root password to be used by MySQL Salt execution module: mysql.pass: hardtoguesspassword This is very convenient when you need some dynamic configuration change that you want to be applied on the fly. For example, there is a chicken and the egg problem if you do this: mysql-admin-passwd: mysql_user.present: - name: root - password: somepasswd mydb: mysql_db.present The second state will fail, because you changed the root password and the minion didn't notice it. Setting mysql.pass in the pillar, will help to sort out the issue. But always change the root admin password in the first place. This is very helpful for any module that needs credentials to apply state changes: mysql, keystone, etc. States How Do I Use Salt States? Simplicity, Simplicity, Simplicity Many of the most powerful and useful engineering solutions are founded on simple princi‐ ples. Salt States strive to do just that: K.I.S.S. (Keep It Stupidly Simple) The core of the Salt State system is the SLS, or SaLt State file. The SLS is a representa‐ tion of the state in which a system should be in, and is set up to contain this data in a simple format. This is often called configuration management. NOTE: This is just the beginning of using states, make sure to read up on pillar Pillar next. It is All Just Data Before delving into the particulars, it will help to understand that the SLS file is just a data structure under the hood. While understanding that the SLS is just a data structure isn't critical for understanding and making use of Salt States, it should help bolster knowledge of where the real power is. SLS files are therefore, in reality, just dictionaries, lists, strings, and numbers. By using this approach Salt can be much more flexible. As one writes more state files, it becomes clearer exactly what is being written. The result is a system that is easy to understand, yet grows with the needs of the admin or developer. The Top File The example SLS files in the below sections can be assigned to hosts using a file called top.sls. This file is described in-depth here. Default Data - YAML By default Salt represents the SLS data in what is one of the simplest serialization for‐ mats available - YAML. A typical SLS file will often look like this in YAML: NOTE: These demos use some generic service and package names, different distributions often use different names for packages and services. For instance apache should be replaced with httpd on a Red Hat system. Salt uses the name of the init script, systemd name, upstart name etc. 
based on what the underlying platform uses for service management. To get a list of the available service names on a platform, execute the service.get_all Salt function. Information on how to make states work with multiple distributions appears later in the tutorial.

apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

This SLS data will ensure that the package named apache is installed, and that the apache service is running. The components can be explained in a simple way. The first line is the ID for a set of data, and it is called the ID Declaration. This ID sets the name of the thing that needs to be manipulated. The second and third lines contain the state module function to be run, in the format <state_module>.<function>. The pkg.installed state module function ensures that a software package is installed via the system's native package manager. The service.running state module function ensures that a given system daemon is running. Finally, on line four, is the word require. This is called a Requisite Statement, and it makes sure that the Apache service is only started after a successful installation of the apache package.

Adding Configs and Users
When setting up a service like an Apache web server, many more components may need to be added. The Apache configuration file will most likely be managed, and a user and group may need to be set up.

apache:
  pkg.installed: []
  service.running:
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644

This SLS data greatly extends the first example, and includes a config file, a user, a group, and a new requisite statement: watch. Adding more states is easy; since the new user and group states are under the Apache ID, the user and group will be the Apache user and group. The require statements make sure that the user is only created after the group, and that the group is only created after the Apache package is installed. Next, the require statement under service was changed to watch, and it now watches three states instead of just one. The watch statement does the same thing as require, making sure that the other states run before the state with a watch, but it adds an extra component: the watch statement will run the state's watcher function for any changes to the watched states. So if the package is updated, the config file changed, or the user uid modified, then the service state's watcher will be run. The service state's watcher just restarts the service, so in this case a change in the config file will also trigger a restart of the respective service.

Moving Beyond a Single SLS
When setting up Salt States in a scalable manner, more than one SLS will need to be used. The above examples were in a single SLS file, but two or more SLS files can be combined to build out a State Tree. The above example also references a file with a strange source - salt://apache/httpd.conf. That file will need to be available as well. The SLS files are laid out in a directory structure on the Salt master; an SLS is just a file, and files to download are just files.
The Apache example would be laid out in the root of the Salt file server like this:

apache/init.sls
apache/httpd.conf

So the httpd.conf is just a file in the apache directory, and is referenced directly.

Do not use dots in SLS file names or their directories
The initial implementation of top.sls and the include-declaration followed the Python import model, where a slash is represented as a period. This means that an SLS file with a period in the name (besides the suffix period) cannot be referenced. For example, webserver_1.0.sls is not referenceable because webserver_1.0 would refer to the directory/file webserver_1/0.sls. The same applies to any subdirectories; this is especially 'tricky' when git repos are created. Another command that typically cannot render its output is state.show_sls when run against a file in a path that contains a dot.

But when using more than one SLS file, more components can be added to the toolkit. Consider this SSH example:

ssh/init.sls:

openssh-client:
  pkg.installed

/etc/ssh/ssh_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/ssh_config
    - require:
      - pkg: openssh-client

ssh/server.sls:

include:
  - ssh

openssh-server:
  pkg.installed

sshd:
  service.running:
    - require:
      - pkg: openssh-client
      - pkg: openssh-server
      - file: /etc/ssh/banner
      - file: /etc/ssh/sshd_config

/etc/ssh/sshd_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/sshd_config
    - require:
      - pkg: openssh-server

/etc/ssh/banner:
  file:
    - managed
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/banner
    - require:
      - pkg: openssh-server

NOTE: Notice that we use two similar ways of denoting that a file is managed by Salt. In the /etc/ssh/sshd_config state section above, we use the file.managed state declaration, whereas with the /etc/ssh/banner state section, we use the file state declaration and add a managed attribute to that state declaration. Both ways produce an identical result; the first way -- using file.managed -- is merely a shortcut.

Now our State Tree looks like this:

apache/init.sls
apache/httpd.conf
ssh/init.sls
ssh/server.sls
ssh/banner
ssh/ssh_config
ssh/sshd_config

This example introduces the include statement. The include statement includes another SLS file so that components found in it can be required, watched or, as will soon be demonstrated, extended. The include statement allows for states to be cross linked. When an SLS has an include statement it is literally extended to include the contents of the included SLS files. Note that some of the SLS files are called init.sls, while others are not. More info on what this means can be found in the States Tutorial.

Extending Included SLS Data
Sometimes SLS data needs to be extended. Perhaps the apache service needs to watch additional resources, or under certain circumstances a different file needs to be placed. In these examples, the first will add a custom banner to ssh and the second will add more watchers to apache to include mod_python.

ssh/custom-server.sls:

include:
  - ssh.server

extend:
  /etc/ssh/banner:
    file:
      - source: salt://ssh/custom-banner

python/mod_python.sls:

include:
  - apache

extend:
  apache:
    service:
      - watch:
        - pkg: mod_python

mod_python:
  pkg.installed

The custom-server.sls file uses the extend statement to overwrite where the banner is being downloaded from, and therefore changes what file is being used to configure the banner.
In the new mod_python SLS the mod_python package is added, but more importantly the apache service was extended to also watch the mod_python package. Using extend with require or watch The extend statement works differently for require or watch. It appends to, rather than replacing the requisite component. Understanding the Render System Since SLS data is simply that (data), it does not need to be represented with YAML. Salt defaults to YAML because it is very straightforward and easy to learn and use. But the SLS files can be rendered from almost any imaginable medium, so long as a renderer module is provided. The default rendering system is the yaml_jinja renderer. The yaml_jinja renderer will first pass the template through the Jinja2 templating system, and then through the YAML parser. The benefit here is that full programming constructs are available when creating SLS files. Other renderers available are yaml_mako and yaml_wempy which each use the Mako or Wempy templating system respectively rather than the jinja templating system, and more notably, the pure Python or py, pydsl & pyobjects renderers. The py renderer allows for SLS files to be written in pure Python, allowing for the utmost level of flexibility and power when preparing SLS data; while the pydsl renderer provides a flexible, domain-specific language for authoring SLS data in Python; and the pyobjects renderer gives you a "Pythonic" inter‐ face to building state data. NOTE: The templating engines described above aren't just available in SLS files. They can also be used in file.managed states, making file management much more dynamic and flex‐ ible. Some examples for using templates in managed files can be found in the documenta‐ tion for the file states, as well as the MooseFS example below. Getting to Know the Default - yaml_jinja The default renderer - yaml_jinja, allows for use of the jinja templating system. A guide to the Jinja templating system can be found here: http://jinja.pocoo.org/docs When working with renderers a few very useful bits of data are passed in. In the case of templating engine based renderers, three critical components are available, salt, grains, and pillar. The salt object allows for any Salt function to be called from within the tem‐ plate, and grains allows for the Grains to be accessed from within the template. A few examples: apache/init.sls: apache: pkg.installed: {% if grains['os'] == 'RedHat'%} - name: httpd {% endif %} service.running: {% if grains['os'] == 'RedHat'%} - name: httpd {% endif %} - watch: - pkg: apache - file: /etc/httpd/conf/httpd.conf - user: apache user.present: - uid: 87 - gid: 87 - home: /var/www/html - shell: /bin/nologin - require: - group: apache group.present: - gid: 87 - require: - pkg: apache /etc/httpd/conf/httpd.conf: file.managed: - source: salt://apache/httpd.conf - user: root - group: root - mode: 644 This example is simple. If the os grain states that the operating system is Red Hat, then the name of the Apache package and service needs to be httpd. 
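When writing conditionals like the one above, it can help to first confirm what the relevant grains actually report on the targeted minions. The grains.item function accepts one or more grain names; os and os_family are simply the grains used throughout these examples:

salt '*' grains.item os os_family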
A more aggressive way to use Jinja can be found here, in a module to set up a MooseFS dis‐ tributed filesystem chunkserver: moosefs/chunk.sls: include: - moosefs {% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %} /mnt/moose{{ mnt[-1] }}: mount.mounted: - device: {{ mnt }} - fstype: xfs - mkmnt: True file.directory: - user: mfs - group: mfs - require: - user: mfs - group: mfs {% endfor %} /etc/mfshdd.cfg: file.managed: - source: salt://moosefs/mfshdd.cfg - user: root - group: root - mode: 644 - template: jinja - require: - pkg: mfs-chunkserver /etc/mfschunkserver.cfg: file.managed: - source: salt://moosefs/mfschunkserver.cfg - user: root - group: root - mode: 644 - template: jinja - require: - pkg: mfs-chunkserver mfs-chunkserver: pkg.installed: [] mfschunkserver: service.running: - require: {% for mnt in salt['cmd.run']('ls /dev/data/moose*') %} - mount: /mnt/moose{{ mnt[-1] }} - file: /mnt/moose{{ mnt[-1] }} {% endfor %} - file: /etc/mfschunkserver.cfg - file: /etc/mfshdd.cfg - file: /var/lib/mfs This example shows much more of the available power of Jinja. Multiple for loops are used to dynamically detect available hard drives and set them up to be mounted, and the salt object is used multiple times to call shell commands to gather data. Introducing the Python, PyDSL, and the Pyobjects Renderers Sometimes the chosen default renderer might not have enough logical power to accomplish the needed task. When this happens, the Python renderer can be used. Normally a YAML ren‐ derer should be used for the majority of SLS files, but an SLS file set to use another renderer can be easily added to the tree. This example shows a very basic Python SLS file: python/django.sls: #!py def run(): ''' Install the django package ''' return {'include': ['python'], 'django': {'pkg': ['installed']}} This is a very simple example; the first line has an SLS shebang that tells Salt to not use the default renderer, but to use the py renderer. Then the run function is defined, the return value from the run function must be a Salt friendly data structure, or better known as a Salt HighState data structure. Alternatively, using the pydsl renderer, the above example can be written more succinctly as: #!pydsl include('python', delayed=True) state('django').pkg.installed() The pyobjects renderer provides an "Pythonic" object based approach for building the state data. The above example could be written as: #!pyobjects include('python') Pkg.installed("django") These Python examples would look like this if they were written in YAML: include: - python django: pkg.installed This example clearly illustrates that; one, using the YAML renderer by default is a wise decision and two, unbridled power can be obtained where needed by using a pure Python SLS. Running and debugging salt states. Once the rules in an SLS are ready, they should be tested to ensure they work properly. To invoke these rules, simply execute salt '*' state.highstate on the command line. If you get back only hostnames with a : after, but no return, chances are there is a problem with one or more of the sls files. On the minion, use the salt-call command: salt-call state.highstate -l debug to examine the output for errors. This should help troubleshoot the issue. The minions can also be started in the foreground in debug mode: salt-minion -l debug. 
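A dry run is another low-risk way to debug a state tree. Passing test=True makes the state run report what would change without applying anything, and it works with state.highstate and state.sls alike:

salt '*' state.highstate test=True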
Next Reading With an understanding of states, the next recommendation is to become familiar with Salt's pillar interface: Pillar Walkthrough States tutorial, part 1 - Basic Usage The purpose of this tutorial is to demonstrate how quickly you can configure a system to be managed by Salt States. For detailed information about the state system please refer to the full states reference. This tutorial will walk you through using Salt to configure a minion to run the Apache HTTP server and to ensure the server is running. Before continuing make sure you have a working Salt installation by following the instal‐ lation and the configuration instructions. Stuck? There are many ways to get help from the Salt community including our mailing list and our IRC channel #salt. Setting up the Salt State Tree States are stored in text files on the master and transferred to the minions on demand via the master's File Server. The collection of state files make up the State Tree. To start using a central state system in Salt, the Salt File Server must first be set up. Edit the master config file (file_roots) and uncomment the following lines: file_roots: base: - /srv/salt NOTE: If you are deploying on FreeBSD via ports, the file_roots path defaults to /usr/local/etc/salt/states. Restart the Salt master in order to pick up this change: pkill salt-master salt-master -d Preparing the Top File On the master, in the directory uncommented in the previous step, (/srv/salt by default), create a new file called top.sls and add the following: base: '*': - webserver The top file is separated into environments (discussed later). The default environment is base. Under the base environment a collection of minion matches is defined; for now simply specify all hosts (*). Targeting minions The expressions can use any of the targeting mechanisms used by Salt — minions can be matched by glob, PCRE regular expression, or by grains. For example: base: 'os:Fedora': - match: grain - webserver Create an sls file In the same directory as the top file, create a file named webserver.sls, containing the following: apache: # ID declaration pkg: # state declaration - installed # function declaration The first line, called the id-declaration, is an arbitrary identifier. In this case it defines the name of the package to be installed. NOTE: The package name for the Apache httpd web server may differ depending on OS or distro — for example, on Fedora it is httpd but on Debian/Ubuntu it is apache2. The second line, called the state-declaration, defines which of the Salt States we are using. In this example, we are using the pkg state to ensure that a given package is installed. The third line, called the function-declaration, defines which function in the pkg state module to call. Renderers States sls files can be written in many formats. Salt requires only a simple data structure and is not concerned with how that data structure is built. Tem‐ plating languages and DSLs are a dime-a-dozen and everyone has a favorite. Building the expected data structure is the job of Salt renderers and they are dead-simple to write. In this tutorial we will be using YAML in Jinja2 templates, which is the default format. The default can be changed by editing renderer in the master configura‐ tion file. Install the package Next, let's run the state we created. Open a terminal on the master and run: % salt '*' state.highstate Our master is instructing all targeted minions to run state.highstate. 
When a minion exe‐ cutes a highstate call it will download the top file and attempt to match the expressions. When it does match an expression the modules listed for it will be downloaded, compiled, and executed. Once completed, the minion will report back with a summary of all actions taken and all changes made. WARNING: If you have created custom grain modules, they will not be available in the top file until after the first highstate. To make custom grains available on a minion's first highstate, it is recommended to use this example to ensure that the custom grains are synced when the minion starts. SLS File Namespace Note that in the example above, the SLS file webserver.sls was referred to sim‐ ply as webserver. The namespace for SLS files when referenced in top.sls or an include-declaration follows a few simple rules: 1. The .sls is discarded (i.e. webserver.sls becomes webserver). 2. Subdirectories can be used for better organization. a. Each subdirectory can be represented with a dot (following the python import model) or a slash. webserver/dev.sls can also be referred to as webserver.dev b. Because slashes can be represented as dots, SLS files can not contain dots in the name besides the dot for the SLS suffix. The SLS file web‐ server_1.0.sls can not be matched, and webserver_1.0 would match the directory/file webserver_1/0.sls 3. A file called init.sls in a subdirectory is referred to by the path of the direc‐ tory. So, webserver/init.sls is referred to as webserver. 4. If both webserver.sls and webserver/init.sls happen to exist, webserver/init.sls will be ignored and webserver.sls will be the file referred to as webserver. Troubleshooting Salt If the expected output isn't seen, the following tips can help to narrow down the problem. Turn up logging Salt can be quite chatty when you change the logging setting to debug: salt-minion -l debug Run the minion in the foreground By not starting the minion in daemon mode (-d) one can view any output from the minion as it works: salt-minion & Increase the default timeout value when running salt. For example, to change the default timeout to 60 seconds: salt -t 60 For best results, combine all three: salt-minion -l debug & # On the minion salt '*' state.highstate -t 60 # On the master Next steps This tutorial focused on getting a simple Salt States configuration working. Part 2 will build on this example to cover more advanced sls syntax and will explore more of the states that ship with Salt. States tutorial, part 2 - More Complex States, Requisites NOTE: This tutorial builds on topics covered in part 1. It is recommended that you begin there. In the last part of the Salt States tutorial we covered the basics of installing a pack‐ age. We will now modify our webserver.sls file to have requirements, and use even more Salt States. Call multiple States You can specify multiple state-declaration under an id-declaration. For example, a quick modification to our webserver.sls to also start Apache if it is not running: apache: pkg.installed: [] service.running: - require: - pkg: apache Try stopping Apache before running state.highstate once again and observe the output. NOTE: For those running RedhatOS derivatives (Centos, AWS), you will want to specify the ser‐ vice name to be httpd. More on state service here, service state. With the example above, just add "- name: httpd" above the require line and with the same spacing. Require other states We now have a working installation of Apache so let's add an HTML file to customize our website. 
It isn't exactly useful to have a website without a webserver, so we don't want Salt to install our HTML file until Apache is installed and running. Include the following at the bottom of your webserver/init.sls file:

apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

/var/www/index.html:                        # ID declaration
  file:                                     # state declaration
    - managed                               # function
    - source: salt://webserver/index.html   # function arg
    - require:                              # requisite declaration
      - pkg: apache                         # requisite reference

Line 7 is the id-declaration. In this example it is the location where we want to install our custom HTML file. (Note: the default location that Apache serves may differ from the above on your OS or distro. /srv/www could also be a likely place to look.)
Line 8 is the state-declaration. This example uses the Salt file state.
Line 9 is the function-declaration. The managed function will download a file from the master and install it in the location specified.
Line 10 is a function-arg-declaration which, in this example, passes the source argument to the managed function.
Line 11 is a requisite-declaration.
Line 12 is a requisite-reference which refers to a state and an ID. In this example, it is referring to the ID declaration from our example in part 1. This declaration tells Salt not to install the HTML file until Apache is installed.

Next, create the index.html file and save it in the webserver directory:

<!DOCTYPE html>
<html>
  <head><title>Salt rocks</title></head>
  <body>
    <h1>This file brought to you by Salt</h1>
  </body>
</html>

Last, call state.highstate again and the minion will fetch and execute the highstate as well as our HTML file from the master using Salt's File Server:

salt '*' state.highstate

Verify that Apache is now serving your custom HTML.

require vs. watch
There are two types of requisite-declarations: “require” and “watch”. Not every state supports “watch”. The service state does support “watch” and will restart a service based on the watch condition. For example, if you use Salt to install an Apache virtual host configuration file and want to restart Apache whenever that file is changed, you could modify our Apache example from earlier as follows:

/etc/httpd/extra/httpd-vhosts.conf:
  file.managed:
    - source: salt://webserver/httpd-vhosts.conf

apache:
  pkg.installed: []
  service.running:
    - watch:
      - file: /etc/httpd/extra/httpd-vhosts.conf
    - require:
      - pkg: apache

If the pkg and service names differ on your OS or distro of choice, you can specify each one separately using a name-declaration, which is explained in Part 3.

Next steps
In part 3 we will discuss how to use includes, extends, and templating to make a more complete State Tree configuration.

States tutorial, part 3 - Templating, Includes, Extends
NOTE: This tutorial builds on topics covered in part 1 and part 2. It is recommended that you begin there.
This part of the tutorial will cover more advanced templating and configuration techniques for sls files.

Templating SLS modules
SLS modules may require programming logic or inline execution. This is accomplished with module templating. The default module templating system used is Jinja2 and may be configured by changing the renderer value in the master config. All states are passed through a templating system when they are initially read. To make use of the templating system, simply add some templating markup.
An example of an sls module with templating markup may look like this:

{% for usr in ['moe','larry','curly'] %}
{{ usr }}:
  user.present
{% endfor %}

This templated sls file, once generated, will look like this:

moe:
  user.present
larry:
  user.present
curly:
  user.present

Here's a more complex example:

# Comments in yaml start with a hash symbol.
# Since jinja rendering occurs before yaml parsing, if you want to include jinja
# in the comments you may need to escape them using 'jinja' comments to prevent
# jinja from trying to render something which is not well-defined jinja.
# e.g.
#  {# iterate over the Three Stooges using a {% for %}..{% endfor %} loop
#  with the iterator variable {{ usr }} becoming the state ID. #}
{% for usr in 'moe','larry','curly' %}
{{ usr }}:
  group:
    - present
  user:
    - present
    - gid_from_name: True
    - require:
      - group: {{ usr }}
{% endfor %}

Using Grains in SLS modules
Oftentimes a state will need to behave differently on different systems. Salt grains objects are made available in the template context. The grains can be used from within sls modules:

apache:
  pkg.installed:
{% if grains['os'] == 'RedHat' %}
    - name: httpd
{% elif grains['os'] == 'Ubuntu' %}
    - name: apache2
{% endif %}

Using Environment Variables in SLS modules
You can use salt['environ.get']('VARNAME') to use an environment variable in a Salt state.

MYENVVAR="world" salt-call state.template test.sls

Create a file with contents from an environment variable:
  file.managed:
    - name: /tmp/hello
    - contents: {{ salt['environ.get']('MYENVVAR') }}

Error checking:

{% set myenvvar = salt['environ.get']('MYENVVAR') %}
{% if myenvvar %}

Create a file with contents from an environment variable:
  file.managed:
    - name: /tmp/hello
    - contents: {{ salt['environ.get']('MYENVVAR') }}

{% else %}

Fail - no environment passed in:
  test.fail_without_changes

{% endif %}

Calling Salt modules from templates
All of the Salt modules loaded by the minion are available within the templating system. This allows data to be gathered in real time on the target system. It also allows for shell commands to be run easily from within the sls modules. The Salt module functions are also made available in the template context as salt:

moe:
  user.present:
    - gid: {{ salt['file.group_to_gid']('some_group_that_exists') }}

Note that for the above example to work, some_group_that_exists must exist before the state file is processed by the templating engine. Below is an example that uses the network.hw_addr function to retrieve the MAC address for eth0:

salt['network.hw_addr']('eth0')

Advanced SLS module syntax
Lastly, we will cover some incredibly useful techniques for more complex State trees.

Include declaration
A previous example showed how to spread a Salt tree across several files. Similarly, requisites can span multiple files by using an include-declaration. For example:

python/python-libs.sls:

python-dateutil:
  pkg.installed

python/django.sls:

include:
  - python.python-libs

django:
  pkg.installed:
    - require:
      - pkg: python-dateutil

Extend declaration
You can modify previous declarations by using an extend-declaration.
For example the fol‐ lowing modifies the Apache tree to also restart Apache when the vhosts file is changed: apache/apache.sls: apache: pkg.installed apache/mywebsite.sls: include: - apache.apache extend: apache: service: - running - watch: - file: /etc/httpd/extra/httpd-vhosts.conf /etc/httpd/extra/httpd-vhosts.conf: file.managed: - source: salt://apache/httpd-vhosts.conf Using extend with require or watch The extend statement works differently for require or watch. It appends to, rather than replacing the requisite component. Name declaration You can override the id-declaration by using a name-declaration. For example, the previ‐ ous example is a bit more maintainable if rewritten as follows: apache/mywebsite.sls: include: - apache.apache extend: apache: service: - running - watch: - file: mywebsite mywebsite: file.managed: - name: /etc/httpd/extra/httpd-vhosts.conf - source: salt://apache/httpd-vhosts.conf Names declaration Even more powerful is using a names-declaration to override the id-declaration for multi‐ ple states at once. This often can remove the need for looping in a template. For example, the first example in this tutorial can be rewritten without the loop: stooges: user.present: - names: - moe - larry - curly Next steps In part 4 we will discuss how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production. States tutorial, part 4 NOTE: This tutorial builds on topics covered in part 1, part 2 and part 3. It is recommended that you begin there. This part of the tutorial will show how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production. Salt fileserver path inheritance Salt's fileserver allows for more than one root directory per environment, like in the below example, which uses both a local directory and a secondary location shared to the salt master via NFS: # In the master config file (/etc/salt/master) file_roots: base: - /srv/salt - /mnt/salt-nfs/base Salt's fileserver collapses the list of root directories into a single virtual environment containing all files from each root. If the same file exists at the same relative path in more than one root, then the top-most match "wins". For example, if /srv/salt/foo.txt and /mnt/salt-nfs/base/foo.txt both exist, then salt://foo.txt will point to /srv/salt/foo.txt. NOTE: When using multiple fileserver backends, the order in which they are listed in the fileserver_backend parameter also matters. If both roots and git backends contain a file with the same relative path, and roots appears before git in the fileserver_back‐ end list, then the file in roots will "win", and the file in gitfs will be ignored. A more thorough explanation of how Salt's modular fileserver works can be found here. We recommend reading this. Environment configuration Configure a multiple-environment setup like so: file_roots: base: - /srv/salt/prod qa: - /srv/salt/qa - /srv/salt/prod dev: - /srv/salt/dev - /srv/salt/qa - /srv/salt/prod Given the path inheritance described above, files within /srv/salt/prod would be available in all environments. Files within /srv/salt/qa would be available in both qa, and dev. Finally, the files within /srv/salt/dev would only be available within the dev environ‐ ment. Based on the order in which the roots are defined, new files/states can be placed within /srv/salt/dev, and pushed out to the dev hosts for testing. 
Those files/states can then be moved to the same relative path within /srv/salt/qa, and they are now available only in the dev and qa environments, allowing them to be pushed to QA hosts and tested. Finally, if moved to the same relative path within /srv/salt/prod, the files are now available in all three environments. Practical Example As an example, consider a simple website, installed to /var/www/foobarcom. Below is a top.sls that can be used to deploy the website: /srv/salt/prod/top.sls: base: 'web*prod*': - webserver.foobarcom qa: 'web*qa*': - webserver.foobarcom dev: 'web*dev*': - webserver.foobarcom Using pillar, roles can be assigned to the hosts: /srv/pillar/top.sls: base: 'web*prod*': - webserver.prod 'web*qa*': - webserver.qa 'web*dev*': - webserver.dev /srv/pillar/webserver/prod.sls: webserver_role: prod /srv/pillar/webserver/qa.sls: webserver_role: qa /srv/pillar/webserver/dev.sls: webserver_role: dev And finally, the SLS to deploy the website: /srv/salt/prod/webserver/foobarcom.sls: {% if pillar.get('webserver_role', '') %} /var/www/foobarcom: file.recurse: - source: salt://webserver/src/foobarcom - env: {{ pillar['webserver_role'] }} - user: www - group: www - dir_mode: 755 - file_mode: 644 {% endif %} Given the above SLS, the source for the website should initially be placed in /srv/salt/dev/webserver/src/foobarcom. First, let's deploy to dev. Given the configuration in the top file, this can be done using state.highstate: salt --pillar 'webserver_role:dev' state.highstate However, in the event that it is not desirable to apply all states configured in the top file (which could be likely in more complex setups), it is possible to apply just the states for the foobarcom website, using state.sls: salt --pillar 'webserver_role:dev' state.sls webserver.foobarcom Once the site has been tested in dev, then the files can be moved from /srv/salt/dev/web‐ server/src/foobarcom to /srv/salt/qa/webserver/src/foobarcom, and deployed using the fol‐ lowing: salt --pillar 'webserver_role:qa' state.sls webserver.foobarcom Finally, once the site has been tested in qa, then the files can be moved from /srv/salt/qa/webserver/src/foobarcom to /srv/salt/prod/webserver/src/foobarcom, and deployed using the following: salt --pillar 'webserver_role:prod' state.sls webserver.foobarcom Thanks to Salt's fileserver inheritance, even though the files have been moved to within /srv/salt/prod, they are still available from the same salt:// URI in both the qa and dev environments. Continue Learning The best way to continue learning about Salt States is to read through the reference docu‐ mentation and to look through examples of existing state trees. Many pre-configured state trees can be found on GitHub in the saltstack-formulas collection of repositories. If you have any questions, suggestions, or just want to chat with other people who are using Salt, we have a very active community and we'd love to hear from you. In addition, by continuing to part 5, you can learn about the powerful orchestration of which Salt is capable. States Tutorial, Part 5 - Orchestration with Salt NOTE: This tutorial builds on some of the topics covered in the earlier States Walkthrough pages. It is recommended to start with Part 1 if you are not familiar with how to use states. Orchestration is accomplished in salt primarily through the Orchestrate Runner. Added in version 0.17.0, this Salt Runner can use the full suite of requisites available in states, and can also execute states/functions using salt-ssh. 
The Orchestrate Runner New in version 0.17.0. NOTE: Orchestrate Deprecates OverState The Orchestrate Runner (originally called the state.sls runner) offers all the func‐ tionality of the OverState, but with some advantages: · All requisites available in states can be used. · The states/functions will also work on salt-ssh minions. The Orchestrate Runner was added with the intent to eventually deprecate the OverState system, however the OverState will still be maintained until Salt 2015.8.0. The orchestrate runner generalizes the Salt state system to a Salt master context. Whereas the state.sls, state.highstate, et al functions are concurrently and independently executed on each Salt minion, the state.orchestrate runner is executed on the master, giv‐ ing it a master-level view and control over requisites, such as state ordering and condi‐ tionals. This allows for inter minion requisites, like ordering the application of states on different minions that must not happen simultaneously, or for halting the state run on all minions if a minion fails one of its states. If you want to setup a load balancer in front of a cluster of web servers, for example, you can ensure the load balancer is setup before the web servers or stop the state run altogether if one of the minions does not set up correctly. The state.sls, state.highstate, et al functions allow you to statefully manage each minion and the state.orchestrate runner allows you to statefully manage your entire infrastruc‐ ture. Executing the Orchestrate Runner The Orchestrate Runner command format is the same as for the state.sls function, except that since it is a runner, it is executed with salt-run rather than salt. Assuming you have a state.sls file called /srv/salt/orch/webserver.sls the following command run on the master will apply the states defined in that file. salt-run state.orchestrate orch.webserver NOTE: state.orch is a synonym for state.orchestrate Changed in version 2014.1.1: The runner function was renamed to state.orchestrate to avoid confusion with the state.sls execution function. In versions 0.17.0 through 2014.1.0, state.sls must be used. Examples Function To execute a function, use salt.function: # /srv/salt/orch/cleanfoo.sls cmd.run: salt.function: - tgt: '*' - arg: - rm -rf /tmp/foo salt-run state.orchestrate orch.cleanfoo State To execute a state, use salt.state. # /srv/salt/orch/webserver.sls install_nginx: salt.state: - tgt: 'web*' - sls: - nginx salt-run state.orchestrate orch.webserver Highstate To run a highstate, set highstate: True in your state config: # /srv/salt/orch/web_setup.sls webserver_setup: salt.state: - tgt: 'web*' - highstate: True salt-run state.orchestrate orch.web_setup More Complex Orchestration Many states/functions can be configured in a single file, which when combined with the full suite of requisites, can be used to easily configure complex orchestration tasks. Additionally, the states/functions will be executed in the order in which they are defined, unless prevented from doing so by any requisites, as is the default in SLS files since 0.17.0. cmd.run: salt.function: - tgt: 10.0.0.0/24 - tgt_type: ipcidr - arg: - bootstrap storage_setup: salt.state: - tgt: 'role:storage' - tgt_type: grain - sls: ceph - require: - salt: webserver_setup webserver_setup: salt.state: - tgt: 'web*' - highstate: True Given the above setup, the orchestration will be carried out as follows: 1. The shell command bootstrap will be executed on all minions in the 10.0.0.0/24 subnet. 2. 
A Highstate will be run on all minions whose ID starts with "web", since the storage_setup state requires it.
3. Finally, the ceph SLS target will be executed on all minions which have a grain called role with a value of storage.
NOTE: Remember, salt-run is always executed on the master.

Syslog-ng usage
Overview
The syslog_ng state module is used to generate syslog-ng configurations. You can do the following things:
· generate syslog-ng configuration from YAML,
· use non-YAML configuration,
· start, stop or reload syslog-ng.
There is also an execution module, which can check the syntax of the configuration and get the version and other information about syslog-ng.

Configuration
Users can create syslog-ng configuration statements with the syslog_ng.config function. It requires a name and a config parameter. The name parameter determines the name of the generated statement and the config parameter holds a parsed YAML structure. A statement can be declared in the following forms (both are equivalent):

source.s_localhost:
  syslog_ng.config:
    - config:
      - tcp:
        - ip: "127.0.0.1"
        - port: 1233

s_localhost:
  syslog_ng.config:
    - config:
        source:
          - tcp:
            - ip: "127.0.0.1"
            - port: 1233

The first one is called the short form, because it needs less typing. Users can use lists and dictionaries to specify their configuration. The format is quite self-describing and there are more examples in the Examples section at the end of this document.

Quotation
The quotation can be tricky sometimes, but here are some rules to follow:
· when a string is meant to be "string" in the generated configuration, it should be written as '"string"' in the YAML document
· similarly, users should write "'string'" to get 'string' in the generated configuration

Full example
The following configuration is an example of what a complete syslog-ng configuration looks like:

# Set the location of the configuration file
set_location:
  module.run:
    - name: syslog_ng.set_config_file
    - m_name: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"

# The syslog-ng and syslog-ng-ctl binaries are here. You needn't use
# this method if these binaries can be found in a directory in your PATH.
set_bin_path:
  module.run:
    - name: syslog_ng.set_binary_path
    - m_name: "/home/tibi/install/syslog-ng/sbin"

# Writes the first lines into the config file, also erases its previous
# content
write_version:
  module.run:
    - name: syslog_ng.write_version
    - m_name: "3.6"

# There is a shorter form to set the above variables
set_variables:
  module.run:
    - name: syslog_ng.set_parameters
    - version: "3.6"
    - binary_path: "/home/tibi/install/syslog-ng/sbin"
    - config_file: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"

# Some global options
options.global_options:
  syslog_ng.config:
    - config:
      - time_reap: 30
      - mark_freq: 10
      - keep_hostname: "yes"

source.s_localhost:
  syslog_ng.config:
    - config:
      - tcp:
        - ip: "127.0.0.1"
        - port: 1233

destination.d_log_server:
  syslog_ng.config:
    - config:
      - tcp:
        - "127.0.0.1"
        - port: 1234

log.l_log_to_central_server:
  syslog_ng.config:
    - config:
      - source: s_localhost
      - destination: d_log_server

some_comment:
  module.run:
    - name: syslog_ng.write_config
    - config: |
        # Multi line
        # comment

# Another mode to use comments or existing configuration snippets
config.other_comment_form:
  syslog_ng.config:
    - config: |
        # Multi line
        # comment

The syslog_ng.config function can generate syslog-ng configuration from YAML. If the statement (source, destination, parser, etc.) has a name, this function uses the id as the name; otherwise (log statement) its purpose is like a mandatory comment.
After execution this example the syslog_ng state will generate this file: #Generated by Salt on 2014-08-18 00:11:11 @version: 3.6 options { time_reap( 30 ); mark_freq( 10 ); keep_hostname( yes ); }; source s_localhost { tcp( ip( 127.0.0.1 ), port( 1233 ) ); }; destination d_log_server { tcp( 127.0.0.1, port( 1234 ) ); }; log { source( s_localhost ); destination( d_log_server ); }; # Multi line # comment # Multi line # comment Users can include arbitrary texts in the generated configuration with using the config statement (see the example above). Syslog_ng module functions You can use syslog_ng.set_binary_path to set the directory which contains the syslog-ng and syslog-ng-ctl binaries. If this directory is in your PATH, you don't need to use this function. There is also a syslog_ng.set_config_file function to set the location of the configuration file. Examples Simple source source s_tail { file( "/var/log/apache/access.log", follow_freq(1), flags(no-parse, validate-utf8) ); }; s_tail: # Salt will call the source function of syslog_ng module syslog_ng.config: - config: source: - file: - file: ''"/var/log/apache/access.log"'' - follow_freq : 1 - flags: - no-parse - validate-utf8 OR s_tail: syslog_ng.config: - config: source: - file: - ''"/var/log/apache/access.log"'' - follow_freq : 1 - flags: - no-parse - validate-utf8 OR source.s_tail: syslog_ng.config: - config: - file: - ''"/var/log/apache/access.log"'' - follow_freq : 1 - flags: - no-parse - validate-utf8 Complex source source s_gsoc2014 { tcp( ip("0.0.0.0"), port(1234), flags(no-parse) ); }; s_gsoc2014: syslog_ng.config: - config: source: - tcp: - ip: 0.0.0.0 - port: 1234 - flags: no-parse Filter filter f_json { match( "@json:" ); }; f_json: syslog_ng.config: - config: filter: - match: - ''"@json:"'' Template template t_demo_filetemplate { template( "$ISODATE $HOST $MSG " ); template_escape( no ); }; t_demo_filetemplate: syslog_ng.config: -config: template: - template: - '"$ISODATE $HOST $MSG\n"' - template_escape: - "no" Rewrite rewrite r_set_message_to_MESSAGE { set( "${.json.message}", value("$MESSAGE") ); }; r_set_message_to_MESSAGE: syslog_ng.config: - config: rewrite: - set: - '"${.json.message}"' - value : '"$MESSAGE"' Global options options { time_reap(30); mark_freq(10); keep_hostname(yes); }; global_options: syslog_ng.config: - config: options: - time_reap: 30 - mark_freq: 10 - keep_hostname: "yes" Log log { source(s_gsoc2014); junction { channel { filter(f_json); parser(p_json); rewrite(r_set_json_tag); rewrite(r_set_message_to_MESSAGE); destination { file( "/tmp/json-input.log", template(t_gsoc2014) ); }; flags(final); }; channel { filter(f_not_json); parser { syslog-parser( ); }; rewrite(r_set_syslog_tag); flags(final); }; }; destination { file( "/tmp/all.log", template(t_gsoc2014) ); }; }; l_gsoc2014: syslog_ng.config: - config: log: - source: s_gsoc2014 - junction: - channel: - filter: f_json - parser: p_json - rewrite: r_set_json_tag - rewrite: r_set_message_to_MESSAGE - destination: - file: - '"/tmp/json-input.log"' - template: t_gsoc2014 - flags: final - channel: - filter: f_not_json - parser: - syslog-parser: [] - rewrite: r_set_syslog_tag - flags: final - destination: - file: - "/tmp/all.log" - template: t_gsoc2014 Advanced Topics SaltStack Walk-through NOTE: Welcome to SaltStack! I am excited that you are interested in Salt and starting down the path to better infrastructure management. 
I developed (and am continuing to develop) Salt with the goal of making the best software available to manage computers of almost any kind. I hope you enjoy working with Salt and that the software can solve your real world needs! · Thomas S Hatch · Salt creator and Chief Developer · CTO of SaltStack, Inc. Getting Started What is Salt? Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities. This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure. The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems. On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration manage‐ ment system called Salt States. Installing Salt SaltStack has been made to be very easy to install and get started. The installation docu‐ ments contain instructions for all supported platforms. Starting Salt Salt functions on a master/minion topology. A master server acts as a central control bus for the clients, which are called minions. The minions connect back to the master. Setting Up the Salt Master Turning on the Salt Master is easy -- just turn it on! The default configuration is suit‐ able for the vast majority of installations. The Salt Master can be controlled by the local Linux/Unix service manager: On Systemd based platforms (OpenSuse, Fedora): systemctl start salt-master On Upstart based systems (Ubuntu, older Fedora/RHEL): service salt-master start On SysV Init systems (Debian, Gentoo etc.): /etc/init.d/salt-master start Alternatively, the Master can be started directly on the command-line: salt-master -d The Salt Master can also be started in the foreground in debug mode, thus greatly increas‐ ing the command output: salt-master -l debug The Salt Master needs to bind to two TCP network ports on the system. These ports are 4505 and 4506. For more in depth information on firewalling these ports, the firewall tutorial is available here. Setting up a Salt Minion NOTE: The Salt Minion can operate with or without a Salt Master. This walk-through assumes that the minion will be connected to the master, for information on how to run a mas‐ ter-less minion please see the master-less quick-start guide: Masterless Minion Quickstart The Salt Minion only needs to be aware of one piece of information to run, the network location of the master. By default the minion will look for the DNS name salt for the master, making the easiest approach to set internal DNS to resolve the name salt back to the Salt Master IP. Otherwise, the minion configuration file will need to be edited so that the configuration option master points to the DNS name or the IP of the Salt Master: NOTE: The default location of the configuration files is /etc/salt. Most platforms adhere to this convention, but platforms such as FreeBSD and Microsoft Windows place this file in different locations. /etc/salt/minion: master: saltmaster.example.com Now that the master can be found, start the minion in the same way as the master; with the platform init system or via the command line directly: As a daemon: salt-minion -d In the foreground in debug mode: salt-minion -l debug When the minion is started, it will generate an id value, unless it has been generated on a previous run and cached in the configuration directory, which is /etc/salt by default. 
This is the name by which the minion will attempt to authenticate to the master. The fol‐ lowing steps are attempted, in order to try to find a value that is not localhost: 1. The Python function socket.getfqdn() is run 2. /etc/hostname is checked (non-Windows only) 3. /etc/hosts (%WINDIR%\system32\drivers\etc\hosts on Windows hosts) is checked for host‐ names that map to anything within 127.0.0.0/8. If none of the above are able to produce an id which is not localhost, then a sorted list of IP addresses on the minion (excluding any within 127.0.0.0/8) is inspected. The first publicly-routable IP address is used, if there is one. Otherwise, the first pri‐ vately-routable IP address is used. If all else fails, then localhost is used as a fallback. NOTE: Overriding the id The minion id can be manually specified using the id parameter in the minion config file. If this configuration value is specified, it will override all other sources for the id. Now that the minion is started, it will generate cryptographic keys and attempt to connect to the master. The next step is to venture back to the master server and accept the new minion's public key. Using salt-key Salt authenticates minions using public-key encryption and authentication. For a minion to start accepting commands from the master, the minion keys need to be accepted by the mas‐ ter. The salt-key command is used to manage all of the keys on the master. To list the keys that are on the master: salt-key -L The keys that have been rejected, accepted, and pending acceptance are listed. The easi‐ est way to accept the minion key is to accept all pending keys: salt-key -A NOTE: Keys should be verified! Print the master key fingerprint by running salt-key -F master on the Salt master. Copy the master.pub fingerprint from the Local Keys section, and then set this value as the master_finger in the minion configuration file. Restart the Salt minion. On the master, run salt-key -f minion-id to print the fingerprint of the minion's pub‐ lic key that was received by the master. On the minion, run salt-call key.finger --local to print the fingerprint of the minion key. On the master: # salt-key -f foo.domain.com Unaccepted Keys: foo.domain.com: 39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9 On the minion: # salt-call key.finger --local local: 39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9 If they match, approve the key with salt-key -a foo.domain.com. Sending the First Commands Now that the minion is connected to the master and authenticated, the master can start to command the minion. Salt commands allow for a vast set of functions to be executed and for specific minions and groups of minions to be targeted for execution. The salt command is comprised of command options, target specification, the function to execute, and arguments to the function. A simple command to start with looks like this: salt '*' test.ping The * is the target, which specifies all minions. test.ping tells the minion to run the test.ping function. In the case of test.ping, test refers to a execution module. ping refers to the ping function contained in the aforementioned test module. NOTE: Execution modules are the workhorses of Salt. They do the work on the system to perform various tasks, such as manipulating files and restarting services. The result of running this command will be the master instructing all of the minions to execute test.ping in parallel and return the result. This is not an actual ICMP ping, but rather a simple function which returns True. 
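The same command can also be issued programmatically. The following is a minimal sketch using Salt's local client Python API; it assumes it is run on the master with sufficient privileges to read the master configuration (typically as root):

import salt.client

# Connect to the local master and publish test.ping to all minions
local = salt.client.LocalClient()
ret = local.cmd('*', 'test.ping')
print(ret)  # e.g. {'some-minion-id': True}
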
Using test.ping is a good way of confirming that a minion is connected. NOTE: Each minion registers itself with a unique minion ID. This ID defaults to the minion's hostname, but can be explicitly defined in the minion config as well by using the id parameter. Of course, there are hundreds of other modules that can be called just as test.ping can. For example, the following would return disk usage on all targeted minions: salt '*' disk.usage Getting to Know the Functions Salt comes with a vast library of functions available for execution, and Salt functions are self-documenting. To see what functions are available on the minions execute the sys.doc function: salt '*' sys.doc This will display a very large list of available functions and documentation on them. NOTE: Module documentation is also available on the web. These functions cover everything from shelling out to package management to manipulating database servers. They comprise a powerful system management API which is the backbone to Salt configuration management and many other aspects of Salt. NOTE: Salt comes with many plugin systems. The functions that are available via the salt com‐ mand are called Execution Modules. Helpful Functions to Know The cmd module contains functions to shell out on minions, such as cmd.run and cmd.run_all: salt '*' cmd.run 'ls -l /etc' The pkg functions automatically map local system package managers to the same salt func‐ tions. This means that pkg.install will install packages via yum on Red Hat based systems, apt on Debian systems, etc.: salt '*' pkg.install vim NOTE: Some custom Linux spins and derivatives of other distributions are not properly detected by Salt. If the above command returns an error message saying that pkg.install is not available, then you may need to override the pkg provider. This process is explained here. The network.interfaces function will list all interfaces on a minion, along with their IP addresses, netmasks, MAC addresses, etc: salt '*' network.interfaces Changing the Output Format The default output format used for most Salt commands is called the nested outputter, but there are several other outputters that can be used to change the way the output is dis‐ played. For instance, the pprint outputter can be used to display the return data using Python's pprint module: root@saltmaster:~# salt myminion grains.item pythonpath --out=pprint {'myminion': {'pythonpath': ['/usr/lib64/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/site-packages', '/usr/lib/python2.7/site-packages/gst-0.10', '/usr/lib/python2.7/site-packages/gtk-2.0']}} The full list of Salt outputters, as well as example output, can be found here. salt-call The examples so far have described running commands from the Master using the salt com‐ mand, but when troubleshooting it can be more beneficial to login to the minion directly and use salt-call. Doing so allows you to see the minion log messages specific to the command you are running (which are not part of the return data you see when running the command from the Master using salt), making it unnecessary to tail the minion log. More information on salt-call and how to use it can be found here. Grains Salt uses a system called Grains to build up static data about minions. This data includes information about the operating system that is running, CPU architecture and much more. The grains system is used throughout Salt to deliver platform data to many components and to users. 
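For example, all of the grains on the targeted minions can be listed from the master, or individual grains queried (os and osrelease are standard grain names):

salt '*' grains.items
salt '*' grains.item os osrelease
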
Grains can also be statically set, this makes it easy to assign values to minions for grouping and managing. A common practice is to assign grains to minions to specify what the role or roles a min‐ ion might be. These static grains can be set in the minion configuration file or via the grains.setval function. Targeting Salt allows for minions to be targeted based on a wide range of criteria. The default targeting system uses globular expressions to match minions, hence if there are minions named larry1, larry2, curly1, and curly2, a glob of larry* will match larry1 and larry2, and a glob of *1 will match larry1 and curly1. Many other targeting systems can be used other than globs, these systems include: Regular Expressions Target using PCRE-compliant regular expressions Grains Target based on grains data: Targeting with Grains Pillar Target based on pillar data: Targeting with Pillar IP Target based on IP address/subnet/range Compound Create logic to target based on multiple targets: Targeting with Compound Nodegroup Target with nodegroups: Targeting with Nodegroup The concepts of targets are used on the command line with Salt, but also function in many other areas as well, including the state system and the systems used for ACLs and user permissions. Passing in Arguments Many of the functions available accept arguments which can be passed in on the command line: salt '*' pkg.install vim This example passes the argument vim to the pkg.install function. Since many functions can accept more complex input than just a string, the arguments are parsed through YAML, allowing for more complex data to be sent on the command line: salt '*' test.echo 'foo: bar' In this case Salt translates the string 'foo: bar' into the dictionary "{'foo': 'bar'}" NOTE: Any line that contains a newline will not be parsed by YAML. Salt States Now that the basics are covered the time has come to evaluate States. Salt States, or the State System is the component of Salt made for configuration management. The state system is already available with a basic Salt setup, no additional configuration is required. States can be set up immediately. NOTE: Before diving into the state system, a brief overview of how states are constructed will make many of the concepts clearer. Salt states are based on data modeling and build on a low level data structure that is used to execute each state function. Then more logical layers are built on top of each other. The high layers of the state system which this tutorial will cover consists of every‐ thing that needs to be known to use states, the two high layers covered here are the sls layer and the highest layer highstate. Understanding the layers of data management in the State System will help with under‐ standing states, but they never need to be used. Just as understanding how a compiler functions assists when learning a programming language, understanding what is going on under the hood of a configuration management system will also prove to be a valuable asset. The First SLS Formula The state system is built on SLS formulas. These formulas are built out in files on Salt's file server. To make a very basic SLS formula open up a file under /srv/salt named vim.sls. The following state ensures that vim is installed on a system to which that state has been applied. /srv/salt/vim.sls: vim: pkg.installed Now install vim on the minions by calling the SLS directly: salt '*' state.sls vim This command will invoke the state system and run the vim SLS. 
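To preview what this state would change without actually applying anything, the standard test argument can be passed; this performs a dry run and reports what would have changed:

salt '*' state.sls vim test=True
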
Now, to beef up the vim SLS formula, a vimrc can be added: /srv/salt/vim.sls: vim: pkg.installed: [] /etc/vimrc: file.managed: - source: salt://vimrc - mode: 644 - user: root - group: root Now the desired vimrc needs to be copied into the Salt file server to /srv/salt/vimrc. In Salt, everything is a file, so no path redirection needs to be accounted for. The vimrc file is placed right next to the vim.sls file. The same command as above can be executed to all the vim SLS formulas and now include managing the file. NOTE: Salt does not need to be restarted/reloaded or have the master manipulated in any way when changing SLS formulas. They are instantly available. Adding Some Depth Obviously maintaining SLS formulas right in a single directory at the root of the file server will not scale out to reasonably sized deployments. This is why more depth is required. Start by making an nginx formula a better way, make an nginx subdirectory and add an init.sls file: /srv/salt/nginx/init.sls: nginx: pkg.installed: [] service.running: - require: - pkg: nginx A few concepts are introduced in this SLS formula. First is the service statement which ensures that the nginx service is running. Of course, the nginx service can't be started unless the package is installed -- hence the require statement which sets up a dependency between the two. The require statement makes sure that the required component is executed before and that it results in success. NOTE: The require option belongs to a family of options called requisites. Requisites are a powerful component of Salt States, for more information on how requisites work and what is available see: Requisites Also evaluation ordering is available in Salt as well: Ordering States This new sls formula has a special name -- init.sls. When an SLS formula is named init.sls it inherits the name of the directory path that contains it. This formula can be referenced via the following command: salt '*' state.sls nginx NOTE: Reminder! Just as one could call the test.ping or disk.usage execution modules, state.sls is sim‐ ply another execution module. It simply takes the name of an SLS file as an argument. Now that subdirectories can be used, the vim.sls formula can be cleaned up. To make things more flexible, move the vim.sls and vimrc into a new subdirectory called edit and change the vim.sls file to reflect the change: /srv/salt/edit/vim.sls: vim: pkg.installed /etc/vimrc: file.managed: - source: salt://edit/vimrc - mode: 644 - user: root - group: root Only the source path to the vimrc file has changed. Now the formula is referenced as edit.vim because it resides in the edit subdirectory. Now the edit subdirectory can con‐ tain formulas for emacs, nano, joe or any other editor that may need to be deployed. Next Reading Two walk-throughs are specifically recommended at this point. First, a deeper run through States, followed by an explanation of Pillar. 1. Starting States 2. Pillar Walkthrough An understanding of Pillar is extremely helpful in using States. Getting Deeper Into States Two more in-depth States tutorials exist, which delve much more deeply into States func‐ tionality. 1. How Do I Use Salt States?, covers much more to get off the ground with States. 2. The States Tutorial also provides a fantastic introduction. These tutorials include much more in-depth information including templating SLS formulas etc. So Much More! This concludes the initial Salt walk-through, but there are many more things still to learn! 
These documents will cover important core aspects of Salt:
· Pillar
· Job Management
A few more tutorials are also available:
· Remote Execution Tutorial
· Standalone Minion
This still is only scratching the surface; many components such as the reactor and event systems, extending Salt, modular components, and more are not covered here. For an overview of all Salt features and documentation, look at the Table of Contents.
Running Salt as a Normal User Tutorial
Before continuing, make sure you have a working Salt installation by following the installation and the configuration instructions.
Stuck? There are many ways to get help from the Salt community, including our mailing list and our IRC channel #salt.
Running Salt functions as non root user
If you don't want to run salt-cloud as root, or even install it, you can configure it to have a virtual root in your working directory.
The Salt system uses the salt.syspaths module to find these variables. If you run the Salt build, the module will be generated in:
./build/lib.linux-x86_64-2.7/salt/_syspaths.py
To generate it, run the command:
python setup.py build
Copy the generated module into your salt directory:
cp ./build/lib.linux-x86_64-2.7/salt/_syspaths.py salt/_syspaths.py
Edit it to include the needed variables and your new paths:
# you need to edit this
ROOT_DIR = *your current dir* + '/salt/root'
# you need to edit this
INSTALL_DIR = *location of source code*
CONFIG_DIR = ROOT_DIR + '/etc/salt'
CACHE_DIR = ROOT_DIR + '/var/cache/salt'
SOCK_DIR = ROOT_DIR + '/var/run/salt'
SRV_ROOT_DIR = ROOT_DIR + '/srv'
BASE_FILE_ROOTS_DIR = ROOT_DIR + '/srv/salt'
BASE_PILLAR_ROOTS_DIR = ROOT_DIR + '/srv/pillar'
BASE_MASTER_ROOTS_DIR = ROOT_DIR + '/srv/salt-master'
LOGS_DIR = ROOT_DIR + '/var/log/salt'
PIDFILE_DIR = ROOT_DIR + '/var/run'
CLOUD_DIR = INSTALL_DIR + '/cloud'
BOOTSTRAP = CLOUD_DIR + '/deploy/bootstrap-salt.sh'
Create the directory structure:
mkdir -p root/etc/salt root/var/cache/run root/run/salt root/srv root/srv/salt root/srv/pillar root/srv/salt-master root/var/log/salt root/var/run
Populate the configuration files:
cp -r conf/* root/etc/salt/
Edit your root/etc/salt/master configuration that is used by salt-cloud:
user: *your user name*
Run like this:
PYTHONPATH=`pwd` scripts/salt-cloud
MinionFS Backend Walkthrough
Propagating Files
New in version 2014.1.0.
Sometimes, one might need to propagate files that are generated on a minion. Salt already has a feature to send files from a minion to the master.
Enabling File Propagation
To enable propagation, the file_recv option needs to be set to True:
file_recv: True
These changes require a restart of the master; new requests for the salt://minion-id/ protocol will then send files that are pushed by cp.push from minion-id to the master.
salt 'minion-id' cp.push /path/to/the/file
This command will store the file, including its full path, under cachedir /master/minions/minion-id/files. With the default cachedir, the example file above would be stored as /var/cache/salt/master/minions/minion-id/files/path/to/the/file.
NOTE: This walkthrough assumes basic knowledge of Salt and cp.push. To get up to speed, check out the walkthrough.
MinionFS Backend
Since it is not a good idea to expose the whole cachedir, MinionFS should be used to send these files to other minions.
Simple Configuration
To use the minionfs backend, only two configuration changes are required on the master.
The fileserver_backend option needs to contain a value of minion and file_recv needs to be set to true: fileserver_backend: - roots - minion file_recv: True These changes require a restart of the master, then new requests for the salt://minion-id/ protocol will send files that are pushed by cp.push from minion-id to the master. NOTE: All of the files that are pushed to the master are going to be available to all of the minions. If this is not what you want, please remove minion from fileserver_backend in the master config file. NOTE: Having directories with the same name as your minions in the root that can be accessed like salt://minion-id/ might cause confusion. Commandline Example Lets assume that we are going to generate SSH keys on a minion called minion-source and put the public part in ~/.ssh/authorized_keys of root user of a minion called minion-des‐ tination. First, lets make sure that /root/.ssh exists and has the right permissions: [root@salt-master file]# salt '*' file.mkdir dir_path=/root/.ssh user=root group=root mode ↲ =700 minion-source: None minion-destination: None We create an RSA key pair without a passphrase [*]: [root@salt-master file]# salt 'minion-source' cmd.run 'ssh-keygen -N "" -f /root/.ssh/id_r ↲ sa' minion-source: Generating public/private rsa key pair. Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: 9b:cd:1c:b9:c2:93:8e:ad:a3:52:a0:8b:0a:cc:d4:9b root@minion-source The key's randomart image is: +--[ RSA 2048]----+ | | | | | | | o . | | o o S o | |= + . B o | |o+ E B = | |+ . .+ o | |o ...ooo | +-----------------+ and we send the public part to the master to be available to all minions: [root@salt-master file]# salt 'minion-source' cp.push /root/.ssh/id_rsa.pub minion-source: True now it can be seen by everyone: [root@salt-master file]# salt 'minion-destination' cp.list_master_dirs minion-destination: - . - etc - minion-source/root - minion-source/root/.ssh Lets copy that as the only authorized key to minion-destination: [root@salt-master file]# salt 'minion-destination' cp.get_file salt://minion-source/root/. ↲ ssh/id_rsa.pub /root/.ssh/authorized_keys minion-destination: /root/.ssh/authorized_keys Or we can use a more elegant and salty way to add an SSH key: [root@salt-master file]# salt 'minion-destination' ssh.set_auth_key_from_file user=root so ↲ urce=salt://minion-source/root/.ssh/id_rsa.pub minion-destination: new [*] Yes, that was the actual key on my server, but the server is already destroyed. Automatic Updates / Frozen Deployments New in version 0.10.3.d. Salt has support for the Esky application freezing and update tool. This tool allows one to build a complete zipfile out of the salt scripts and all their dependencies - including shared objects / DLLs. Getting Started To build frozen applications, suitable build environment will be needed for each platform. You should probably set up a virtualenv in order to limit the scope of Q/A. This process does work on Windows. Directions are available at https://github.com/saltstack/salt-windows-install for details on installing Salt in Win‐ dows. Only the 32-bit Python and dependencies have been tested, but they have been tested on 64-bit Windows. Install bbfreeze, and then esky from PyPI in order to enable the bdist_esky command in setup.py. Salt itself must also be installed, in addition to its dependencies. 
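A minimal sketch of such a build environment, assuming the tools are installed from PyPI under their usual package names and that the current directory is a Salt source checkout:

virtualenv esky-build
. esky-build/bin/activate
pip install bbfreeze esky
pip install -e .
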
Building and Freezing Once you have your tools installed and the environment configured, use setup.py to prepare the distribution files. python setup.py sdist python setup.py bdist Once the distribution files are in place, Esky can be used traverse the module tree and pack all the scripts up into a redistributable. python setup.py bdist_esky There will be an appropriately versioned salt-VERSION.zip in dist/ if everything went smoothly. Windows C:\Python27\lib\site-packages\zmq will need to be added to the PATH variable. This helps bbfreeze find the zmq DLL so it can pack it up. Using the Frozen Build Unpack the zip file in the desired install location. Scripts like salt-minion and salt-call will be in the root of the zip file. The associated libraries and bootstrapping will be in the directories at the same level. (Check the Esky documentation for more information) To support updating your minions in the wild, put the builds on a web server that the min‐ ions can reach. salt.modules.saltutil.update() will trigger an update and (optionally) a restart of the minion service under the new version. Troubleshooting A Windows minion isn't responding The process dispatch on Windows is slower than it is on *nix. It may be necessary to add '-t 15' to salt commands to give minions plenty of time to return. Windows and the Visual Studio Redist The Visual C++ 2008 32-bit redistributable will need to be installed on all Windows min‐ ions. Esky has an option to pack the library into the zipfile, but OpenSSL does not seem to acknowledge the new location. If a no OPENSSL_Applink error appears on the console when trying to start a frozen minion, the redistributable is not installed. Mixed Linux environments and Yum The Yum Python module doesn't appear to be available on any of the standard Python package mirrors. If RHEL/CentOS systems need to be supported, the frozen build should created on that platform to support all the Linux nodes. Remember to build the virtualenv with --sys‐ tem-site-packages so that the yum module is included. Automatic (Python) module discovery Automatic (Python) module discovery does not work with the late-loaded scheme that Salt uses for (Salt) modules. Any misbehaving modules will need to be explicitly added to the freezer_includes in Salt's setup.py. Always check the zipped application to make sure that the necessary modules were included. Multi Master Tutorial As of Salt 0.16.0, the ability to connect minions to multiple masters has been made avail‐ able. The multi-master system allows for redundancy of Salt masters and facilitates multi‐ ple points of communication out to minions. When using a multi-master setup, all masters are running hot, and any active master can be used to send commands out to the minions. NOTE: If you need failover capabilities with multiple masters, there is also a MultiMas‐ ter-PKI setup available, that uses a different topology MultiMaster-PKI with Failover Tutorial In 0.16.0, the masters do not share any information, keys need to be accepted on both mas‐ ters, and shared files need to be shared manually or use tools like the git fileserver backend to ensure that the file_roots are kept consistent. Summary of Steps 1. Create a redundant master server 2. Copy primary master key to redundant master 3. Start redundant master 4. Configure minions to connect to redundant master 5. Restart minions 6. Accept keys on redundant master Prepping a Redundant Master The first task is to prepare the redundant master. 
If the redundant master is already run‐ ning, stop it. There is only one requirement when preparing a redundant master, which is that masters share the same private key. When the first master was created, the master's identifying key pair was generated and placed in the master's pki_dir. The default loca‐ tion of the master's key pair is /etc/salt/pki/master/. Take the private key, master.pem, and copy it to the same location on the redundant master. Do the same for the master's public key, master.pub. Assuming that no minions have yet been connected to the new redun‐ dant master, it is safe to delete any existing key in this location and replace it. NOTE: There is no logical limit to the number of redundant masters that can be used. Once the new key is in place, the redundant master can be safely started. Configure Minions Since minions need to be master-aware, the new master needs to be added to the minion con‐ figurations. Simply update the minion configurations to list all connected masters: master: - saltmaster1.example.com - saltmaster2.example.com Now the minion can be safely restarted. NOTE: If the ipc_mode for the minion is set to TCP (default in Windows), then each minion in the multi-minion setup (one per master) needs its own tcp_pub_port and tcp_pull_port. If these settings are left as the default 4510/4511, each minion object will receive a port 2 higher than the previous. Thus the first minion will get 4510/4511, the second will get 4512/4513, and so on. If these port decisions are unacceptable, you must con‐ figure tcp_pub_port and tcp_pull_port with lists of ports for each master. The length of these lists should match the number of masters, and there should not be overlap in the lists. Now the minions will check into the original master and also check into the new redundant master. Both masters are first-class and have rights to the minions. NOTE: Minions can automatically detect failed masters and attempt to reconnect to reconnect to them quickly. To enable this functionality, set master_alive_interval in the minion config and specify a number of seconds to poll the masters for connection status. If this option is not set, minions will still reconnect to failed masters but the first command sent after a master comes back up may be lost while the minion authenticates. Sharing Files Between Masters Salt does not automatically share files between multiple masters. A number of files should be shared or sharing of these files should be strongly considered. Minion Keys Minion keys can be accepted the normal way using salt-key on both masters. Keys accepted, deleted, or rejected on one master will NOT be automatically managed on redundant masters; this needs to be taken care of by running salt-key on both masters or sharing the /etc/salt/pki/master/{minions,minions_pre,minions_rejected} directories between masters. NOTE: While sharing the /etc/salt/pki/master directory will work, it is strongly discouraged, since allowing access to the master.pem key outside of Salt creates a SERIOUS security risk. File_Roots The file_roots contents should be kept consistent between masters. Otherwise state runs will not always be consistent on minions since instructions managed by one master will not agree with other masters. The recommended way to sync these is to use a fileserver backend like gitfs or to keep these files on shared storage. Pillar_Roots Pillar roots should be given the same considerations as file_roots. 
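As one hedged example of keeping the fileserver contents consistent, every master could serve its states from the same git repository via the gitfs backend (the repository URL below is illustrative), with the identical configuration applied to each master:

fileserver_backend:
  - roots
  - git
gitfs_remotes:
  - https://git.example.com/salt-states.git
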
Master Configurations
While reasons may exist to maintain separate master configurations, it is wise to remember that each master maintains independent control over minions. Therefore, access controls should be kept in sync between masters unless a valid reason exists to keep them inconsistent. These access control options include, but are not limited to:
· external_auth
· client_acl
· peer
· peer_run
Multi-Master-PKI Tutorial With Failover
This tutorial explains how to run a Salt environment in which a single minion can have multiple masters and fail over between them if its current master fails. The individual steps are:
· set up the master(s) to sign their auth-replies
· set up the minion(s) to verify master public keys
· enable multiple masters on the minion(s)
· enable the master-check on the minion(s)
Please note that a good understanding of the Salt authentication and communication process is advised before following this tutorial. All of the settings described here go on top of the default authentication/communication process.
Motivation
The default behaviour of a salt-minion is to connect to a master and accept the master's public key. With each publication, the master sends its public key for the minion to check, and if this public key ever changes, the minion complains and exits. Practically, this means that there can only be a single master at any given time.
Would it not be much nicer if the minion could have any number of masters (1:n) and jump to the next master if its current master died because of a network or hardware failure?
NOTE: There is also a multi-master tutorial with a different approach and topology than this one, which might also suit your needs or might even be better suited: Multi-Master Tutorial
It is also desirable to add some sort of authenticity check to the very first public key a minion receives from a master. Currently, a minion takes the first master's public key for granted.
The Goal
Set up the master to sign the public key it sends to the minions, and enable the minions to verify this signature for authenticity.
Prepping the master to sign its public key
For signing to work, both master and minion must have the signing and/or verification settings enabled. If the master signs the public key but the minion does not verify it, the minion will complain and exit. The same happens when the master does not sign but the minion tries to verify.
The easiest way to have the master sign its public key is to set
master_sign_pubkey: True
After restarting the salt-master service, the master will automatically generate a new key pair:
master_sign.pem
master_sign.pub
A custom name can be set for the signing key pair by setting
master_sign_key_name: <name_without_suffix>
The master will then generate that key pair upon restart and use it for creating the public key's signature attached to the auth-reply.
The computation is done for every auth-request of a minion. If many minions authenticate very often, it is advised to use the conf_master:master_pubkey_signature and conf_master:master_use_pubkey_signature settings described below.
If multiple masters are in use and should sign their auth-replies, the signing key pair master_sign.* has to be copied to each master. Otherwise a minion will fail to verify the master's public key when connecting to a different master than it did initially, because that public key's signature was created with a different signing key pair.
Prepping the minion to verify received public keys
The minion must have the public key (and only that one!)
available to be able to verify a signature it receives. That public key (defaults to master_sign.pub) must be copied from the master to the minions pki-directory. /etc/salt/pki/minion/master_sign.pub DO NOT COPY THE master_sign.pem FILE. IT MUST STAY ON THE MASTER AND ONLY THERE! When that is done, enable the signature checking in the minions configuration verify_master_pubkey_sign: True and restart the minion. For the first try, the minion should be run in manual debug mode. $ salt-minion -l debug Upon connecting to the master, the following lines should appear on the output: [DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10 [DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem [DEBUG ] salt.crypt.verify_signature: Loading public key [DEBUG ] salt.crypt.verify_signature: Verifying signature [DEBUG ] Successfully verified signature of master public key with verification public k ↲ ey master_sign.pub [INFO ] Received signed and verified master pubkey from master 172.16.0.10 [DEBUG ] Decrypting the current master AES key If the signature verification fails, something went wrong and it will look like this [DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10 [DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem [DEBUG ] salt.crypt.verify_signature: Loading public key [DEBUG ] salt.crypt.verify_signature: Verifying signature [DEBUG ] Failed to verify signature of public key [CRITICAL] The Salt Master server's public key did not authenticate! In a case like this, it should be checked, that the verification pubkey (master_sign.pub) on the minion is the same as the one on the master. Once the verification is successful, the minion can be started in daemon mode again. For the paranoid among us, its also possible to verify the public whenever it is received from the master. That is, for every single auth-attempt which can be quite frequent. For example just the start of the minion will force the signature to be checked 6 times for various things like auth, mine, highstate, etc. If that is desired, enable the setting always_verify_signature: True Multiple Masters For A Minion Configuring multiple masters on a minion is done by specifying two settings: · a list of masters addresses · what type of master is defined master: - 172.16.0.10 - 172.16.0.11 - 172.16.0.12 master_type: failover This tells the minion that all the master above are available for it to connect to. When started with this configuration, it will try the master in the order they are defined. To randomize that order, set master_shuffle: True The master-list will then be shuffled before the first connection attempt. The first master that accepts the minion, is used by the minion. If the master does not yet know the minion, that counts as accepted and the minion stays on that master. For the minion to be able to detect if its still connected to its current master enable the check for it master_alive_interval: <seconds> If the loss of the connection is detected, the minion will temporarily remove the failed master from the list and try one of the other masters defined (again shuffled if that is enabled). Testing the setup At least two running masters are needed to test the failover setup. 
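For reference, the relevant minion settings from the steps above might look roughly like this (the addresses are the examples used in this tutorial and the master_alive_interval value is only an illustration):

master:
  - 172.16.0.10
  - 172.16.0.11
  - 172.16.0.12
master_type: failover
master_shuffle: True
master_alive_interval: 30
verify_master_pubkey_sign: True
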
Both masters should be running and the minion should be running on the command line in debug mode $ salt-minion -l debug The minion will connect to the first master from its master list [DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10 [DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem [DEBUG ] salt.crypt.verify_signature: Loading public key [DEBUG ] salt.crypt.verify_signature: Verifying signature [DEBUG ] Successfully verified signature of master public key with verification public k ↲ ey master_sign.pub [INFO ] Received signed and verified master pubkey from master 172.16.0.10 [DEBUG ] Decrypting the current master AES key A test.ping on the master the minion is currently connected to should be run to test con‐ nectivity. If successful, that master should be turned off. A firewall-rule denying the minions pack‐ ets will also do the trick. Depending on the configured conf_minion:master_alive_interval, the minion will notice the loss of the connection and log it to its logfile. [INFO ] Connection to master 172.16.0.10 lost [INFO ] Trying to tune in to next master from master-list The minion will then remove the current master from the list and try connecting to the next master [INFO ] Removing possibly failed master 172.16.0.10 from list of masters [WARNING ] Master ip address changed from 172.16.0.10 to 172.16.0.11 [DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.11 If everything is configured correctly, the new masters public key will be verified suc‐ cessfully [DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem [DEBUG ] salt.crypt.verify_signature: Loading public key [DEBUG ] salt.crypt.verify_signature: Verifying signature [DEBUG ] Successfully verified signature of master public key with verification public k ↲ ey master_sign.pub the authentication with the new master is successful [INFO ] Received signed and verified master pubkey from master 172.16.0.11 [DEBUG ] Decrypting the current master AES key [DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem [INFO ] Authentication with master successful! and the minion can be pinged again from its new master. Performance Tuning With the setup described above, the master computes a signature for every auth-request of a minion. With many minions and many auth-requests, that can chew up quite a bit of CPU-Power. To avoid that, the master can use a pre-created signature of its public-key. The signa‐ ture is saved as a base64 encoded string which the master reads once when starting and attaches only that string to auth-replies. Enabling this also gives paranoid users the possibility, to have the signing key-pair on a different system than the actual salt-master and create the public keys signature there. Probably on a system with more restrictive firewall rules, without internet access, less users, etc. That signature can be created with $ salt-key --gen-signature This will create a default signature file in the master pki-directory /etc/salt/pki/master/master_pubkey_signature It is a simple text-file with the binary-signature converted to base64. If no signing-pair is present yet, this will auto-create the signing pair and the signa‐ ture file in one call $ salt-key --gen-signature --auto-create Telling the master to use the pre-created signature is done with master_use_pubkey_signature: True That requires the file 'master_pubkey_signature' to be present in the masters pki-direc‐ tory with the correct signature. 
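Putting the signing-related settings together, a master that serves a pre-computed signature might be configured roughly as follows (a sketch based on the options described above, assuming the default signature file name):

master_sign_pubkey: True
master_use_pubkey_signature: True
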
If the signature file is named differently, its name can be set with master_pubkey_signature: <filename> With many masters and many public-keys (default and signing), it is advised to use the salt-masters hostname for the signature-files name. Signatures can be easily confused because they do not provide any information about the key the signature was created from. Verifying that everything works is done the same way as above. How the signing and verification works The default key-pair of the salt-master is /etc/salt/pki/master/master.pem /etc/salt/pki/master/master.pub To be able to create a signature of a message (in this case a public-key), another key-pair has to be added to the setup. Its default name is: master_sign.pem master_sign.pub The combination of the master.* and master_sign.* key-pairs give the possibility of gener‐ ating signatures. The signature of a given message is unique and can be verified, if the public-key of the signing-key-pair is available to the recipient (the minion). The signature of the masters public-key in master.pub is computed with master_sign.pem master.pub M2Crypto.EVP.sign_update() This results in a binary signature which is converted to base64 and attached to the auth-reply send to the minion. With the signing-pairs public-key available to the minion, the attached signature can be verified with master_sign.pub master.pub M2Cryptos EVP.verify_update(). When running multiple masters, either the signing key-pair has to be present on all of them, or the master_pubkey_signature has to be pre-computed for each master individually (because they all have different public-keys). DO NOT PUT THE SAME master.pub ON ALL MASTERS FOR EASE OF USE. Preseed Minion with Accepted Key In some situations, it is not convenient to wait for a minion to start before accepting its key on the master. For instance, you may want the minion to bootstrap itself as soon as it comes online. You may also want to to let your developers provision new development machines on the fly. SEE ALSO: Many ways to preseed minion keys Salt has other ways to generate and pre-accept minion keys in addition to the manual steps outlined below. salt-cloud performs these same steps automatically when new cloud VMs are created (unless instructed not to). salt-api exposes an HTTP call to Salt's REST API to generate and download the new min‐ ion keys as a tarball. There is a general four step process to do this: 1. Generate the keys on the master: root@saltmaster# salt-key --gen-keys=[key_name] Pick a name for the key, such as the minion's id. 2. Add the public key to the accepted minion folder: root@saltmaster# cp key_name.pub /etc/salt/pki/master/minions/[minion_id] It is necessary that the public key file has the same name as your minion id. This is how Salt matches minions with their keys. Also note that the pki folder could be in a differ‐ ent location, depending on your OS or if specified in the master config file. 3. Distribute the minion keys. There is no single method to get the keypair to your minion. The difficulty is finding a distribution method which is secure. For Amazon EC2 only, an AWS best practice is to use IAM Roles to pass credentials. (See blog post, http://blogs.aws.amazon.com/security/post/Tx610S2MLVZWEA/Using-IAM-roles-to-distribute-non-AW ↲ S-credentials-to-your-EC2-instances ) Security Warning Since the minion key is already accepted on the master, distributing the private key poses a potential security risk. 
A malicious party will have access to your entire state tree and other sensitive data if they gain access to a preseeded minion key. 4. Preseed the Minion with the keys You will want to place the minion keys before starting the salt-minion daemon: /etc/salt/pki/minion/minion.pem /etc/salt/pki/minion/minion.pub Once in place, you should be able to start salt-minion and run salt-call state.highstate or any other salt commands that require master authentication. Salt Bootstrap The Salt Bootstrap script allows for a user to install the Salt Minion or Master on a variety of system distributions and versions. This shell script known as bootstrap-salt.sh runs through a series of checks to determine the operating system type and version. It then installs the Salt binaries using the appropriate methods. The Salt Bootstrap script installs the minimum number of packages required to run Salt. This means that in the event you run the bootstrap to install via package, Git will not be installed. Installing the minimum number of packages helps ensure the script stays as lightweight as possible, assuming the user will install any other required packages after the Salt binaries are present on the system. The script source is available on GitHub: https://github.com/saltstack/salt-bootstrap Supported Operating Systems · Amazon Linux 2012.09 · Arch · CentOS 5/6/7 · Debian 6/7/8 · Fedora 17/18/20/21/22 · FreeBSD 9.1/9.2/10/11 · Gentoo · Linaro · Linux Mint 13/14 · OpenSUSE 12/13 · Oracle Linux 5/5 · Red Hat 5/6 · Red Hat Enterprise 5/6 · Scientific Linux 5/6 · SmartOS · SUSE Linux Enterprise 11 SP1/11 SP2/11 SP3 · Ubuntu 10.x/11.x/12.x/13.x/14.x/15.04 · Elementary OS 0.2 NOTE: In the event you do not see your distribution or version available please review the develop branch on GitHub as it main contain updates that are not present in the stable release: https://github.com/saltstack/salt-bootstrap/tree/develop Example Usage If you're looking for the one-liner to install salt, please scroll to the bottom and use the instructions for Installing via an Insecure One-Liner NOTE: In every two-step example, you would be well-served to examine the downloaded file and examine it to ensure that it does what you expect. Using curl to install latest git: curl -L https://bootstrap.saltstack.com -o install_salt.sh sudo sh install_salt.sh git develop Using wget to install your distribution's stable packages: wget -O install_salt.sh https://bootstrap.saltstack.com sudo sh install_salt.sh Install a specific version from git using wget: wget -O install_salt.sh https://bootstrap.saltstack.com sudo sh install_salt.sh -P git v0.16.4 If you already have python installed, python 2.6, then it's as easy as: python -m urllib "https://bootstrap.saltstack.com" > install_salt.sh sudo sh install_salt.sh git develop All python versions should support the following one liner: python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' ↲ > install_salt.sh sudo sh install_salt.sh git develop On a FreeBSD base system you usually don't have either of the above binaries available. 
You do have fetch available though: fetch -o install_salt.sh https://bootstrap.saltstack.com sudo sh install_salt.sh If all you want is to install a salt-master using latest git: curl -o install_salt.sh -L https://bootstrap.saltstack.com sudo sh install_salt.sh -M -N git develop If you want to install a specific release version (based on the git tags): curl -o install_salt.sh -L https://bootstrap.saltstack.com sudo sh install_salt.sh git v0.16.4 To install a specific branch from a git fork: curl -o install_salt.sh -L https://bootstrap.saltstack.com sudo sh install_salt.sh -g https://github.com/myuser/salt.git git mybranch Installing via an Insecure One-Liner The following examples illustrate how to install Salt via a one-liner. NOTE: Warning! These methods do not involve a verification step and assume that the delivered file is trustworthy. Examples Installing the latest develop branch of Salt: curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop Any of the example above which use two-lines can be made to run in a single-line configu‐ ration with minor modifications. Example Usage The Salt Bootstrap script has a wide variety of options that can be passed as well as sev‐ eral ways of obtaining the bootstrap script itself. For example, using curl to install your distribution's stable packages: curl -L https://bootstrap.saltstack.com | sudo sh Using wget to install your distribution's stable packages: wget -O - https://bootstrap.saltstack.com | sudo sh Installing the latest version available from git with curl: curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop Install a specific version from git using wget: wget -O - https://bootstrap.saltstack.com | sh -s -- -P git v0.16.4 If you already have python installed, python 2.6, then it's as easy as: python -m urllib "https://bootstrap.saltstack.com" | sudo sh -s -- git develop All python versions should support the following one liner: python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' ↲ | \ sudo sh -s -- git develop On a FreeBSD base system you usually don't have either of the above binaries available. You do have fetch available though: fetch -o - https://bootstrap.saltstack.com | sudo sh If all you want is to install a salt-master using latest git: curl -L https://bootstrap.saltstack.com | sudo sh -s -- -M -N git develop If you want to install a specific release version (based on the git tags): curl -L https://bootstrap.saltstack.com | sudo sh -s -- git v0.16.4 Downloading the develop branch (from here standard command line options may be passed): wget https://bootstrap.saltstack.com/develop Command Line Options Here's a summary of the command line options: $ sh bootstrap-salt.sh -h Usage : bootstrap-salt.sh [options] <install-type> <install-type-args> Installation types: - stable (default) - stable [version] (ubuntu specific) - daily (ubuntu specific) - testing (redhat specific) - git Examples: - bootstrap-salt.sh - bootstrap-salt.sh stable - bootstrap-salt.sh stable 2014.7 - bootstrap-salt.sh daily - bootstrap-salt.sh testing - bootstrap-salt.sh git - bootstrap-salt.sh git develop - bootstrap-salt.sh git v0.17.0 - bootstrap-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357 Options: -h Display this message -v Display script version -n No colours. -D Show debug output. -c Temporary configuration directory -g Salt repository URL. 
(default: git://github.com/saltstack/salt.git) -G Instead of cloning from git://github.com/saltstack/salt.git, clone from https://gith ↲ ub.com/saltstack/salt.git (Usually necessary on systems which have the regular git protocol port blocked, where https usually is not) -k Temporary directory holding the minion keys which will pre-seed the master. -s Sleep time used when waiting for daemons to start, restart and when checking for the services running. Default: 3 -M Also install salt-master -S Also install salt-syndic -N Do not install salt-minion -X Do not start daemons after installation -C Only run the configuration function. This option automatically bypasses any installation. -P Allow pip based installations. On some distributions the required salt packages or its dependencies are not available as a package for that distribution. Using this flag allows the script to use pip as a last resort method. NOTE: This only works for functions which actually implement pip based installations. -F Allow copied files to overwrite existing(config, init.d, etc) -U If set, fully upgrade the system prior to bootstrapping salt -K If set, keep the temporary files in the temporary directories specified with -c and -k. -I If set, allow insecure connections while downloading any files. For example, pass '--no-check-certificate' to 'wget' or '--insecure' to 'curl' -A Pass the salt-master DNS name or IP. This will be stored under ${_SALT_ETC_DIR}/minion.d/99-master-address.conf -i Pass the salt-minion id. This will be stored under ${_SALT_ETC_DIR}/minion_id -L Install the Apache Libcloud package if possible(required for salt-cloud) -p Extra-package to install while installing salt dependencies. One package per -p flag. You're responsible for providing the proper package name. -d Disable check_service functions. Setting this flag disables the 'install_<distro>_check_services' checks. You can also do this by touching /tmp/disable_salt_checks on the target host. Defaults ${BS_FALSE} -H Use the specified http proxy for the installation -Z Enable external software source for newer ZeroMQ(Only available for RHEL/CentOS/Fedo ↲ ra/Ubuntu based distributions) Git Fileserver Backend Walkthrough NOTE: This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough. The gitfs backend allows Salt to serve files from git repositories. It can be enabled by adding git to the fileserver_backend list, and configuring one or more repositories in gitfs_remotes. Branches and tags become Salt fileserver environments. Installing Dependencies Beginning with version 2014.7.0, both pygit2 and Dulwich are supported as alternatives to GitPython. The desired provider can be configured using the gitfs_provider parameter in the master config file. If gitfs_provider is not configured, then Salt will prefer pygit2 if a suitable version is available, followed by GitPython and Dulwich. NOTE: It is recommended to always run the most recent version of any the below dependencies. Certain features of gitfs may not be available without the most recent version of the chosen library. pygit2 The minimum supported version of pygit2 is 0.20.3. Availability for this version of pygit2 is still limited, though the SaltStack team is working to get compatible versions avail‐ able for as many platforms as possible. 
For the Fedora/EPEL versions which have a new enough version packaged, the following com‐ mand would be used to install pygit2: # yum install python-pygit2 Provided a valid version is packaged for Debian/Ubuntu (which is not currently the case), the package name would be the same, and the following command would be used to install it: # apt-get install python-pygit2 If pygit2 is not packaged for the platform on which the Master is running, the pygit2 web‐ site has installation instructions here. Keep in mind however that following these instructions will install libgit2 and pygit2 without system packages. Additionally, keep in mind that SSH authentication in pygit2 requires libssh2 (not libssh) development libraries to be present before libgit2 is built. On some distros (debian based) pkg-config is also required to link libgit2 with libssh2. WARNING: pygit2 is actively developed and frequently makes non-backwards-compatible API changes, even in minor releases. It is not uncommon for pygit2 upgrades to result in errors in Salt. Please take care when upgrading pygit2, and pay close attention to the changelog, keeping an eye out for API changes. Errors can be reported on the SaltStack issue tracker. GitPython GitPython 0.3.0 or newer is required to use GitPython for gitfs. For RHEL-based Linux dis‐ tros, a compatible version is available in EPEL, and can be easily installed on the master using yum: # yum install GitPython Ubuntu 14.04 LTS and Debian Wheezy (7.x) also have a compatible version packaged: # apt-get install python-git If your master is running an older version (such as Ubuntu 12.04 LTS or Debian Squeeze), then you will need to install GitPython using either pip or easy_install (it is recom‐ mended to use pip). Version 0.3.2.RC1 is now marked as the stable release in PyPI, so it should be a simple matter of running pip install GitPython (or easy_install GitPython) as root. WARNING: Keep in mind that if GitPython has been previously installed on the master using pip (even if it was subsequently uninstalled), then it may still exist in the build cache (typically /tmp/pip-build-root/GitPython) if the cache is not cleared after installa‐ tion. The package in the build cache will override any requirement specifiers, so if you try upgrading to version 0.3.2.RC1 by running pip install 'GitPython==0.3.2.RC1' then it will ignore this and simply install the version from the cache directory. Therefore, it may be necessary to delete the GitPython directory from the build cache in order to ensure that the specified version is installed. Dulwich Dulwich 0.9.4 or newer is required to use Dulwich as backend for gitfs. Dulwich is available in EPEL, and can be easily installed on the master using yum: # yum install python-dulwich For APT-based distros such as Ubuntu and Debian: # apt-get install python-dulwich IMPORTANT: If switching to Dulwich from GitPython/pygit2, or switching from GitPython/pygit2 to Dulwich, it is necessary to clear the gitfs cache to avoid unpredictable behavior. This is probably a good idea whenever switching to a new gitfs_provider, but it is less important when switching between GitPython and pygit2. Beginning in version 2015.5.0, the gitfs cache can be easily cleared using the file‐ server.clear_cache runner. salt-run fileserver.clear_cache backend=git If the Master is running an earlier version, then the cache can be cleared by removing the gitfs and file_lists/gitfs directories (both paths relative to the master cache directory, usually /var/cache/salt/master). 
rm -rf /var/cache/salt/master{,/file_lists}/gitfs
Simple Configuration
To use the gitfs backend, only two configuration changes are required on the master:
1. Include git in the fileserver_backend list in the master config file:
   fileserver_backend:
     - git
2. Specify one or more git://, https://, file://, or ssh:// URLs in gitfs_remotes to configure which repositories to cache and search for requested files:
   gitfs_remotes:
     - https://github.com/saltstack-formulas/salt-formula.git
   SSH remotes can also be configured using scp-like syntax:
   gitfs_remotes:
     - git@github.com:user/repo.git
     - ssh://user@domain.tld/path/to/repo.git
   Information on how to authenticate to SSH remotes can be found here.
   NOTE: Dulwich does not recognize ssh:// URLs; git+ssh:// must be used instead. Salt version 2015.5.0 and later will automatically add the git+ to the beginning of these URLs before fetching, but earlier Salt versions will fail to fetch unless the URL is specified using git+ssh://.
3. Restart the master to load the new configuration.
NOTE: In a master/minion setup, files from a gitfs remote are cached once by the master, so minions do not need direct access to the git repository.
Multiple Remotes
The gitfs_remotes option accepts an ordered list of git remotes to cache and search, in listed order, for requested files. A simple scenario illustrates this cascading lookup behavior:
If the gitfs_remotes option specifies three remotes:
   gitfs_remotes:
     - git://github.com/example/first.git
     - https://github.com/example/second.git
     - file:///root/third
And each repository contains some files:
   first.git:
     top.sls
     edit/vim.sls
     edit/vimrc
     nginx/init.sls
   second.git:
     edit/dev_vimrc
     haproxy/init.sls
   third:
     haproxy/haproxy.conf
     edit/dev_vimrc
Salt will attempt to look up the requested file from each gitfs remote repository in the order in which they are defined in the configuration. The git://github.com/example/first.git remote will be searched first. If the requested file is found, then it is served and no further searching is executed. For example:
· A request for the file salt://haproxy/init.sls will be served from the https://github.com/example/second.git git repo.
· A request for the file salt://haproxy/haproxy.conf will be served from the file:///root/third repo.
NOTE: This example is purposefully contrived to illustrate the behavior of the gitfs backend. This example should not be read as a recommended way to lay out files and git repos.
The file:// prefix denotes a git repository in a local directory. However, it will still use the given file:// URL as a remote, rather than copying the git repo to the salt cache. This means that any refs you want accessible must exist as local refs in the specified repo.
WARNING: Salt versions prior to 2014.1.0 are not tolerant of changing the order of remotes or modifying the URI of existing remotes. In those versions, when modifying remotes it is a good idea to remove the gitfs cache directory (/var/cache/salt/master/gitfs) before restarting the salt-master service.
Per-remote Configuration Parameters
New in version 2014.7.0.
The following master config parameters are global (that is, they apply to all configured gitfs remotes): · gitfs_base · gitfs_root · gitfs_mountpoint (new in 2014.7.0) · gitfs_user (pygit2 only, new in 2014.7.0) · gitfs_password (pygit2 only, new in 2014.7.0) · gitfs_insecure_auth (pygit2 only, new in 2014.7.0) · gitfs_pubkey (pygit2 only, new in 2014.7.0) · gitfs_privkey (pygit2 only, new in 2014.7.0) · gitfs_passphrase (pygit2 only, new in 2014.7.0) These parameters can now be overridden on a per-remote basis. This allows for a tremendous amount of customization. Here's some example usage: gitfs_provider: pygit2 gitfs_base: develop gitfs_remotes: - https://foo.com/foo.git - https://foo.com/bar.git: - root: salt - mountpoint: salt://foo/bar/baz - base: salt-base - http://foo.com/baz.git: - root: salt/states - user: joe - password: mysupersecretpassword - insecure_auth: True IMPORTANT: There are two important distinctions which should be noted for per-remote configura‐ tion: 1. The URL of a remote which has per-remote configuration must be suffixed with a colon. 2. Per-remote configuration parameters are named like the global versions, with the gitfs_ removed from the beginning. In the example configuration above, the following is true: 1. The first and third gitfs remotes will use the develop branch/tag as the base environ‐ ment, while the second one will use the salt-base branch/tag as the base environment. 2. The first remote will serve all files in the repository. The second remote will only serve files from the salt directory (and its subdirectories), while the third remote will only serve files from the salt/states directory (and its subdirectories). 3. The files from the second remote will be located under salt://foo/bar/baz, while the files from the first and third remotes will be located under the root of the Salt file‐ server namespace (salt://). 4. The third remote overrides the default behavior of not authenticating to insecure (non-HTTPS) remotes. Serving from a Subdirectory The gitfs_root parameter allows files to be served from a subdirectory within the reposi‐ tory. This allows for only part of a repository to be exposed to the Salt fileserver. Assume the below layout: .gitignore README.txt foo/ foo/bar/ foo/bar/one.txt foo/bar/two.txt foo/bar/three.txt foo/baz/ foo/baz/top.sls foo/baz/edit/vim.sls foo/baz/edit/vimrc foo/baz/nginx/init.sls The below configuration would serve only the files under foo/baz, ignoring the other files in the repository: gitfs_remotes: - git://mydomain.com/stuff.git gitfs_root: foo/baz The root can also be configured on a per-remote basis. Mountpoints New in version 2014.7.0. The gitfs_mountpoint parameter will prepend the specified path to the files served from gitfs. This allows an existing repository to be used, rather than needing to reorganize a repository or design it around the layout of the Salt fileserver. Before the addition of this feature, if a file being served up via gitfs was deeply nested within the root directory (for example, salt://webapps/foo/files/foo.conf, it would be necessary to ensure that the file was properly located in the remote repository, and that all of the the parent directories were present (for example, the directories webapps/foo/files/ would need to exist at the root of the repository). The below example would allow for a file foo.conf at the root of the repository to be served up from the Salt fileserver path salt://webapps/foo/files/foo.conf. 
gitfs_remotes: - https://mydomain.com/stuff.git gitfs_mountpoint: salt://webapps/foo/files Mountpoints can also be configured on a per-remote basis. Using gitfs Alongside Other Backends Sometimes it may make sense to use multiple backends; for instance, if sls files are stored in git but larger files are stored directly on the master. The cascading lookup logic used for multiple remotes is also used with multiple backends. If the fileserver_backend option contains multiple backends: fileserver_backend: - roots - git Then the roots backend (the default backend of files in /srv/salt) will be searched first for the requested file; then, if it is not found on the master, each configured git remote will be searched. Branches, Environments, and Top Files When using the gitfs backend, branches, and tags will be mapped to environments using the branch/tag name as an identifier. There is one exception to this rule: the master branch is implicitly mapped to the base environment. So, for a typical base, qa, dev setup, the following branches could be used: master qa dev top.sls files from different branches will be merged into one at runtime. Since this can lead to overly complex configurations, the recommended setup is to have a separate reposi‐ tory, containing only the top.sls file with just one single master branch. To map a branch other than master as the base environment, use the gitfs_base parameter. gitfs_base: salt-base The base can also be configured on a per-remote basis. Environment Whitelist/Blacklist New in version 2014.7.0. The gitfs_env_whitelist and gitfs_env_blacklist parameters allow for greater control over which branches/tags are exposed as fileserver environments. Exact matches, globs, and reg‐ ular expressions are supported, and are evaluated in that order. If using a regular expression, ^ and $ must be omitted, and the expression must match the entire branch/tag. gitfs_env_whitelist: - base - v1.* - 'mybranch\d+' NOTE: v1.*, in this example, will match as both a glob and a regular expression (though it will have been matched as a glob, since globs are evaluated before regular expres‐ sions). The behavior of the blacklist/whitelist will differ depending on which combination of the two options is used: · If only gitfs_env_whitelist is used, then only branches/tags which match the whitelist will be available as environments · If only gitfs_env_blacklist is used, then the branches/tags which match the blacklist will not be available as environments · If both are used, then the branches/tags which match the whitelist, but do not match the blacklist, will be available as environments. Authentication pygit2 New in version 2014.7.0. Both HTTPS and SSH authentication are supported as of version 0.20.3, which is the earli‐ est version of pygit2 supported by Salt for gitfs. NOTE: The examples below make use of per-remote configuration parameters, a feature new to Salt 2014.7.0. More information on these can be found here. HTTPS For HTTPS repositories which require authentication, the username and password can be pro‐ vided like so: gitfs_remotes: - https://domain.tld/myrepo.git: - user: git - password: mypassword If the repository is served over HTTP instead of HTTPS, then Salt will by default refuse to authenticate to it. 
This behavior can be overridden by adding an insecure_auth parameter:
   gitfs_remotes:
     - http://domain.tld/insecure_repo.git:
       - user: git
       - password: mypassword
       - insecure_auth: True
SSH
SSH repositories can be configured using the ssh:// protocol designation, or using scp-like syntax. So, the following two configurations are equivalent:
· ssh://git@github.com/user/repo.git
· git@github.com:user/repo.git
Both gitfs_pubkey and gitfs_privkey (or their per-remote counterparts) must be configured in order to authenticate to SSH-based repos. If the private key is protected with a passphrase, it can be configured using gitfs_passphrase (or simply passphrase if being configured per-remote). For example:
   gitfs_remotes:
     - git@github.com:user/repo.git:
       - pubkey: /root/.ssh/id_rsa.pub
       - privkey: /root/.ssh/id_rsa
       - passphrase: myawesomepassphrase
Finally, the SSH host key must be added to the known_hosts file.
GitPython
With GitPython, only passphrase-less SSH public key authentication is supported. The auth parameters (pubkey, privkey, etc.) shown in the pygit2 authentication examples above do not work with GitPython.
   gitfs_remotes:
     - ssh://git@github.com/example/salt-states.git
Since GitPython wraps the git CLI, the private key must be located in ~/.ssh/id_rsa for the user under which the Master is running, and should have permissions of 0600. Also, in the absence of a user in the repo URL, GitPython will (just as SSH does) attempt to log in as the current user (in other words, the user under which the Master is running, usually root).
If a key needs to be used, then ~/.ssh/config can be configured to use the desired key. Information on how to do this can be found by viewing the manpage for ssh_config. Here's an example entry which can be added to the ~/.ssh/config to use an alternate key for gitfs:
   Host github.com
     IdentityFile /root/.ssh/id_rsa_gitfs
The Host parameter should be a hostname (or hostname glob) that matches the domain name of the git repository.
It is also necessary to add the SSH host key to the known_hosts file. The exception to this would be if strict host key checking is disabled, which can be done by adding StrictHostKeyChecking no to the entry in ~/.ssh/config
   Host github.com
     IdentityFile /root/.ssh/id_rsa_gitfs
     StrictHostKeyChecking no
However, this is generally regarded as insecure, and is not recommended.
Adding the SSH Host Key to the known_hosts File
To use SSH authentication, it is necessary to have the remote repository's SSH host key in the ~/.ssh/known_hosts file. If the master is also a minion, this can be done using the ssh.set_known_host function:
   # salt mymaster ssh.set_known_host user=root hostname=github.com
   mymaster:
       ----------
       new:
           ----------
           enc:
               ssh-rsa
           fingerprint:
               16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
           hostname:
               |1|OiefWWqOD4kwO3BhoIGa0loR5AA=|BIXVtmcTbPER+68HvXmceodDcfI=
           key:
               AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
       old:
           None
       status:
           updated
If not, then the easiest way to add the key is to su to the user (usually root) under which the salt-master runs and attempt to log in to the server via SSH:
   $ su
   Password:
   # ssh github.com
   The authenticity of host 'github.com (192.30.252.128)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'github.com,192.30.252.128' (RSA) to the list of known hosts. Permission denied (publickey). It doesn't matter if the login was successful, as answering yes will write the fingerprint to the known_hosts file. Verifying the Fingerprint To verify that the correct fingerprint was added, it is a good idea to look it up. One way to do this is to use nmap: $ nmap github.com --script ssh-hostkey Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-18 17:47 CDT Nmap scan report for github.com (192.30.252.129) Host is up (0.17s latency). Not shown: 996 filtered ports PORT STATE SERVICE 22/tcp open ssh | ssh-hostkey: 1024 ad:1c:08:a4:40:e3:6f:9c:f5:66:26:5d:4b:33:5d:8c (DSA) |_2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 (RSA) 80/tcp open http 443/tcp open https 9418/tcp open git Nmap done: 1 IP address (1 host up) scanned in 28.78 seconds Another way is to check one's own known_hosts file, using this one-liner: $ ssh-keygen -l -f /dev/stdin <<<`ssh-keyscan -t rsa github.com 2>/dev/null` | awk '{print ↲ $2}' 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 Refreshing gitfs Upon Push By default, Salt updates the remote fileserver backends every 60 seconds. However, if it is desirable to refresh quicker than that, the Reactor System can be used to signal the master to update the fileserver on each push, provided that the git server is also a Salt minion. There are three steps to this process: 1. On the master, create a file /srv/reactor/update_fileserver.sls, with the following contents: update_fileserver: runner.fileserver.update 2. Add the following reactor configuration to the master config file: reactor: - 'salt/fileserver/gitfs/update': - /srv/reactor/update_fileserver.sls 3. On the git server, add a post-receive hook with the following contents: #!/usr/bin/env sh salt-call event.fire_master update salt/fileserver/gitfs/update The update argument right after event.fire_master in this example can really be anything, as it represents the data being passed in the event, and the passed data is ignored by this reactor. Similarly, the tag name salt/fileserver/gitfs/update can be replaced by anything, so long as the usage is consistent. Using Git as an External Pillar Source The git external pillar (a.k.a. git_pillar) has been rewritten for the 2015.8.0 release. This rewrite brings with it pygit2 support (allowing for access to authenticated reposito‐ ries), as well as more granular support for per-remote configuration. To make use of the new features, changes to the git ext_pillar configuration must be made. The new configuration schema is detailed here. For Salt releases before 2015.8.0, click here for documentation. Why aren't my custom modules/states/etc. syncing to my Minions? In versions 0.16.3 and older, when using the git fileserver backend, certain versions of GitPython may generate errors when fetching, which Salt fails to catch. While not fatal to the fetch process, these interrupt the fileserver update that takes place before custom types are synced, and thus interrupt the sync itself. Try disabling the git fileserver backend in the master config, restarting the master, and attempting the sync again. This issue is worked around in Salt 0.16.4 and newer. 
The MacOS X (Maverick) Developer Step By Step Guide To Salt Installation This document provides a step-by-step guide to installing a Salt cluster consisting of one master, and one minion running on a local VM hosted on Mac OS X. NOTE: This guide is aimed at developers who wish to run Salt in a virtual machine. The offi‐ cial (Linux) walkthrough can be found here. The 5 Cent Salt Intro Since you're here you've probably already heard about Salt, so you already know Salt lets you configure and run commands on hordes of servers easily. Here's a brief overview of a Salt cluster: · Salt works by having a "master" server sending commands to one or multiple "minion" servers [1]. The master server is the "command center". It is going to be the place where you store your configuration files, aka: "which server is the db, which is the web server, and what libraries and software they should have installed". The minions receive orders from the master. Minions are the servers actually performing work for your busi‐ ness. · Salt has two types of configuration files: 1. the "salt communication channels" or "meta" or "config" configuration files (not official names): one for the master (usually is /etc/salt/master , on the master server), and one for minions (default is /etc/salt/minion or /etc/salt/minion.conf, on the minion servers). Those files are used to determine things like the Salt Master IP, port, Salt folder locations, etc.. If these are configured incorrectly, your minions will probably be unable to receive orders from the master, or the master will not know which software a given minion should install. 2. the "business" or "service" configuration files (once again, not an official name): these are configuration files, ending with ".sls" extension, that describe which soft‐ ware should run on which server, along with particular configuration properties for the software that is being installed. These files should be created in the /srv/salt folder by default, but their location can be changed using ... /etc/salt/master configuration file! NOTE: This tutorial contains a third important configuration file, not to be confused with the previous two: the virtual machine provisioning configuration file. This in itself is not specifically tied to Salt, but it also contains some Salt configuration. More on that in step 3. Also note that all configuration files are YAML files. So indentation matters. [1] Salt also works with "masterless" configuration where a minion is autonomous (in which case salt can be seen as a local configuration tool), or in "multiple master" configuration. See the documentation for more on that. Before Digging In, The Architecture Of The Salt Cluster Salt Master The "Salt master" server is going to be the Mac OS machine, directly. Commands will be run from a terminal app, so Salt will need to be installed on the Mac. This is going to be more convenient for toying around with configuration files. Salt Minion We'll only have one "Salt minion" server. It is going to be running on a Virtual Machine running on the Mac, using VirtualBox. It will run an Ubuntu distribution. Step 1 - Configuring The Salt Master On Your Mac official documentation Because Salt has a lot of dependencies that are not built in Mac OS X, we will use Home‐ brew to install Salt. Homebrew is a package manager for Mac, it's great, use it (for this tutorial at least!). 
Some people spend a lot of time installing libs by hand to better understand dependencies, and then realize how useful a package manager is once they're configuring a brand new machine and have to do it all over again. It also lets you unin‐ stall things easily. NOTE: Brew is a Ruby program (Ruby is installed by default with your Mac). Brew downloads, compiles, and links software. The linking phase is when compiled software is deployed on your machine. It may conflict with manually installed software, especially in the /usr/local directory. It's ok, remove the manually installed version then refresh the link by typing brew link 'packageName'. Brew has a brew doctor command that can help you troubleshoot. It's a great command, use it often. Brew requires xcode command line tools. When you run brew the first time it asks you to install them if they're not already on your system. Brew installs software in /usr/local/bin (system bins are in /usr/bin). In order to use those bins you need your $PATH to search there first. Brew tells you if your $PATH needs to be fixed. TIP: Use the keyboard shortcut cmd + shift + period in the "open" Mac OS X dialog box to display hidden files and folders, such as .profile. Install Homebrew Install Homebrew here http://brew.sh/ Or just type ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)" Now type the following commands in your terminal (you may want to type brew doctor after each to make sure everything's fine): brew install python brew install swig brew install zmq NOTE: zmq is ZeroMQ. It's a fantastic library used for server to server network communication and is at the core of Salt efficiency. Install Salt You should now have everything ready to launch this command: pip install salt NOTE: There should be no need for sudo pip install salt. Brew installed Python for your user, so you should have all the access. In case you would like to check, type which python to ensure that it's /usr/local/bin/python, and which pip which should be /usr/local/bin/pip. Now type python in a terminal then, import salt. There should be no errors. Now exit the Python terminal using exit(). Create The Master Configuration If the default /etc/salt/master configuration file was not created, copy-paste it from here: http://docs.saltstack.com/ref/configuration/examples.html#configuration-examples-master NOTE: /etc/salt/master is a file, not a folder. Salt Master configuration changes. The Salt master needs a few customization to be able to run on Mac OS X: sudo launchctl limit maxfiles 4096 8192 In the /etc/salt/master file, change max_open_files to 8192 (or just add the line: max_open_files: 8192 (no quote) if it doesn't already exists). You should now be able to launch the Salt master: sudo salt-master --log-level=all There should be no errors when running the above command. NOTE: This command is supposed to be a daemon, but for toying around, we'll keep it running on a terminal to monitor the activity. Now that the master is set, let's configure a minion on a VM. Step 2 - Configuring The Minion VM The Salt minion is going to run on a Virtual Machine. There are a lot of software options that let you run virtual machines on a mac, But for this tutorial we're going to use Vir‐ tualBox. In addition to virtualBox, we will use Vagrant, which allows you to create the base VM configuration. Vagrant lets you build ready to use VM images, starting from an OS image and customizing it using "provisioners". 
In our case, we'll use it to:
· Download the base Ubuntu image
· Install salt on that Ubuntu image (Salt is going to be the "provisioner" for the VM).
· Launch the VM
· SSH into the VM to debug
· Stop the VM once you're done.
Install VirtualBox
Go get it here: https://www.virtualBox.org/wiki/Downloads (click on VirtualBox for OS X hosts => x86/amd64)
Install Vagrant
Go get it here: http://downloads.vagrantup.com/ and choose the latest version (1.3.5 at time of writing), then the .dmg file. Double-click to install it. Make sure the vagrant command is found when run in the terminal. Type vagrant. It should display a list of commands.
Create The Minion VM Folder
Create a folder in which you will store your minion's VM. In this tutorial, it's going to be a minion folder in the $home directory.
   cd $home
   mkdir minion
Initialize Vagrant
From the minion folder, type
   vagrant init
This command creates a default Vagrantfile configuration file. This configuration file will be used to pass configuration parameters to the Salt provisioner in Step 3.
Import Precise64 Ubuntu Box
   vagrant box add precise64 http://files.vagrantup.com/precise64.box
NOTE: This box is added at the global Vagrant level. You only need to do it once as each VM will use this same file.
Modify the Vagrantfile
Modify ./minion/Vagrantfile to use the precise64 box. Change the config.vm.box line to:
   config.vm.box = "precise64"
Uncomment the line creating a host-only IP. This is the IP of your minion (you can change it to something else if that IP is already in use):
   config.vm.network :private_network, ip: "192.168.33.10"
At this point you should have a VM that can run, although there won't be much in it. Let's check that.
Checking The VM
From the $home/minion folder type:
   vagrant up
A log showing the VM booting should be present. Once it's done you'll be back to the terminal:
   ping 192.168.33.10
The VM should respond to your ping request. Now log into the VM via ssh using Vagrant again:
   vagrant ssh
You should see the shell prompt change to something similar to vagrant@precise64:~$ meaning you're inside the VM. From there, enter the following:
   ping 10.0.2.2
NOTE: That IP is the IP of your VM host (the Mac OS X machine). The number is a VirtualBox default and is displayed in the log after the vagrant ssh command. We'll use that IP to tell the minion where the Salt master is. Once you're done, end the ssh session by typing exit.
It's now time to connect the VM to the salt master.
Step 3 - Connecting Master and Minion
Creating The Minion Configuration File
Create the /etc/salt/minion file. In that file, put the following lines, giving the ID for this minion, and the IP of the master:
   master: 10.0.2.2
   id: 'minion1'
   file_client: remote
Minions authenticate with the master using keys. Keys are generated automatically if you don't provide them, and you can accept them on the master later on. However, this requires accepting the minion key every time the minion is destroyed or created (which could be quite often). A better way is to create those keys in advance, feed them to the minion, and authorize them once.
Preseed minion keys
From the minion folder on your Mac run:
   sudo salt-key --gen-keys=minion1
This should create two files: minion1.pem and minion1.pub.
Since those files have been created using sudo, but will be used by vagrant, you need to change ownership: sudo chown youruser:yourgroup minion1.pem sudo chown youruser:yourgroup minion1.pub Then copy the .pub file into the list of accepted minions: sudo cp minion1.pub /etc/salt/pki/master/minions/minion1 Modify Vagrantfile to Use Salt Provisioner Let's now modify the Vagrantfile used to provision the Salt VM. Add the following section in the Vagrantfile (note: it should be at the same indentation level as the other proper‐ ties): # salt-vagrant config config.vm.provision :salt do |salt| salt.run_highstate = true salt.minion_config = "/etc/salt/minion" salt.minion_key = "./minion1.pem" salt.minion_pub = "./minion1.pub" end Now destroy the vm and recreate it from the /minion folder: vagrant destroy vagrant up If everything is fine you should see the following message: "Bootstrapping Salt... (this may take a while) Salt successfully configured and installed!" Checking Master-Minion Communication To make sure the master and minion are talking to each other, enter the following: sudo salt '*' test.ping You should see your minion answering the ping. It's now time to do some configuration. Step 4 - Configure Services to Install On the Minion In this step we'll use the Salt master to instruct our minion to install Nginx. Checking the system's original state First, make sure that an HTTP server is not installed on our minion. When opening a browser directed at http://192.168.33.10/ You should get an error saying the site cannot be reached. Initialize the top.sls file System configuration is done in the /srv/salt/top.sls file (and subfiles/folders), and then applied by running the state.highstate command to have the Salt master give orders so minions will update their instructions and run the associated commands. First Create an empty file on your Salt master (Mac OS X machine): touch /srv/salt/top.sls When the file is empty, or if no configuration is found for our minion an error is reported: sudo salt 'minion1' state.highstate Should return an error stating: "No Top file or external nodes data matches found". Create The Nginx Configuration Now is finally the time to enter the real meat of our server's configuration. For this tutorial our minion will be treated as a web server that needs to have Nginx installed. Insert the following lines into the /srv/salt/top.sls file (which should current be empty). base: 'minion1': - bin.nginx Now create a /srv/salt/bin/nginx.sls file containing the following: nginx: pkg.installed: - name: nginx service.running: - enable: True - reload: True Check Minion State Finally run the state.highstate command again: sudo salt 'minion1' state.highstate You should see a log showing that the Nginx package has been installed and the service configured. To prove it, open your browser and navigate to http://192.168.33.10/, you should see the standard Nginx welcome page. Congratulations! Where To Go From Here A full description of configuration management within Salt (sls files among other things) is available here: http://docs.saltstack.com/en/latest/index.html#configuration-management Salt's Test Suite: An Introduction NOTE: This tutorial makes a couple of assumptions. The first assumption is that you have a basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough. The second assumption is that your Salt development environment is already configured and that you have a basic understanding of contributing to the Salt codebase. 
If you're unfamiliar with either of these topics, please refer to the Installing Salt for Devel‐ opment and the Contributing pages, respectively. Salt comes with a powerful integration and unit test suite. The test suite allows for the fully automated run of integration and/or unit tests from a single interface. Salt's test suite is located under the tests directory in the root of Salt's code base and is divided into two main types of tests: unit tests and integration tests. The unit and integration sub test suites are located in the tests directory, which is where the major‐ ity of Salt's test cases are housed. Getting Set Up For Tests There are a couple of requirements, in addition to Salt's requirements, that need to be installed in order to run Salt's test suite. You can install these additional requirements using the files located in the salt/requirements directory, depending on your relevant version of Python: pip install -r requirements/dev_python26.txt pip install -r requirements/dev_python27.txt Test Directory Structure As noted in the introduction to this tutorial, Salt's test suite is located in the tests directory in the root of Salt's code base. From there, the tests are divided into two groups integration and unit. Within each of these directories, the directory structure roughly mirrors the directory structure of Salt's own codebase. For example, the files inside tests/integration/modules contains tests for the files located within salt/modules. NOTE: tests/integration and tests/unit are the only directories discussed in this tutorial. With the exception of the tests/runtests.py file, which is used below in the Running the Test Suite section, the other directories and files located in tests are outside the scope of this tutorial. Integration vs. Unit Given that Salt's test suite contains two powerful, though very different, testing approaches, when should you write integration tests and when should you write unit tests? Integration tests use Salt masters, minions, and a syndic to test salt functionality directly and focus on testing the interaction of these components. Salt's integration test runner includes functionality to run Salt execution modules, runners, states, shell com‐ mands, salt-ssh commands, salt-api commands, and more. This provides a tremendous ability to use Salt to test itself and makes writing such tests a breeze. Integration tests are the preferred method of testing Salt functionality when possible. Unit tests do not spin up any Salt daemons, but instead find their value in testing singu‐ lar implementations of individual functions. Instead of testing against specific interac‐ tions, unit tests should be used to test a function's logic. Unit tests should be used to test a function's exit point(s) such as any return or raises statements. Unit tests are also useful in cases where writing an integration test might not be possi‐ ble. While the integration test suite is extremely powerful, unfortunately at this time, it does not cover all functional areas of Salt's ecosystem. For example, at the time of this writing, there is not a way to write integration tests for Proxy Minions. Since the test runner will need to be adjusted to account for Proxy Minion processes, unit tests can still provide some testing support in the interim by testing the logic contained inside Proxy Minion functions. 
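As a concrete, purely illustrative example of the unit-test style described above, the following self-contained test exercises both exit points of a small helper function. The normalize_host function is hypothetical and not part of Salt, and the test uses plain unittest conventions rather than the full set of Salt test-suite requirements covered later in this tutorial:
   import unittest

   def normalize_host(name):
       # Hypothetical helper, used only to illustrate testing exit points.
       if not name:
           raise ValueError('empty hostname')
       return name.strip().lower()

   class NormalizeHostTestCase(unittest.TestCase):
       def test_return(self):
           # Assert against the function's return exit point.
           self.assertEqual(normalize_host(' Web01 '), 'web01')

       def test_raises(self):
           # Assert against the function's raise exit point.
           self.assertRaises(ValueError, normalize_host, '')

   if __name__ == '__main__':
       unittest.main()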
Running the Test Suite Once all of the `requirements<Getting Set Up For Tests>`_ are installed, the runtests.py file in the salt/tests directory is used to instantiate Salt's test suite: python tests/runtests.py [OPTIONS] The command above, if executed without any options, will run the entire suite of integra‐ tion and unit tests. Some tests require certain flags to run, such as destructive tests. If these flags are not included, then the test suite will only perform the tests that don't require special attention. At the end of the test run, you will see a summary output of the tests that passed, failed, or were skipped. The test runner also includes a --help option that lists all of the various command line options: python tests/runtests.py --help You can also call the test runner as an executable: ./tests/runtests.py --help Running Integration Tests Salt's set of integration tests use Salt to test itself. The integration portion of the test suite includes some built-in Salt daemons that will spin up in preparation of the test run. This list of Salt daemon processes includes: · 2 Salt Masters · 2 Salt Minions · 1 Salt Syndic These various daemons are used to execute Salt commands and functionality within the test suite, allowing you to write tests to assert against expected or unexpected behaviors. A simple example of a test utilizing a typical master/minion execution module command is the test for the test_ping function in the tests/integration/modules/test.py file: def test_ping(self): ''' test.ping ''' self.assertTrue(self.run_function('test.ping')) The test above is a very simple example where the test.ping function is executed by Salt's test suite runner and is asserting that the minion returned with a True response. Test Selection Options If you look in the output of the --help command of the test runner, you will see a section called Tests Selection Options. The options under this section contain various subsections of the integration test suite such as --modules, --ssh, or --states. By selecting any one of these options, the test daemons will spin up and the integration tests in the named subsection will run. ./tests/runtests.py --modules NOTE: The testing subsections listed in the Tests Selection Options of the --help output only apply to the integration tests. They do not run unit tests. Running Unit Tests While ./tests/runtests.py executes the entire test suite (barring any tests requiring spe‐ cial flags), the --unit flag can be used to run only Salt's unit tests. Salt's unit tests include the tests located in the tests/unit directory. The unit tests do not spin up any Salt testing daemons as the integration tests do and execute very quickly compared to the integration tests. ./tests/runtests.py --unit Running Specific Tests There are times when a specific test file, test class, or even a single, individual test need to be executed, such as when writing new tests. In these situations, the --name option should be used. For running a single test file, such as the pillar module test file in the integration test directory, you must provide the file path using . instead of / as separators and no file extension: ./tests/runtests.py --name=integration.modules.pillar ./tests/runtests.py -n integration.modules.pillar Some test files contain only one test class while other test files contain multiple test classes. 
To run a specific test class within the file, append the name of the test class to the end of the file path: ./tests/runtests.py --name=integration.modules.pillar.PillarModuleTest ./tests/runtests.py -n integration.modules.pillar.PillarModuleTest To run a single test within a file, append both the name of the test class the individual test belongs to, as well as the name of the test itself: ./tests/runtests.py --name=integration.modules.pillar.PillarModuleTest.test_data ./tests/runtests.py -n integration.modules.pillar.PillarModuleTest.test_data The --name and -n options can be used for unit tests as well as integration tests. The following command is an example of how to execute a single test found in the tests/unit/modules/cp_test.py file: ./tests/runtests.py -n unit.modules.cp_test.CpTestCase.test_get_template_success Writing Tests for Salt Once you're comfortable running tests, you can now start writing them! Be sure to review the Integration vs. Unit section of this tutorial to determine what type of test makes the most sense for the code you're testing. NOTE: There are many decorators, naming conventions, and code specifications required for Salt test files. We will not be covering all of the these specifics in this tutorial. Please refer to the testing documentation links listed below in the Additional Testing Documentation section to learn more about these requirements. In the following sections, the test examples assume the "new" test is added to a test file that is already present and regularly running in the test suite and is written with the correct requirements. Writing Integration Tests Since integration tests validate against a running environment, as explained in the Running Integration Tests section of this tutorial, integration tests are very easy to write and are generally the preferred method of writing Salt tests. The following integration test is an example taken from the test.py file in the tests/integration/modules directory. This test uses the run_function method to test the functionality of a traditional execution module command. The run_function method uses the integration test daemons to execute a module.function command as you would with Salt. The minion runs the function and returns. The test also uses Python's Assert Functions to test that the minion's return is expected. def test_ping(self): ''' test.ping ''' self.assertTrue(self.run_function('test.ping')) Args can be passed in to the run_function method as well: def test_echo(self): ''' test.echo ''' self.assertEqual(self.run_function('test.echo', ['text']), 'text') The next example is taken from the tests/integration/modules/aliases.py file and demon‐ strates how to pass kwargs to the run_function call. Also note that this test uses another salt function to ensure the correct data is present (via the aliases.set_target call) before attempting to assert what the aliases.get_target call should return. def test_set_target(self): ''' aliases.set_target and aliases.get_target ''' set_ret = self.run_function( 'aliases.set_target', alias='fred', target='bob') self.assertTrue(set_ret) tgt_ret = self.run_function( 'aliases.get_target', alias='fred') self.assertEqual(tgt_ret, 'bob') Using multiple Salt commands in this manor provides two useful benefits. The first is that it provides some additional coverage for the aliases.set_target function. The second ben‐ efit is the call to aliases.get_target is not dependent on the presence of any aliases set outside of this test. 
Tests should not be dependent on the previous execution, success, or failure of other tests. They should be isolated from other tests as much as possible. While it might be tempting to build out a test file where tests depend on one another before running, this should be avoided. SaltStack recommends that each test should test a single functionality and not rely on other tests. Therefore, when possible, individual tests should also be broken up into singular pieces. These are not hard-and-fast rules, but serve more as recommendations to keep the test suite simple. This helps with debug‐ ging code and related tests when failures occur and problems are exposed. There may be instances where large tests use many asserts to set up a use case that protects against potential regressions. NOTE: The examples above all use the run_function option to test execution module functions in a traditional master/minion environment. To see examples of how to test other common Salt components such as runners, salt-api, and more, please refer to the Integration Test Class Examples documentation. Destructive vs Non-destructive Tests Since Salt is used to change the settings and behavior of systems, often, the best approach to run tests is to make actual changes to an underlying system. This is where the concept of destructive integration tests comes into play. Tests can be written to alter the system they are running on. This capability is what fills in the gap needed to properly test aspects of system management like package installation. To write a destructive test, import and use the destructiveTest decorator for the test method: import integration from salttesting.helpers import destructiveTest class PkgTest(integration.ModuleCase): @destructiveTest def test_pkg_install(self): ret = self.run_function('pkg.install', name='finch') self.assertSaltTrueReturn(ret) ret = self.run_function('pkg.purge', name='finch') self.assertSaltTrueReturn(ret) Writing Unit Tests As explained in the Integration vs. Unit section above, unit tests should be written to test the logic of a function. This includes focusing on testing return and raises state‐ ments. Substantial effort should be made to mock external resources that are used in the code being tested. External resources that should be mocked include, but are not limited to, APIs, function calls, external data either globally available or passed in through function arguments, file data, etc. This practice helps to isolate unit tests to test Salt logic. One handy way to think about writing unit tests is to "block all of the exits". More information about how to properly mock external resources can be found in Salt's Unit Test documenta‐ tion. Salt's unit tests utilize Python's mock class as well as MagicMock. The @patch decorator is also heavily used when "blocking all the exits". A simple example of a unit test currently in use in Salt is the test_get_file_not_found test in the tests/unit/modules/cp_test.py file. This test uses the @patch decorator and MagicMock to mock the return of the call to Salt's cp.hash_file execution module function. This ensures that we're testing the cp.get_file function directly, instead of inadver‐ tently testing the call to cp.hash_file, which is used in cp.get_file. @patch('salt.modules.cp.hash_file', MagicMock(return_value=False)) def test_get_file_not_found(self): ''' Test if get_file can't find the file. 
   '''
   path = 'salt://saltines'
   dest = '/srv/salt/cheese'
   ret = ''
   self.assertEqual(cp.get_file(path, dest), ret)
Note that Salt's cp module is imported at the top of the file, along with all of the other necessary testing imports. The get_file function is then called directly in the testing function, instead of using the run_function method as the integration test examples do above.
The call to cp.get_file returns an empty string when a hash_file isn't found. Therefore, the example above is a good illustration of a unit test "blocking the exits" via the @patch decorator, as well as testing logic via asserting against the return statement in the if clause.
There are more examples of writing unit tests of varying complexities available in the following docs:
· Simple Unit Test Example
· Complete Unit Test Example
· Complex Unit Test Example
NOTE: Considerable care should be taken to ensure that you're testing something useful in your test functions. It is very easy to fall into a situation where you have mocked so much of the original function that the test results in only asserting against the data you have provided. This results in a poor and fragile unit test.
Automated Test Runs
SaltStack maintains a Jenkins server which can be viewed at http://jenkins.saltstack.com. The tests executed from this Jenkins server create fresh virtual machines for each test run, then execute the destructive tests on the new, clean virtual machine. This allows for the execution of tests across supported platforms.
Additional Testing Documentation
In addition to this tutorial, there are some other helpful resources and documentation that go into more depth on Salt's test runner, writing tests for Salt code, and general Python testing documentation. Please see the following references for more information:
· Salt's Test Suite Documentation
· Integration Tests
· Unit Tests
· MagicMock
· Python Unittest
· Python's Assert Functions
HTTP Modules
This tutorial demonstrates using the various HTTP modules available in Salt. These modules wrap the Python tornado, urllib2, and requests libraries, extending them in a manner that is more consistent with Salt workflows.
The salt.utils.http Library
This library forms the core of the HTTP modules. Since it is designed to be used from the minion as an execution module, in addition to the master as a runner, it was abstracted into this multi-use library. This library can also be imported by 3rd-party programs wishing to take advantage of its extended functionality.
Core functionality of the execution, state, and runner modules is derived from this library, so common usages between them are described here. Documentation specific to each module is described below.
This library can be imported with:
   import salt.utils.http
Configuring Libraries
This library can make use of either tornado, which is required by Salt, urllib2, which ships with Python, or requests, which can be installed separately. By default, tornado will be used.
In order to switch to urllib2, set the following variable:
   backend: urllib2
In order to switch to requests, set the following variable:
   backend: requests
This can be set in the master or minion configuration file, or passed as an option directly to any http.query() functions.
salt.utils.http.query()
This function forms a basic query, but with some add-ons not present in the tornado, urllib2, and requests libraries.
Not all functionality currently available in these libraries has been added, but can be in future iterations. A basic query can be performed by calling this function with no more than a single URL: salt.utils.http.query('http://example.com') By default the query will be performed with a GET method. The method can be overridden with the method argument: salt.utils.http.query('http://example.com/delete/url', 'DELETE') When using the POST method (and others, such as PUT), extra data is usually sent as well. This data can be sent directly, in whatever format is required by the remote server (XML, JSON, plain text, etc). salt.utils.http.query( 'http://example.com/delete/url', method='POST', data=json.loads(mydict) ) Bear in mind that this data must be sent pre-formatted; this function will not format it for you. However, a templated file stored on the local system may be passed through, along with variables to populate it with. To pass through only the file (untemplated): salt.utils.http.query( 'http://example.com/post/url', method='POST', data_file='/srv/salt/somefile.xml' ) To pass through a file that contains jinja + yaml templating (the default): salt.utils.http.query( 'http://example.com/post/url', method='POST', data_file='/srv/salt/somefile.jinja', data_render=True, template_data={'key1': 'value1', 'key2': 'value2'} ) To pass through a file that contains mako templating: salt.utils.http.query( 'http://example.com/post/url', method='POST', data_file='/srv/salt/somefile.mako', data_render=True, data_renderer='mako', template_data={'key1': 'value1', 'key2': 'value2'} ) Because this function uses Salt's own rendering system, any Salt renderer can be used. Because Salt's renderer requires __opts__ to be set, an opts dictionary should be passed in. If it is not, then the default __opts__ values for the node type (master or minion) will be used. Because this library is intended primarily for use by minions, the default node type is minion. However, this can be changed to master if necessary. salt.utils.http.query( 'http://example.com/post/url', method='POST', data_file='/srv/salt/somefile.jinja', data_render=True, template_data={'key1': 'value1', 'key2': 'value2'}, opts=__opts__ ) salt.utils.http.query( 'http://example.com/post/url', method='POST', data_file='/srv/salt/somefile.jinja', data_render=True, template_data={'key1': 'value1', 'key2': 'value2'}, node='master' ) Headers may also be passed through, either as a header_list, a header_dict, or as a header_file. As with the data_file, the header_file may also be templated. Take note that because HTTP headers are normally syntactically-correct YAML, they will automatically be imported as an a Python dict. salt.utils.http.query( 'http://example.com/delete/url', method='POST', header_file='/srv/salt/headers.jinja', header_render=True, header_renderer='jinja', template_data={'key1': 'value1', 'key2': 'value2'} ) Because much of the data that would be templated between headers and data may be the same, the template_data is the same for both. Correcting possible variable name collisions is up to the user. The query() function supports basic HTTP authentication. A username and password may be passed in as username and password, respectively. salt.utils.http.query( 'http://example.com', username='larry', password=`5700g3543v4r`, ) Cookies are also supported, using Python's built-in cookielib. However, they are turned off by default. To turn cookies on, set cookies to True. 
salt.utils.http.query( 'http://example.com', cookies=True ) By default cookies are stored in Salt's cache directory, normally /var/cache/salt, as a file called cookies.txt. However, this location may be changed with the cookie_jar argu‐ ment: salt.utils.http.query( 'http://example.com', cookies=True, cookie_jar='/path/to/cookie_jar.txt' ) By default, the format of the cookie jar is LWP (aka, lib-www-perl). This default was cho‐ sen because it is a human-readable text file. If desired, the format of the cookie jar can be set to Mozilla: salt.utils.http.query( 'http://example.com', cookies=True, cookie_jar='/path/to/cookie_jar.txt', cookie_format='mozilla' ) Because Salt commands are normally one-off commands that are piped together, this library cannot normally behave as a normal browser, with session cookies that persist across mul‐ tiple HTTP requests. However, the session can be persisted in a separate cookie jar. The default filename for this file, inside Salt's cache directory, is cookies.session.p. This can also be changed. salt.utils.http.query( 'http://example.com', persist_session=True, session_cookie_jar='/path/to/jar.p' ) The format of this file is msgpack, which is consistent with much of the rest of Salt's internal structure. Historically, the extension for this file is .p. There are no current plans to make this configurable. Return Data NOTE: Return data encoding If decode is set to True, query() will attempt to decode the return data. decode_type defaults to auto. Set it to a specific encoding, xml, for example, to override autode‐ tection. Because Salt's http library was designed to be used with REST interfaces, query() will attempt to decode the data received from the remote server when decode is set to True. First it will check the Content-type header to try and find references to XML. If it does not find any, it will look for references to JSON. If it does not find any, it will fall back to plain text, which will not be decoded. JSON data is translated into a dict using Python's built-in json library. XML is trans‐ lated using salt.utils.xml_util, which will use Python's built-in XML libraries to attempt to convert the XML into a dict. In order to force either JSON or XML decoding, the decode_type may be set: salt.utils.http.query( 'http://example.com', decode_type='xml' ) Once translated, the return dict from query() will include a dict called dict. If the data is not to be translated using one of these methods, decoding may be turned off. salt.utils.http.query( 'http://example.com', decode=False ) If decoding is turned on, and references to JSON or XML cannot be found, then this module will default to plain text, and return the undecoded data as text (even if text is set to False; see below). The query() function can return the HTTP status code, headers, and/or text as required. However, each must individually be turned on. salt.utils.http.query( 'http://example.com', status=True, headers=True, text=True ) The return from these will be found in the return dict as status, headers and text, respectively. Writing Return Data to Files It is possible to write either the return data or headers to files, as soon as the response is received from the server, but specifying file locations via the text_out or headers_out arguments. text and headers do not need to be returned to the user in order to do this. 
salt.utils.http.query( 'http://example.com', text=False, headers=False, text_out='/path/to/url_download.txt', headers_out='/path/to/headers_download.txt', ) SSL Verification By default, this function will verify SSL certificates. However, for testing or debugging purposes, SSL verification can be turned off. salt.utils.http.query( 'https://example.com', verify_ssl=False, ) CA Bundles The requests library has its own method of detecting which CA (certificate authority) bun‐ dle file to use. Usually this is implemented by the packager for the specific operating system distribution that you are using. However, urllib2 requires a little more work under the hood. By default, Salt will try to auto-detect the location of this file. However, if it is not in an expected location, or a different path needs to be specified, it may be done so using the ca_bundle variable. salt.utils.http.query( 'https://example.com', ca_bundle='/path/to/ca_bundle.pem', ) Updating CA Bundles The update_ca_bundle() function can be used to update the bundle file at a specified loca‐ tion. If the target location is not specified, then it will attempt to auto-detect the location of the bundle file. If the URL to download the bundle from does not exist, a bun‐ dle will be downloaded from the cURL website. CAUTION: The target and the source should always be specified! Failure to specify the tar‐ get may result in the file being written to the wrong location on the local system. Fail‐ ure to specify the source may cause the upstream URL to receive excess unnecessary traf‐ fic, and may cause a file to be download which is hazardous or does not meet the needs of the user. salt.utils.http.update_ca_bundle( target='/path/to/ca-bundle.crt', source='https://example.com/path/to/ca-bundle.crt', opts=__opts__, ) The opts parameter should also always be specified. If it is, then the target and the source may be specified in the relevant configuration file (master or minion) as ca_bundle and ca_bundle_url, respectively. ca_bundle: /path/to/ca-bundle.crt ca_bundle_url: https://example.com/path/to/ca-bundle.crt If Salt is unable to auto-detect the location of the CA bundle, it will raise an error. The update_ca_bundle() function can also be passed a string or a list of strings which represent files on the local system, which should be appended (in the specified order) to the end of the CA bundle file. This is useful in environments where private certs need to be made available, and are not otherwise reasonable to add to the bundle file. salt.utils.http.update_ca_bundle( opts=__opts__, merge_files=[ '/etc/ssl/private_cert_1.pem', '/etc/ssl/private_cert_2.pem', '/etc/ssl/private_cert_3.pem', ] ) Test Mode This function may be run in test mode. This mode will perform all work up until the actual HTTP request. By default, instead of performing the request, an empty dict will be returned. Using this function with TRACE logging turned on will reveal the contents of the headers and POST data to be sent. Rather than returning an empty dict, an alternate test_url may be passed in. If this is detected, then test mode will replace the url with the test_url, set test to True in the return data, and perform the rest of the requested operations as usual. This allows a cus‐ tom, non-destructive URL to be used for testing when necessary. Execution Module The http execution module is a very thin wrapper around the salt.utils.http library. The opts can be passed through as well, but if they are not specified, the minion defaults will be used as necessary. 
Because passing complete data structures from the command line can be tricky at best and dangerous (in terms of execution injection attacks) at worse, the data_file, and header_file are likely to see more use here. All methods for the library are available in the execution module, as kwargs. salt myminion http.query http://example.com/restapi method=POST \ username='larry' password='5700g3543v4r' headers=True text=True \ status=True decode_type=xml data_render=True \ header_file=/tmp/headers.txt data_file=/tmp/data.txt \ header_render=True cookies=True persist_session=True Runner Module Like the execution module, the http runner module is a very thin wrapper around the salt.utils.http library. The only significant difference is that because runners execute on the master instead of a minion, a target is not required, and default opts will be derived from the master config, rather than the minion config. All methods for the library are available in the runner module, as kwargs. salt-run http.query http://example.com/restapi method=POST \ username='larry' password='5700g3543v4r' headers=True text=True \ status=True decode_type=xml data_render=True \ header_file=/tmp/headers.txt data_file=/tmp/data.txt \ header_render=True cookies=True persist_session=True State Module The state module is a wrapper around the runner module, which applies stateful logic to a query. All kwargs as listed above are specified as usual in state files, but two more kwargs are available to apply stateful logic. A required parameter is match, which speci‐ fies a pattern to look for in the return text. By default, this will perform a string com‐ parison of looking for the value of match in the return text. In Python terms this looks like: if match in html_text: return True If more complex pattern matching is required, a regular expression can be used by specify‐ ing a match_type. By default this is set to string, but it can be manually set to pcre instead. Please note that despite the name, this will use Python's re.search() rather than re.match(). Therefore, the following states are valid: http://example.com/restapi: http.query: - match: 'SUCCESS' - username: 'larry' - password: '5700g3543v4r' - data_render: True - header_file: /tmp/headers.txt - data_file: /tmp/data.txt - header_render: True - cookies: True - persist_session: True http://example.com/restapi: http.query: - match_type: pcre - match: '(?i)succe[ss|ed]' - username: 'larry' - password: '5700g3543v4r' - data_render: True - header_file: /tmp/headers.txt - data_file: /tmp/data.txt - header_render: True - cookies: True - persist_session: True In addition to, or instead of a match pattern, the status code for a URL can be checked. This is done using the status argument: http://example.com/: http.query: - status: '200' If both are specified, both will be checked, but if only one is True and the other is False, then False will be returned. In this case, the comments in the return data will contain information for troubleshooting. Because this is a monitoring state, it will return extra data to code that expects it. This data will always include text and status. Optionally, headers and dict may also be requested by setting the headers and decode arguments to True, respectively. LXC Management with Salt NOTE: This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough. 
Dependencies

Manipulation of LXC containers in Salt requires the minion to have an LXC version of at least 1.0 (an alpha or beta release of LXC 1.0 is acceptable). The following distributions are known to have new enough versions of LXC packaged:

· RHEL/CentOS 6 and later (via EPEL)
· Fedora (all non-EOL releases)
· Debian 8.0 (Jessie)
· Ubuntu 14.04 LTS and later (LXC templates are packaged separately as lxc-templates; it is recommended to also install this package)
· openSUSE 13.2 and later

Profiles

Profiles provide a shorthand for commonly-used configurations, which can be defined in the minion config file, grains, pillar, or the master config file. The profile is retrieved by Salt using the config.get function, which looks in those locations, in that order. This allows profiles to be defined centrally in the master config file, with several options for overriding them (if necessary) on groups of minions or individual minions.

There are two types of profiles:

· One for defining the parameters used in container creation/cloning.
· One for defining the container's network interface(s) settings.

Container Profiles

LXC container profiles are defined underneath the lxc.container_profile config option:

    lxc.container_profile:
      centos:
        template: centos
        backing: lvm
        vgname: vg1
        lvname: lxclv
        size: 10G
      centos_big:
        template: centos
        backing: lvm
        vgname: vg1
        lvname: lxclv
        size: 20G

Profiles are retrieved using the config.get function, with the recurse merge strategy. This means that a profile can be defined at a lower level (for example, the master config file) and then parts of it can be overridden at a higher level (for example, in pillar data). Consider the following container profile data:

In the Master config file:

    lxc.container_profile:
      centos:
        template: centos
        backing: lvm
        vgname: vg1
        lvname: lxclv
        size: 10G

In the Pillar data:

    lxc.container_profile:
      centos:
        size: 20G

Any minion with the above Pillar data would have the size parameter in the centos profile overridden to 20G, while those minions without the above Pillar data would have the 10G size value. This is another way of achieving the same result as the centos_big profile above, without having to define another whole profile that differs in just one value.

NOTE: In the 2014.7.x release cycle and earlier, container profiles are defined under lxc.profile. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note, however, that the profile merging feature described above will only work with profiles defined under lxc.container_profile, and only in versions 2015.5.0 and later.

Additionally, in version 2015.5.0 container profiles have been expanded to support passing template-specific CLI options to lxc.create.
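The effect of the recurse merge strategy can be illustrated with a small standalone sketch. This is plain Python and only an illustration of the behaviour described above (it is not Salt's actual config.get implementation); the profile data is the example data from this section.

    def recurse_merge(base, overrides):
        # recursively merge 'overrides' on top of 'base', keeping any keys
        # that are not overridden
        merged = dict(base)
        for key, val in overrides.items():
            if isinstance(val, dict) and isinstance(merged.get(key), dict):
                merged[key] = recurse_merge(merged[key], val)
            else:
                merged[key] = val
        return merged

    master_config = {'centos': {'template': 'centos', 'backing': 'lvm',
                                'vgname': 'vg1', 'lvname': 'lxclv', 'size': '10G'}}
    pillar_data = {'centos': {'size': '20G'}}

    print(recurse_merge(master_config, pillar_data))
    # {'centos': {'template': 'centos', 'backing': 'lvm', 'vgname': 'vg1',
    #             'lvname': 'lxclv', 'size': '20G'}}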
Below is a table describing the parameters which can be configured in container profiles:

    ┌──────────┬────────────────────┬──────────────────────┐
    │Parameter │ 2015.5.0 and Newer │ 2014.7.x and Earlier │
    ├──────────┼────────────────────┼──────────────────────┤
    │template1 │ Yes                │ Yes                  │
    ├──────────┼────────────────────┼──────────────────────┤
    │options1  │ Yes                │ No                   │
    ├──────────┼────────────────────┼──────────────────────┤
    │image1    │ Yes                │ Yes                  │
    ├──────────┼────────────────────┼──────────────────────┤
    │backing   │ Yes                │ Yes                  │
    ├──────────┼────────────────────┼──────────────────────┤
    │snapshot2 │ Yes                │ Yes                  │
    ├──────────┼────────────────────┼──────────────────────┤
    │lvname1   │ Yes                │ Yes                  │
    ├──────────┼────────────────────┼──────────────────────┤
    │fstype1   │ Yes                │ Yes                  │
    ├──────────┼────────────────────┼──────────────────────┤
    │size      │ Yes                │ Yes                  │
    └──────────┴────────────────────┴──────────────────────┘

1. Parameter is only supported for container creation, and will be ignored if the profile is used when cloning a container.
2. Parameter is only supported for container cloning, and will be ignored if the profile is used when not cloning a container.

Network Profiles

LXC network profiles are defined underneath the lxc.network_profile config option. By default, the module uses a DHCP-based configuration and tries to guess a bridge to get connectivity.

WARNING: In versions before 2015.5.2, the network bridge must be specified explicitly.

    lxc.network_profile:
      centos:
        eth0:
          link: br0
          type: veth
          flags: up
      ubuntu:
        eth0:
          link: lxcbr0
          type: veth
          flags: up

As with container profiles, network profiles are retrieved using the config.get function, with the recurse merge strategy. Consider the following network profile data:

In the Master config file:

    lxc.network_profile:
      centos:
        eth0:
          link: br0
          type: veth
          flags: up

In the Pillar data:

    lxc.network_profile:
      centos:
        eth0:
          link: lxcbr0

Any minion with the above Pillar data would use the lxcbr0 interface as the bridge interface for any container configured using the centos network profile, while those minions without the above Pillar data would use the br0 interface for the same.

NOTE: In the 2014.7.x release cycle and earlier, network profiles are defined under lxc.nic. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note, however, that the profile merging feature described above will only work with profiles defined under lxc.network_profile, and only in versions 2015.5.0 and later.

The following parameters can be configured in network profiles. They directly correspond to a parameter in an LXC configuration file (see man 5 lxc.container.conf).

· type - Corresponds to lxc.network.type
· link - Corresponds to lxc.network.link
· flags - Corresponds to lxc.network.flags

Interface-specific options (MAC address, IPv4/IPv6, etc.) must be passed on a container-by-container basis, for instance using the nic_opts argument to lxc.create:

    salt myminion lxc.create container1 profile=centos network_profile=centos \
        nic_opts='{eth0: {ipv4: 10.0.0.20/24, gateway: 10.0.0.1}}'

WARNING: The ipv4, ipv6, gateway, and link (bridge) settings in network profiles / nic_opts will only work if the container doesn't redefine the network configuration (for example in /etc/sysconfig/network-scripts/ifcfg-<interface_name> on RHEL/CentOS, or /etc/network/interfaces on Debian/Ubuntu/etc.). Use these with caution.
The container images installed using the download template, for instance, typically come configured for eth0 to use DHCP, which will conflict with static IP addresses set at the container level.

NOTE: For LXC < 1.0.7 and DHCP support, set ipv4.gateway: 'auto' in your network profile, e.g.:

    lxc.network_profile.nic:
      debian:
        eth0:
          link: lxcbr0
          ipv4.gateway: 'auto'

Old LXC support (< 1.0.7)

With Salt 2015.5.2 and above this setting is normally selected automatically, but on earlier versions you'll need to teach your network profile to set lxc.network.ipv4.gateway to auto when using a classic IPv4 configuration. Thus you'll need:

    lxc.network_profile.foo:
      eth0:
        link: lxcbr0
        ipv4.gateway: auto

Tricky network setups

Examples

This example covers how to make a container with both an internal IP and a publicly routable IP, wired on two veth pairs. The second interface, which directly receives the publicly routable IP, cannot be the first interface, which is reserved for private inter-LXC networking.

    lxc.network_profile.foo:
      eth0: {gateway: null, bridge: lxcbr0}
      eth1:
        # replace that by your main interface
        'link': 'br0'
        'mac': '00:16:5b:01:24:e1'
        'gateway': '2.20.9.14'
        'ipv4': '2.20.9.1'

Creating a Container on the CLI

From a Template

LXC is commonly distributed with several template scripts in /usr/share/lxc/templates. Some distros may package these separately in an lxc-templates package, so make sure to check if this is the case.

There are LXC template scripts for several different operating systems, but some of them are designed to use tools specific to a given distribution. For instance, the ubuntu template uses debootstrap, the centos template uses yum, etc., making these templates impractical when a container from a different OS is desired.

The lxc.create function is used to create containers using a template script. To create a CentOS container named container1 on a CentOS minion named mycentosminion, using the centos LXC template, one can simply run the following command:

    salt mycentosminion lxc.create container1 template=centos

For these instances, there is a download template which retrieves minimal container images for several different operating systems. To use this template, it is necessary to provide an options parameter when creating the container, with three values:

1. dist - the Linux distribution (i.e. ubuntu or centos)
2. release - the release name/version (i.e. trusty or 6)
3. arch - CPU architecture (i.e. amd64 or i386)

The lxc.images function (new in version 2015.5.0) can be used to list the available images. Alternatively, the releases can be viewed on http://images.linuxcontainers.org/images/. The images are organized in such a way that the dist, release, and arch can be determined using the following URL format: http://images.linuxcontainers.org/images/dist/release/arch. For example, http://images.linuxcontainers.org/images/centos/6/amd64 would correspond to a dist of centos, a release of 6, and an arch of amd64.

Therefore, to use the download template to create a new 64-bit CentOS 6 container, the following command can be used:

    salt myminion lxc.create container1 template=download \
        options='{dist: centos, release: 6, arch: amd64}'

NOTE: These command-line options can be placed into a container profile, like so:

    lxc.container_profile.cent6:
      template: download
      options:
        dist: centos
        release: 6
        arch: amd64

The options parameter is not supported in profiles for the 2014.7.x release cycle and earlier, so it would still need to be provided on the command-line.
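For completeness, the same download-template creation can be driven from Salt's Python client API. This is only a sketch: the minion name is a placeholder, it assumes salt.client.LocalClient is run on the master, and the kwargs simply mirror the CLI example above.

    # sketch only: 'myminion' is a placeholder minion ID
    import salt.client

    client = salt.client.LocalClient()
    ret = client.cmd(
        'myminion',
        'lxc.create',
        arg=['container1'],
        kwarg={'template': 'download',
               'options': {'dist': 'centos', 'release': '6', 'arch': 'amd64'}},
    )
    print(ret)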
Cloning an Existing Container To clone a container, use the lxc.clone function: salt myminion lxc.clone container2 orig=container1 Using a Container Image While cloning is a good way to create new containers from a common base container, the source container that is being cloned needs to already exist on the minion. This makes deploying a common container across minions difficult. For this reason, Salt's lxc.create is capable of installing a container from a tar archive of another container's rootfs. To create an image of a container named cent6, run the following command as root: tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs NOTE: Before doing this, it is recommended that the container is stopped. The resulting tarball can then be placed alongside the files in the salt fileserver and referenced using a salt:// URL. To create a container using an image, use the image param‐ eter with lxc.create: salt myminion lxc.create new-cent6 image=salt://path/to/cent6.tar.gz NOTE: Making images of containers with LVM backing For containers with LVM backing, the rootfs is not mounted, so it is necessary to mount it first before creating the tar archive. When a container is created using LVM back‐ ing, an empty rootfs dir is handily created within /var/lib/lxc/container_name, so this can be used as the mountpoint. The location of the logical volume for the container will be /dev/vgname/lvname, where vgname is the name of the volume group, and lvname is the name of the logical volume. Therefore, assuming a volume group of vg1, a logical volume of lxc-cent6, and a container name of cent6, the following commands can be used to create a tar archive of the rootfs: mount /dev/vg1/lxc-cent6 /var/lib/lxc/cent6/rootfs tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs umount /var/lib/lxc/cent6/rootfs WARNING: One caveat of using this method of container creation is that /etc/hosts is left unmod‐ ified. This could cause confusion for some distros if salt-minion is later installed on the container, as the functions that determine the hostname take /etc/hosts into account. Additionally, when creating an rootfs image, be sure to remove /etc/salt/minion_id and make sure that id is not defined in /etc/salt/minion, as this will cause similar issues. Initializing a New Container as a Salt Minion The above examples illustrate a few ways to create containers on the CLI, but often it is desirable to also have the new container run as a Minion. To do this, the lxc.init func‐ tion can be used. This function will do the following: 1. Create a new container 2. Optionally set password and/or DNS 3. Bootstrap the minion (using either salt-bootstrap or a custom command) By default, the new container will be pointed at the same Salt Master as the host machine on which the container was created. It will then request to authenticate with the Master like any other bootstrapped Minion, at which point it can be accepted. salt myminion lxc.init test1 profile=centos salt-key -a test1 For even greater convenience, the LXC runner contains a runner function of the same name (lxc.init), which creates a keypair, seeds the new minion with it, and pre-accepts the key, allowing for the new Minion to be created and authorized in a single step: salt-run lxc.init test1 host=myminion profile=centos Running Commands Within a Container For containers which are not running their own Minion, commands can be run within the con‐ tainer in a manner similar to using (cmd.run <salt.modules.cmdmod.run). 
The means of doing this have been changed significantly in version 2015.5.0 (though the deprecated behavior will still be supported for a few releases). Both the old and new usage are documented below. 2015.5.0 and Newer New functions have been added to mimic the behavior of the functions in the cmd module. Below is a table with the cmd functions and their lxc module equivalents: ┌──────────────────────┬────────────────┬────────────────┐ │Description │ cmd module │ lxc module │ ├──────────────────────┼────────────────┼────────────────┤ │Run a command and get │ cmd.run │ lxc.run │ │all output │ │ │ ├──────────────────────┼────────────────┼────────────────┤ │Run a command and get │ cmd.run_stdout │ lxc.run_stdout │ │just stdout │ │ │ ├──────────────────────┼────────────────┼────────────────┤ │Run a command and get │ cmd.run_stderr │ lxc.run_stderr │ │just stderr │ │ │ ├──────────────────────┼────────────────┼────────────────┤ │Run a command and get │ cmd.retcode │ lxc.retcode │ │just the retcode │ │ │ ├──────────────────────┼────────────────┼────────────────┤ │Run a command and get │ cmd.run_all │ lxc.run_all │ │all information │ │ │ └──────────────────────┴────────────────┴────────────────┘ 2014.7.x and Earlier Earlier Salt releases use a single function (lxc.run_cmd) to run commands within contain‐ ers. Whether stdout, stderr, etc. are returned depends on how the function is invoked. To run a command and return the stdout: salt myminion lxc.run_cmd web1 'tail /var/log/messages' To run a command and return the stderr: salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=True To run a command and return the retcode: salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=False To run a command and return all information: salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=True stderr=True Container Management Using salt-cloud Salt cloud uses under the hood the salt runner and module to manage containers, Please look at this chapter Container Management Using States Several states are being renamed or otherwise modified in version 2015.5.0. The informa‐ tion in this tutorial refers to the new states. For 2014.7.x and earlier, please refer to the documentation for the LXC states. Ensuring a Container Is Present To ensure the existence of a named container, use the lxc.present state. Here are some examples: # Using a template web1: lxc.present: - template: download - options: dist: centos release: 6 arch: amd64 # Cloning web2: lxc.present: - clone_from: web-base # Using a rootfs image web3: lxc.present: - image: salt://path/to/cent6.tar.gz # Using profiles web4: lxc.present: - profile: centos_web - network_profile: centos WARNING: The lxc.present state will not modify an existing container (in other words, it will not re-create the container). If an lxc.present state is run on an existing container, there will be no change and the state will return a True result. The lxc.present state also includes an optional running parameter which can be used to ensure that a container is running/stopped. Note that there are standalone lxc.running and lxc.stopped states which can be used for this purpose. Ensuring a Container Does Not Exist To ensure that a named container is not present, use the lxc.absent state. 
For example: web1: lxc.absent Ensuring a Container is Running/Stopped/Frozen Containers can be in one of three states: · running - Container is running and active · frozen - Container is running, but all process are blocked and the container is essen‐ tially non-active until the container is "unfrozen" · stopped - Container is not running Salt has three states (lxc.running, lxc.frozen, and lxc.stopped) which can be used to ensure a container is in one of these states: web1: lxc.running # Restart the container if it was already running web2: lxc.running: - restart: True web3: lxc.stopped # Explicitly kill all tasks in container instead of gracefully stopping web4: lxc.stopped: - kill: True web5: lxc.frozen # If container is stopped, do not start it (in which case the state will fail) web6: lxc.frozen: - start: False Using Salt with Stormpath Stormpath is a user management and authentication service. This tutorial covers using SaltStack to manage and take advantage of Stormpath's features. External Authentication Stormpath can be used for Salt's external authentication system. In order to do this, the master should be configured with an apiid, apikey, and the ID of the application that is associated with the users to be authenticated: stormpath: apiid: 367DFSF4FRJ8767FSF4G34FGH apikey: FEFREF43t3FEFRe/f323fwer4FWF3445gferWRWEer1 application: 786786FREFrefreg435fr1 NOTE: These values can be found in the Stormpath dashboard <https://api.storm‐ path.com/ui2/index.html#/>`_. Users that are to be authenticated should be set up under the stormpath dict under exter‐ nal_auth: external_auth: stormpath: larry: - .* - '@runner' - '@wheel' Keep in mind that while Stormpath defaults the username associated with the account to the email address, it is better to use a username without an @ sign in it. Configuring Stormpath Modules Stormpath accounts can be managed via either an execution or state module. In order to use either, a minion must be configured with an API ID and key. stormpath: apiid: 367DFSF4FRJ8767FSF4G34FGH apikey: FEFREF43t3FEFRe/f323fwer4FWF3445gferWRWEer1 directory: efreg435fr1786786FREFr application: 786786FREFrefreg435fr1 Some functions in the stormpath modules can make use of other options. The following options are also available. directory The ID of the directory that is to be used with this minion. Many functions require an ID to be specified to do their work. However, if the ID of a directory is specified, then Salt can often look up the resource in question. application The ID of the application that is to be used with this minion. Many functions require an ID to be specified to do their work. However, if the ID of a application is specified, then Salt can often look up the resource in question. Managing Stormpath Accounts With the stormpath configuration in place, Salt can be used to configure accounts (which may be thought of as users) on the Stormpath service. The following functions are avail‐ able. stormpath.create_account Create an account on the Stormpath service. This requires a directory_id as the first argument; it will not be retrieved from the minion configuration. An email address, pass‐ word, first name (givenName) and last name (surname) are also required. For the full list of other parameters that may be specified, see: http://docs.stormpath.com/rest/product-guide/#account-resource When executed with no errors, this function will return the information about the account, from Stormpath. 
    salt myminion stormpath.create_account <directory_id> @example.com letmein Shemp Howard

stormpath.list_accounts

Show all accounts on the Stormpath service. This will return all accounts, regardless of directory, application, or group.

    salt myminion stormpath.list_accounts

stormpath.show_account

Show the details for a specific Stormpath account. An account_id is normally required. However, if an email is provided instead, along with either a directory_id, application_id, or group_id, then Salt will search the specified resource to try and locate the account_id.

    salt myminion stormpath.show_account <account_id>
    salt myminion stormpath.show_account email=<email> directory_id=<directory_id>

stormpath.update_account

Update one or more items for this account. Specifying an empty value will clear it for that account. This function may be used in one of two ways. In order to update only one key/value pair, specify them in order:

    salt myminion stormpath.update_account <account_id> givenName shemp
    salt myminion stormpath.update_account <account_id> middleName ''

In order to specify multiple items, they need to be passed in as a dict. From the command line, it is best to do this as a JSON string:

    salt myminion stormpath.update_account <account_id> items='{"givenName": "Shemp"}'
    salt myminion stormpath.update_account <account_id> items='{"middlename": ""}'

When executed with no errors, this function will return the account information from Stormpath.

stormpath.delete_account

Delete an account from Stormpath.

    salt myminion stormpath.delete_account <account_id>

stormpath.list_directories

Show all directories associated with this tenant.

    salt myminion stormpath.list_directories

Using Stormpath States

Stormpath resources may be managed using the state system. The following states are available.

stormpath_account.present

Ensure that an account exists on the Stormpath service. All options that are available with the stormpath.create_account function are available here. If an account needs to be created, then this function will require the same fields that stormpath.create_account requires, including the password. However, if a password changes for an existing account, it will NOT be updated by this state.

    @example.com:
      stormpath_account.present:
        - directory_id: efreg435fr1786786FREFr
        - password: badpass
        - firstName: Curly
        - surname: Howard
        - nickname: curly

It is advisable to always set a nickname that is not also an email address, so that it can be used by Salt's external authentication module.

stormpath_account.absent

Ensure that an account does not exist on Stormpath. As with stormpath_account.present, the name supplied to this state is the email address associated with this account. Salt will use this, with or without the directory ID that is configured for the minion. However, lookups will be much faster with a directory ID specified.

Salt Virt

Salt as a Cloud Controller

In Salt 0.14.0, an advanced cloud control system was introduced, allowing private cloud VMs to be managed directly with Salt. This system is generally referred to as Salt Virt. The Salt Virt system already exists and is installed within Salt itself; this means that besides setting up Salt, no additional Salt code needs to be deployed. The main goal of Salt Virt is to facilitate a very fast and simple cloud that can scale and is fully featured.
Salt Virt comes with the ability to set up and manage com‐ plex virtual machine networking, powerful image, and disk management, as well as virtual machine migration with and without shared storage. This means that Salt Virt can be used to create a cloud from a blade center and a SAN, but can also create a cloud out of a swarm of Linux Desktops without a single shared storage system. Salt Virt can make clouds from truly commodity hardware, but can also stand up the power of specialized hardware as well. Setting up Hypervisors The first step to set up the hypervisors involves getting the correct software installed and setting up the hypervisor network interfaces. Installing Hypervisor Software Salt Virt is made to be hypervisor agnostic but currently the only fully implemented hypervisor is KVM via libvirt. The required software for a hypervisor is libvirt and kvm. For advanced features install libguestfs or qemu-nbd. NOTE: Libguestfs and qemu-nbd allow for virtual machine images to be mounted before startup and get pre-seeded with configurations and a salt minion This sls will set up the needed software for a hypervisor, and run the routines to set up the libvirt pki keys. NOTE: Package names and setup used is Red Hat specific, different package names will be required for different platforms libvirt: pkg.installed: [] file.managed: - name: /etc/sysconfig/libvirtd - contents: 'LIBVIRTD_ARGS="--listen"' - require: - pkg: libvirt libvirt.keys: - require: - pkg: libvirt service.running: - name: libvirtd - require: - pkg: libvirt - network: br0 - libvirt: libvirt - watch: - file: libvirt libvirt-python: pkg.installed: [] libguestfs: pkg.installed: - pkgs: - libguestfs - libguestfs-tools Hypervisor Network Setup The hypervisors will need to be running a network bridge to serve up network devices for virtual machines, this formula will set up a standard bridge on a hypervisor connecting the bridge to eth0: eth0: network.managed: - enabled: True - type: eth - bridge: br0 br0: network.managed: - enabled: True - type: bridge - proto: dhcp - require: - network: eth0 Virtual Machine Network Setup Salt Virt comes with a system to model the network interfaces used by the deployed virtual machines; by default a single interface is created for the deployed virtual machine and is bridged to br0. To get going with the default networking setup, ensure that the bridge interface named br0 exists on the hypervisor and is bridged to an active network device. NOTE: To use more advanced networking in Salt Virt, read the Salt Virt Networking document: Salt Virt Networking Libvirt State One of the challenges of deploying a libvirt based cloud is the distribution of libvirt certificates. These certificates allow for virtual machine migration. Salt comes with a system used to auto deploy these certificates. Salt manages the signing authority key and generates keys for libvirt clients on the master, signs them with the certificate author‐ ity and uses pillar to distribute them. This is managed via the libvirt state. Simply exe‐ cute this formula on the minion to ensure that the certificate is in place and up to date: NOTE: The above formula includes the calls needed to set up libvirt keys. libvirt_keys: libvirt.keys Getting Virtual Machine Images Ready Salt Virt, requires that virtual machine images be provided as these are not generated on the fly. Generating these virtual machine images differs greatly based on the underlying platform. 
Virtual machine images can be manually created using KVM and running through the install‐ er, but this process is not recommended since it is very manual and prone to errors. Virtual Machine generation applications are available for many platforms: vm-builder: https://wiki.debian.org/VMBuilder SEE ALSO: vmbuilder-formula Once virtual machine images are available, the easiest way to make them available to Salt Virt is to place them in the Salt file server. Just copy an image into /srv/salt and it can now be used by Salt Virt. For purposes of this demo, the file name centos.img will be used. Existing Virtual Machine Images Many existing Linux distributions distribute virtual machine images which can be used with Salt Virt. Please be advised that NONE OF THESE IMAGES ARE SUPPORTED BY SALTSTACK. CentOS These images have been prepared for OpenNebula but should work without issue with Salt Virt, only the raw qcow image file is needed: http://wiki.centos.org/Cloud/OpenNebula Fedora Linux Images for Fedora Linux can be found here: http://fedoraproject.org/en/get-fedora#clouds Ubuntu Linux Images for Ubuntu Linux can be found here: http://cloud-images.ubuntu.com/ Using Salt Virt With hypervisors set up and virtual machine images ready, Salt can start issuing cloud commands. Start by running a Salt Virt hypervisor info command: salt-run virt.hyper_info This will query what the running hypervisor stats are and display information for all con‐ figured hypervisors. This command will also validate that the hypervisors are properly configured. Now that hypervisors are available a virtual machine can be provisioned. The virt.init routine will create a new virtual machine: salt-run virt.init centos1 2 512 salt://centos.img This command assumes that the CentOS virtual machine image is sitting in the root of the Salt fileserver. Salt Virt will now select a hypervisor to deploy the new virtual machine on and copy the virtual machine image down to the hypervisor. Once the VM image has been copied down the new virtual machine will be seeded. Seeding the VMs involves setting pre-authenticated Salt keys on the new VM and if needed, will install the Salt Minion on the new VM before it is started. NOTE: The biggest bottleneck in starting VMs is when the Salt Minion needs to be installed. Making sure that the source VM images already have Salt installed will GREATLY speed up virtual machine deployment. Now that the new VM has been prepared, it can be seen via the virt.query command: salt-run virt.query This command will return data about all of the hypervisors and respective virtual machines. Now that the new VM is booted it should have contacted the Salt Master, a test.ping will reveal if the new VM is running. Migrating Virtual Machines Salt Virt comes with full support for virtual machine migration, and using the libvirt state in the above formula makes migration possible. A few things need to be available to support migration. Many operating systems turn on firewalls when originally set up, the firewall needs to be opened up to allow for libvirt and kvm to cross communicate and execution migration routines. On Red Hat based hypervi‐ sors in particular port 16514 needs to be opened on hypervisors: iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 16514 -j ACCEPT NOTE: More in-depth information regarding distribution specific firewall settings can read in: Opening the Firewall up for Salt Salt also needs an additional flag to be turned on as well. The virt.tunnel option needs to be turned on. 
This flag tells Salt to run migrations securely via the libvirt TLS tun‐ nel and to use port 16514. Without virt.tunnel libvirt tries to bind to random ports when running migrations. To turn on virt.tunnel simple apply it to the master config file: virt.tunnel: True Once the master config has been updated, restart the master and send out a call to the minions to refresh the pillar to pick up on the change: salt \* saltutil.refresh_modules Now, migration routines can be run! To migrate a VM, simply run the Salt Virt migrate rou‐ tine: salt-run virt.migrate centos <new hypervisor> VNC Consoles Salt Virt also sets up VNC consoles by default, allowing for remote visual consoles to be oped up. The information from a virt.query routine will display the vnc console port for the specific vms: centos CPU: 2 Memory: 524288 State: running Graphics: vnc - hyper6:5900 Disk - vda: Size: 2.0G File: /srv/salt-images/ubuntu2/system.qcow2 File Format: qcow2 Nic - ac:de:48:98:08:77: Source: br0 Type: bridge The line Graphics: vnc - hyper6:5900 holds the key. First the port named, in this case 5900, will need to be available in the hypervisor's firewall. Once the port is open, then the console can be easily opened via vncviewer: vncviewer hyper6:5900 By default there is no VNC security set up on these ports, which suggests that keeping them firewalled and mandating that SSH tunnels be used to access these VNC interfaces. Keep in mind that activity on a VNC interface that is accessed can be viewed by any other user that accesses that same VNC interface, and any other user logging in can also operate with the logged in user on the virtual machine. Conclusion Now with Salt Virt running, new hypervisors can be seamlessly added just by running the above states on new bare metal machines, and these machines will be instantly available to Salt Virt. LXC ESXi Proxy Minion ESXi Proxy Minion New in version 2015.8.4. NOTE: This tutorial assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough. This tutorial also assumes a basic understanding of Salt Proxy Minions. If you're unfa‐ miliar with Salt's Proxy Minion system, please read the Salt Proxy Minion documentation and the Salt Proxy Minion End-to-End Example tutorial. The third assumption that this tutorial makes is that you also have a basic understand‐ ing of ESXi hosts. You can learn more about ESXi hosts on VMware's various resources. Salt's ESXi Proxy Minion allows a VMware ESXi host to be treated as an individual Salt Minion, without installing a Salt Minion on the ESXi host. Since an ESXi host may not necessarily run on an OS capable of hosting a Python stack, the ESXi host can't run a regular Salt Minion directly. Therefore, Salt's Proxy Minion func‐ tionality enables you to designate another machine to host a proxy process that "proxies" communication from the Salt Master to the ESXi host. The master does not know or care that the ESXi target is not a "real" Salt Minion. More in-depth conceptual reading on Proxy Minions can be found in the Proxy Minion section of Salt's documentation. Salt's ESXi Proxy Minion was added in the 2015.8.4 release of Salt. NOTE: Be aware that some functionality for the ESXi Proxy Minion may depend on the type of license attached the ESXi host(s). For example, certain services are only available to manipulate service state or poli‐ cies with a VMware vSphere Enterprise or Enterprise Plus license, while others are available with a Standard license. 
The ntpd service is restricted to an Enterprise Plus license, while ssh is available via the Standard license. Please see the vSphere Comparison page for more information. Dependencies Manipulation of the ESXi host via a Proxy Minion requires the machine running the Proxy Minion process to have the ESXCLI package (and all of it's dependencies) and the pyVmomi Python Library to be installed. ESXi Password The ESXi Proxy Minion uses VMware's API to perform tasks on the host as if it was a regu‐ lar Salt Minion. In order to access the API that is already running on the ESXi host, the ESXi host must have a username and password that is used to log into the host. The user‐ name is usually root. Before Salt can access the ESXi host via VMware's API, a default password must be set on the host. pyVmomi The pyVmomi Python library must be installed on the machine that is running the proxy process. pyVmomi can be installed via pip: pip install pyVmomi NOTE: Version 6.0 of pyVmomi has some problems with SSL error handling on certain versions of Python. If using version 6.0 of pyVmomi, the machine that you are running the proxy minion process from must have either Python 2.6, Python 2.7.9, or newer. This is due to an upstream dependency in pyVmomi 6.0 that is not supported in Python version 2.7 to 2.7.8. If the version of Python running the proxy process is not in the supported range, you will need to install an earlier version of pyVmomi. See Issue #29537 for more information. Based on the note above, to install an earlier version of pyVmomi than the version cur‐ rently listed in PyPi, run the following: pip install pyVmomi==5.5.0.2014.1.1 The 5.5.0.2014.1.1 is a known stable version that the original ESXi Proxy Minion was developed against. ESXCLI Currently, about a third of the functions used for the ESXi Proxy Minion require the ESX‐ CLI package be installed on the machine running the Proxy Minion process. The ESXCLI package is also referred to as the VMware vSphere CLI, or vCLI. VMware provides vCLI package installation instructions for vSphere 5.5 and vSphere 6.0. Once all of the required dependencies are in place and the vCLI package is installed, you can check to see if you can connect to your ESXi host by running the following command: esxcli -s <host-location> -u <username> -p <password> system syslog config get If the connection was successful, ESXCLI was successfully installed on your system. You should see output related to the ESXi host's syslog configuration. Configuration There are several places where various configuration values need to be set in order for the ESXi Proxy Minion to run and connect properly. Proxy Config File On the machine that will be running the Proxy Minon process(es), a proxy config file must be in place. This file should be located in the /etc/salt/ directory and should be named proxy. If the file is not there by default, create it. This file should contain the location of your Salt Master that the Salt Proxy will connect to. NOTE: If you're running your ESXi Proxy Minion on version of Salt that is 2015.8.4 or newer, you also need to set add_proxymodule_to_opts: False in your proxy config file. The need to specify this configuration will be removed with Salt Boron, the next major feature release. See the New in 2015.8.2 section of the Proxy Minion documentation for more information. Example Proxy Config File: # /etc/salt/proxy master: <salt-master-location> add_proxymodule_to_opts: False Pillar Profiles Proxy minions get their configuration from Salt's Pillar. 
Every proxy must have a stanza in Pillar and a reference in the Pillar top-file that matches the Proxy ID. At a minimum for communication with the ESXi host, the pillar should look like this: proxy: proxytype: esxi host: <ip or dns name of esxi host> username: <ESXi username> passwords: - first_password - second_password - third_password Some other optional settings are protocol and port. These can be added to the pillar con‐ figuration. proxytype The proxytype key and value pair is critical, as it tells Salt which interface to load from the proxy directory in Salt's install hierarchy, or from /srv/salt/_proxy on the Salt Master (if you have created your own proxy module, for example). To use this ESXi Proxy Module, set this to esxi. host The location, or ip/dns, of the ESXi host. Required. username The username used to login to the ESXi host, such as root. Required. passwords A list of passwords to be used to try and login to the ESXi host. At least one password in this list is required. The proxy integration will try the passwords listed in order. It is configured this way so you can have a regular password and the password you may be updating for an ESXi host either via the vsphere.update_host_password execution module function or via the esxi.password_present state function. This way, after the password is changed, you should not need to restart the proxy minion--it should just pick up the the new password provided in the list. You can then change pillar at will to move that password to the front and retire the unused ones. Use-case/reasoning for using a list of passwords: You are setting up an ESXi host for the first time, and the host comes with a default password. You know that you'll be changing this password during your initial setup from the default to a new password. If you only have one password option, and if you have a state changing the password, any remote execu‐ tion commands or states that run after the password change will not be able to run on the host until the password is updated in Pillar and the Proxy Minion process is restarted. This allows you to use any number of potential fallback passwords. NOTE: When a password is changed on the host to one in the list of possible passwords, the further down on the list the password is, the longer individual commands will take to return. This is due to the nature of pyVmomi's login system. We have to wait for the first attempt to fail before trying the next password on the list. This scenario is especially true, and even slower, when the proxy minion first starts. If the correct password is not the first password on the list, it may take up to a minute for test.ping to respond with a True result. Once the initial authorization is complete, the responses for commands will be a little faster. To avoid these longer waiting periods, SaltStack recommends moving the correct password to the top of the list and restarting the proxy minion at your earliest convenience. protocol If the ESXi host is not using the default protocol, set this value to an alternate proto‐ col. Default is https. For example: port If the ESXi host is not using the default port, set this value to an alternate port. Default is 443. Example Configuration Files An example of all of the basic configurations that need to be in place before starting the Proxy Minion processes includes the Proxy Config File, Pillar Top File, and any individual Proxy Minion Pillar files. In this example, we'll assuming there are two ESXi hosts to connect to. 
Therefore, we'll be creating two Proxy Minion config files, one config for each ESXi host.

Proxy Config File:

    # /etc/salt/proxy
    master: <salt-master-location>
    add_proxymodule_to_opts: False

Pillar Top File:

    # /srv/pillar/top.sls
    base:
      'esxi-1':
        - esxi-1
      'esxi-2':
        - esxi-2

Pillar Config File for the first ESXi host, esxi-1:

    # /srv/pillar/esxi-1.sls
    proxy:
      proxytype: esxi
      host: esxi-1.example.com
      username: 'root'
      passwords:
        - bad-password-1
        - backup-bad-password-1

Pillar Config File for the second ESXi host, esxi-2:

    # /srv/pillar/esxi-2.sls
    proxy:
      proxytype: esxi
      host: esxi-2.example.com
      username: 'root'
      passwords:
        - bad-password-2
        - backup-bad-password-2

Starting the Proxy Minion

Once all of the correct configuration files are in place, it is time to start the proxy processes!

1. First, make sure your Salt Master is running.

2. Start the first Salt Proxy, in debug mode, by giving the Proxy Minion process an ID that matches the config file name created in the Configuration section.

    salt-proxy --proxyid='esxi-1' -l debug

3. Accept the esxi-1 Proxy Minion's key on the Salt Master:

    # salt-key -L
    Accepted Keys:
    Denied Keys:
    Unaccepted Keys:
    esxi-1
    Rejected Keys:
    #
    # salt-key -a esxi-1
    The following keys are going to be accepted:
    Unaccepted Keys:
    esxi-1
    Proceed? [n/Y] y
    Key for minion esxi-1 accepted.

4. Repeat for the second Salt Proxy; this time we'll run the proxy process as a daemon, as an example.

    salt-proxy --proxyid='esxi-2' -d

5. Accept the esxi-2 Proxy Minion's key on the Salt Master:

    # salt-key -L
    Accepted Keys:
    esxi-1
    Denied Keys:
    Unaccepted Keys:
    esxi-2
    Rejected Keys:
    #
    # salt-key -a esxi-2
    The following keys are going to be accepted:
    Unaccepted Keys:
    esxi-2
    Proceed? [n/Y] y
    Key for minion esxi-2 accepted.

6. Check and see if your Proxy Minions are responding:

    # salt 'esxi-*' test.ping
    esxi-1:
        True
    esxi-2:
        True

Executing Commands

Now that you've configured your Proxy Minions and have them responding successfully to a test.ping, we can start executing commands against the ESXi hosts via Salt.

It's important to understand how this particular proxy works, and there are a couple of important pieces to be aware of in order to start running remote execution and state commands against the ESXi host via a Proxy Minion: the vSphere Execution Module, the ESXi Execution Module, and the ESXi State Module.

vSphere Execution Module

The salt.modules.vsphere module is a standard Salt execution module that does the bulk of the work for the ESXi Proxy Minion. If you pull up the docs for it you'll see that almost every function in the module takes credentials (username and password) and a target host argument. When credentials and a host aren't passed, Salt runs commands through pyVmomi or ESXCLI against the local machine. If you wanted, you could run functions from this module on any machine where an appropriate version of pyVmomi and ESXCLI are installed, and that machine would reach out over the network and communicate with the ESXi host.

You'll notice that most of the functions in the vSphere module require a host, username, and password. These parameters are contained in the Pillar files and passed through to the function via the proxy process that is already running. You don't need to provide these parameters when you execute the commands. See the Running Remote Execution Commands section below for an example.
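The point about running vSphere functions from an ordinary minion can be sketched with the Python client API. This is only an illustration under assumptions: 'mgmt-node' is a hypothetical minion with pyVmomi (and, for some functions, ESXCLI) installed, and the host and credential values are placeholders. When going through the ESXi Proxy Minion instead, these kwargs come from Pillar and are not passed by hand.

    # illustration only; minion ID, host, and credentials are placeholders
    import salt.client

    client = salt.client.LocalClient()
    ret = client.cmd(
        'mgmt-node',                      # hypothetical minion with pyVmomi installed
        'vsphere.system_info',
        kwarg={'host': 'esxi-1.example.com',
               'username': 'root',
               'password': 'bad-password-1'},
    )
    print(ret)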
ESXi Execution Module

In order for the Pillar information set up in the Configuration section above to be passed to the function call in the vSphere Execution Module, the salt.modules.esxi execution module acts as a "shim" between the vSphere execution module functions and the proxy process.

The "shim" takes the authentication credentials specified in the Pillar files and passes them through to the host, username, password, and optional protocol and port options required by the vSphere Execution Module functions. If the function takes additional positional or keyword arguments, you can append them to the call. It's this shim that speaks to the ESXi host through the proxy, arranging for the credentials and hostname to be pulled from the Pillar section for the ESXi Proxy Minion.

Because of the presence of the shim, to look up documentation for what functions you can use to interface with the ESXi host, you'll want to look in salt.modules.vsphere instead of salt.modules.esxi.

Running Remote Execution Commands

To run commands from the Salt Master to execute, via the ESXi Proxy Minion, against the ESXi host, you use the esxi.cmd <vsphere-function-name> syntax to call functions located in the vSphere Execution Module. Both args and kwargs needed for various vsphere execution module functions must be passed through in a kwarg-type manner. For example:

    salt 'esxi-*' esxi.cmd system_info
    salt 'esxi-*' esxi.cmd get_service_running service_name='ssh'

ESXi State Module

The ESXi State Module functions similarly to other state modules. The "shim" provided by the ESXi Execution Module passes the necessary host, username, and password credentials through, so those options don't need to be provided in the state. Other than that, state files are written and executed just like any other Salt state. See the salt.states.esxi state module for ESXi state functions.

The following state file is an example of how to configure various pieces of an ESXi host, including enabling SSH, uploading an SSH key, configuring a coredump network config, syslog, ntp, enabling VMotion, resetting a host password, and more.

    # /srv/salt/configure-esxi.sls
    configure-host-ssh:
      esxi.ssh_configured:
        - service_running: True
        - ssh_key_file: /etc/salt/ssh_keys/my_key.pub
        - service_policy: 'automatic'
        - service_restart: True
        - certificate_verify: True

    configure-host-coredump:
      esxi.coredump_configured:
        - enabled: True
        - dump_ip: 'my-coredump-ip.example.com'

    configure-host-syslog:
      esxi.syslog_configured:
        - syslog_configs:
            loghost: ssl://localhost:5432,tcp://10.1.0.1:1514
            default-timeout: 120
        - firewall: True
        - reset_service: True
        - reset_syslog_config: True
        - reset_configs: loghost,default-timeout

    configure-host-ntp:
      esxi.ntp_configured:
        - service_running: True
        - ntp_servers:
          - 192.174.1.100
          - 192.174.1.200
        - service_policy: 'automatic'
        - service_restart: True

    configure-vmotion:
      esxi.vmotion_configured:
        - enabled: True

    configure-host-vsan:
      esxi.vsan_configured:
        - enabled: True
        - add_disks_to_vsan: True

    configure-host-password:
      esxi.password_present:
        - password: 'new-bad-password'

States are called via the ESXi Proxy Minion just as they would be on a regular minion.
For example: salt 'esxi-*' state.sls configure-esxi test=true salt 'esxi-*' state.sls configure-esxi Relevant Salt Files and Resources · ESXi Proxy Minion · ESXi Execution Module · ESXi State Module · Salt Proxy Minion Docs · Salt Proxy Minion End-to-End Example · vSphere Execution Module Using Salt at scale Using Salt at scale The focus of this tutorial will be building a Salt infrastructure for handling large num‐ bers of minions. This will include tuning, topology, and best practices. For how to install the Salt Master please go here: Installing saltstack NOTE: This tutorial is intended for large installations, although these same settings won't hurt, it may not be worth the complexity to smaller installations. When used with minions, the term 'many' refers to at least a thousand and 'a few' always means 500. For simplicity reasons, this tutorial will default to the standard ports used by Salt. The Master The most common problems on the Salt Master are: 1. too many minions authing at once 2. too many minions re-authing at once 3. too many minions re-connecting at once 4. too many minions returning at once 5. too few resources (CPU/HDD) The first three are all "thundering herd" problems. To mitigate these issues we must con‐ figure the minions to back-off appropriately when the Master is under heavy load. The fourth is caused by masters with little hardware resources in combination with a pos‐ sible bug in ZeroMQ. At least that's what it looks like till today (Issue 118651, Issue 5948, Mail thread) To fully understand each problem, it is important to understand, how Salt works. Very briefly, the Salt Master offers two services to the minions. · a job publisher on port 4505 · an open port 4506 to receive the minions returns All minions are always connected to the publisher on port 4505 and only connect to the open return port 4506 if necessary. On an idle Master, there will only be connections on port 4505. Too many minions authing When the Minion service is first started up, it will connect to its Master's publisher on port 4505. If too many minions are started at once, this can cause a "thundering herd". This can be avoided by not starting too many minions at once. The connection itself usually isn't the culprit, the more likely cause of master-side issues is the authentication that the Minion must do with the Master. If the Master is too heavily loaded to handle the auth request it will time it out. The Minion will then wait acceptance_wait_time to retry. If acceptance_wait_time_max is set then the Minion will increase its wait time by the acceptance_wait_time each subsequent retry until reaching acceptance_wait_time_max. Too many minions re-authing This is most likely to happen in the testing phase of a Salt deployment, when all Minion keys have already been accepted, but the framework is being tested and parameters are fre‐ quently changed in the Salt Master's configuration file(s). The Salt Master generates a new AES key to encrypt its publications at certain events such as a Master restart or the removal of a Minion key. If you are encountering this problem of too many minions re-authing against the Master, you will need to recalibrate your setup to reduce the rate of events like a Master restart or Minion key removal (salt-key -d). When the Master generates a new AES key, the minions aren't notified of this but will dis‐ cover it on the next pub job they receive. When the Minion receives such a job it will then re-auth with the Master. 
Since Salt does minion-side filtering, this means that all the minions will re-auth on the next command published on the master, causing another "thundering herd". This can be avoided by setting random_reauth_delay in the minion configuration file to a higher value, which staggers the re-auth attempts:

    random_reauth_delay: 60

Increasing this value will of course increase the time it takes until all minions are reachable via Salt commands.

Too many minions re-connecting

By default the zmq socket will re-connect every 100ms, which for some larger installations may be too quick. This controls how quickly the TCP session is re-established, but has no bearing on the auth load.

To tune the minions' socket reconnect attempts, there are a few values in the sample configuration file (default values):

    recon_default: 100ms
    recon_max: 5000
    recon_randomize: True

· recon_default: the default value the socket should use, i.e. 100ms
· recon_max: the max value that the socket should use as a delay before trying to reconnect
· recon_randomize: enables randomization between recon_default and recon_max

To tune these values to an existing environment, a few decisions have to be made:

1. How long can one wait before the minions should be online and reachable via Salt?
2. How many reconnects can the Master handle without a SYN flood?

These questions cannot be answered generally. Their answers depend on the hardware and the administrator's requirements.

Here is an example scenario with the goal of having all minions reconnect within a 60-second time-frame on a Salt Master service restart:

    recon_default: 1000
    recon_max: 59000
    recon_randomize: True

Each Minion will have a randomized reconnect value between 'recon_default' and 'recon_default + recon_max', which in this example means between 1000ms and 60000ms (or between 1 and 60 seconds). The generated random value will be doubled after each attempt to reconnect (ZeroMQ default behavior). Let's say the generated random value is 11 seconds (or 11000ms):

    reconnect 1: wait 11 seconds
    reconnect 2: wait 22 seconds
    reconnect 3: wait 33 seconds
    reconnect 4: wait 44 seconds
    reconnect 5: wait 55 seconds
    reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
    reconnect 7: wait 11 seconds
    reconnect 8: wait 22 seconds
    reconnect 9: wait 33 seconds
    reconnect x: etc.

With a thousand minions this works out to 1000/60 = ~16 connection attempts per second. These values should be altered to match your environment. Keep in mind, though, that the environment may grow over time and that more minions might raise the problem again.

Too many minions returning at once

This can also happen during the testing phase. If all minions are addressed at once with

    $ salt * disk.usage

it may cause thousands of minions to try to return their data to the Salt Master's open port 4506 at the same time, causing a SYN flood if the Master can't handle that many returns at once.

This can be easily avoided with Salt's batch mode:

    $ salt * disk.usage -b 50

This will only address 50 minions at once while looping through all addressed minions.
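Batch execution is also available from the Python client API. This is a hedged sketch, assuming a Salt release whose LocalClient exposes cmd_batch (it iterates over sub-batches of minions and yields their returns as each batch completes); the target and batch size simply mirror the CLI example above.

    # sketch only: run disk.usage on all minions, 50 at a time
    import salt.client

    client = salt.client.LocalClient()
    for returns in client.cmd_batch('*', 'disk.usage', batch='50'):
        print(returns)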
While the key size for the Master is currently not configurable, the minions' key size can be configured. For example, with a 2048-bit key:

    keysize: 2048

With thousands of decryptions, the amount of time that can be saved on the master's end should not be neglected. See Pull Request 9235 for a reference on how much influence the key size can have. Downsizing the Salt Master's key is not that important, because the minions do not encrypt as many messages as the Master does.
In installations with large or complex pillar files, it is possible for the master to exhibit poor performance as a result of having to render many pillar files at once. This can exhibit itself in a number of ways, both as high load on the master and as minions which block while waiting for their pillar to be delivered to them.
To reduce pillar rendering times, it is possible to cache pillars on the master. To do this, see the set of master configuration options which are prefixed with pillar_cache.
NOTE:
Caching pillars on the master may introduce security considerations. Be certain to read the caveats outlined in the master configuration file to understand how pillar caching may affect a master's ability to protect sensitive data!
The Master is disk IO bound
By default, the Master saves every Minion's return for every job in its job cache. The cache can then be used later to look up results for previous jobs. The default directory for this is:

    cachedir: /var/cache/salt

and then in the proc directory beneath it.
Each job return for every Minion is saved in a single file. Over time this directory can grow quite large, depending on the number of published jobs. The number of files and directories will scale with the number of jobs published and the retention time defined by

    keep_jobs: 24

    250 jobs/day * 2000 minion returns = 500,000 files a day

If no job history is needed, the job cache can be disabled:

    job_cache: False

If the job cache is necessary, there are (currently) two options:
· ext_job_cache: this will have the minions store their return data directly into a returner (not sent through the Master)
· master_job_cache (New in 2014.7.0): this will make the Master store the job data using a returner (instead of the local job cache on disk).
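To tie the preceding tuning options together, here is a minimal sketch of what a large-installation configuration might look like. The values are illustrative assumptions only, not recommendations; derive your own from the reasoning above, and note that any single option can be applied on its own.

    # /etc/salt/master -- illustrative values only
    job_cache: False            # disable if job history is not needed
    #keep_jobs: 12              # or keep the cache and shorten its retention

    # /etc/salt/minion -- illustrative values only
    keysize: 2048               # cheaper decryption work for the master
    acceptance_wait_time: 10    # back off between auth retries
    acceptance_wait_time_max: 60
    random_reauth_delay: 120    # spread re-auth after an AES key rotation
    recon_default: 1000         # spread TCP reconnects over ~60 seconds
    recon_max: 59000
    recon_randomize: True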
TARGETING MINIONS Targeting minions is specifying which minions should run a command or execute a state by matching against hostnames, or system information, or defined groups, or even combinations thereof. For example the command salt web1 apache.signal restart to restart the Apache httpd server specifies the machine web1 as the target and the command will only be run on that one min‐ ion. Similarly when using States, the following top file specifies that only the web1 minion should execute the contents of webserver.sls: base: 'web1': - webserver There are many ways to target individual minions or groups of minions in Salt: Matching the minion id Each minion needs a unique identifier. By default when a minion starts for the first time it chooses its FQDN as that identifier. The minion id can be overridden via the minion's id configuration setting. TIP: minion id and minion keys The minion id is used to generate the minion's public/private keys and if it ever changes the master must then accept the new key as though the minion was a new host. Globbing The default matching that Salt utilizes is shell-style globbing around the minion id. This also works for states in the top file. NOTE: You must wrap salt calls that use globbing in single-quotes to prevent the shell from expanding the globs before Salt is invoked. Match all minions: salt '*' test.ping Match all minions in the example.net domain or any of the example domains: salt '*.example.net' test.ping salt '*.example.*' test.ping Match all the webN minions in the example.net domain (web1.example.net, web2.example.net … webN.example.net): salt 'web?.example.net' test.ping Match the web1 through web5 minions: salt 'web[1-5]' test.ping Match the web1 and web3 minions: salt 'web[1,3]' test.ping Match the web-x, web-y, and web-z minions: salt 'web-[x-z]' test.ping NOTE: For additional targeting methods please review the compound matchers documentation. Regular Expressions Minions can be matched using Perl-compatible regular expressions (which is globbing on steroids and a ton of caffeine). Match both web1-prod and web1-devel minions: salt -E 'web1-(prod|devel)' test.ping When using regular expressions in a State's top file, you must specify the matcher as the first option. The following example executes the contents of webserver.sls on the above-mentioned minions. base: 'web1-(prod|devel)': - match: pcre - webserver Lists At the most basic level, you can specify a flat list of minion IDs: salt -L 'web1,web2,web3' test.ping Grains Salt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents salt with grains of information. Grains are collected for the operating system, domain name, IP address, kernel, OS type, memory, and many other system properties. The grains interface is made available to Salt modules and components so that the right salt minion commands are automatically available on the right systems. Grain data is relatively static, though if system information changes (for example, if network settings are changed), or if a new value is assigned to a custom grain, grain data is refreshed. NOTE: Grains resolve to lowercase letters. For example, FOO, and foo target the same grain. IMPORTANT: See Is Targeting using Grain Data Secure? for important security information. 
Match all CentOS minions: salt -G 'os:CentOS' test.ping Match all minions with 64-bit CPUs, and return number of CPU cores for each matching min‐ ion: salt -G 'cpuarch:x86_64' grains.item num_cpus Additionally, globs can be used in grain matches, and grains that are nested in a dictio‐ nary can be matched by adding a colon for each level that is traversed. For example, the following will match hosts that have a grain called ec2_tags, which itself is a dict with a key named environment, which has a value that contains the word production: salt -G 'ec2_tags:environment:*production*' Listing Grains Available grains can be listed by using the 'grains.ls' module: salt '*' grains.ls Grains data can be listed by using the 'grains.items' module: salt '*' grains.items Grains in the Minion Config Grains can also be statically assigned within the minion configuration file. Just add the option grains and pass options to it: grains: roles: - webserver - memcache deployment: datacenter4 cabinet: 13 cab_u: 14-15 Then status data specific to your servers can be retrieved via Salt, or used inside of the State system for matching. It also makes targeting, in the case of the example above, sim‐ ply based on specific data about your deployment. Grains in /etc/salt/grains If you do not want to place your custom static grains in the minion config file, you can also put them in /etc/salt/grains on the minion. They are configured in the same way as in the above example, only without a top-level grains: key: roles: - webserver - memcache deployment: datacenter4 cabinet: 13 cab_u: 14-15 Matching Grains in the Top File With correctly configured grains on the Minion, the top file used in Pillar or during Highstate can be made very efficient. For example, consider the following configuration: 'node_type:web': - match: grain - webserver 'node_type:postgres': - match: grain - database 'node_type:redis': - match: grain - redis 'node_type:lb': - match: grain - lb For this example to work, you would need to have defined the grain node_type for the min‐ ions you wish to match. This simple example is nice, but too much of the code is similar. To go one step further, Jinja templating can be used to simplify the top file. {% set the_node_type = salt['grains.get']('node_type', '') %} {% if the_node_type %} 'node_type:{{ the_node_type }}': - match: grain - {{ the_node_type }} {% endif %} Using Jinja templating, only one match entry needs to be defined. NOTE: The example above uses the grains.get function to account for minions which do not have the node_type grain set. Writing Grains The grains interface is derived by executing all of the "public" functions found in the modules located in the grains package or the custom grains directory. The functions in the modules of the grains must return a Python dict, where the keys in the dict are the names of the grains and the values are the values. Custom grains should be placed in a _grains directory located under the file_roots speci‐ fied by the master config file. The default path would be /srv/salt/_grains. Custom grains will be distributed to the minions when state.highstate is run, or by executing the saltutil.sync_grains or saltutil.sync_all functions. Grains are easy to write, and only need to return a dictionary. 
A common approach would be code something similar to the following: #!/usr/bin/env python def yourfunction(): # initialize a grains dictionary grains = {} # Some code for logic that sets grains like grains['yourcustomgrain'] = True grains['anothergrain'] = 'somevalue' return grains Before adding a grain to Salt, consider what the grain is and remember that grains need to be static data. If the data is something that is likely to change, consider using Pillar instead. WARNING: Custom grains will not be available in the top file until after the first highstate. To make custom grains available on a minion's first highstate, it is recommended to use this example to ensure that the custom grains are synced when the minion starts. Loading Custom Grains If you have multiple functions specifying grains that are called from a main function, be sure to prepend grain function names with an underscore. This prevents Salt from including the loaded grains from the grain functions in the final grain data structure. For example, consider this custom grain file: #!/usr/bin/env python def _my_custom_grain(): my_grain = {'foo': 'bar', 'hello': 'world'} return my_grain def main(): # initialize a grains dictionary grains = {} grains['my_grains'] = _my_custom_grain() return grains The output of this example renders like so: # salt-call --local grains.items local: ---------- <Snipped for brevity> my_grains: ---------- foo: bar hello: world However, if you don't prepend the my_custom_grain function with an underscore, the func‐ tion will be rendered twice by Salt in the items output: once for the my_custom_grain call itself, and again when it is called in the main function: # salt-call --local grains.items local: ---------- <Snipped for brevity> foo: bar <Snipped for brevity> hello: world <Snipped for brevity> my_grains: ---------- foo: bar hello: world Precedence Core grains can be overridden by custom grains. As there are several ways of defining cus‐ tom grains, there is an order of precedence which should be kept in mind when defining them. The order of evaluation is as follows: 1. Core grains. 2. Custom grains in /etc/salt/grains. 3. Custom grains in /etc/salt/minion. 4. Custom grain modules in _grains directory, synced to minions. Each successive evaluation overrides the previous ones, so any grains defined by custom grains modules synced to minions that have the same name as a core grain will override that core grain. Similarly, grains from /etc/salt/minion override both core grains and custom grain modules, and grains in _grains will override any grains of the same name. Examples of Grains The core module in the grains package is where the main grains are loaded by the Salt min‐ ion and provides the principal example of how to write grains: https://github.com/saltstack/salt/blob/develop/salt/grains/core.py Syncing Grains Syncing grains can be done a number of ways, they are automatically synced when state.highstate is called, or (as noted above) the grains can be manually synced and reloaded by calling the saltutil.sync_grains or saltutil.sync_all functions. Targeting with Pillar Pillar data can be used when targeting minions. This allows for ultimate control and flex‐ ibility when targeting minions. salt -I 'somekey:specialvalue' test.ping Like with Grains, it is possible to use globbing as well as match nested values in Pillar, by adding colons for each level that is being traversed. 
The below example would match minions with a pillar named foo, which is a dict containing a key bar, with a value begin‐ ning with baz: salt -I 'foo:bar:baz*' test.ping Subnet/IP Address Matching Minions can easily be matched based on IP address, or by subnet (using CIDR notation). salt -S 192.168.40.20 test.ping salt -S 10.0.0.0/24 test.ping Ipcidr matching can also be used in compound matches salt -C 'S@10.0.0.0/24 and G@os:Debian' test.ping It is also possible to use in both pillar and state-matching '172.16.0.0/12': - match: ipcidr - internal NOTE: Only IPv4 matching is supported at this time. Compound matchers Compound matchers allow very granular minion targeting using any of Salt's matchers. The default matcher is a glob match, just as with CLI and top file matching. To match using anything other than a glob, prefix the match string with the appropriate letter from the table below, followed by an @ sign. ┌───────┬───────────────────┬──────────────────────────────┬────────────────┐ │Letter │ Match Type │ Example │ Alt Delimiter? │ ├───────┼───────────────────┼──────────────────────────────┼────────────────┤ │G │ Grains glob │ G@os:Ubuntu │ Yes │ ├───────┼───────────────────┼──────────────────────────────┼────────────────┤ │E │ PCRE Minion ID │ E@web\d+\.(dev|qa|prod)\.loc │ No │ ├───────┼───────────────────┼──────────────────────────────┼────────────────┤ │P │ Grains PCRE │ P@os:(RedHat|Fedora|CentOS) │ Yes │ ├───────┼───────────────────┼──────────────────────────────┼────────────────┤ │L │ List of minions │ @minion1.example.com,min‐ │ No │ │ │ │ ion3.domain.com or │ │ │ │ │ bl*.domain.com │ │ ├───────┼───────────────────┼──────────────────────────────┼────────────────┤ │I │ Pillar glob │ I@pdata:foobar │ Yes │ ├───────┼───────────────────┼──────────────────────────────┼────────────────┤ │J │ Pillar PCRE │ J@pdata:^(foo|bar)$ │ Yes │ ├───────┼───────────────────┼──────────────────────────────┼────────────────┤ │S │ Subnet/IP address │ S@192.168.1.0/24 or │ No │ │ │ │ S@192.168.1.100 │ │ ├───────┼───────────────────┼──────────────────────────────┼────────────────┤ │R │ Range cluster │ R@%foo.bar │ No │ └───────┴───────────────────┴──────────────────────────────┴────────────────┘ Matchers can be joined using boolean and, or, and not operators. For example, the following string matches all Debian minions with a hostname that begins with webserv, as well as any minions that have a hostname which matches the regular expression web-dc1-srv.*: salt -C 'webserv* and G@os:Debian or E@web-dc1-srv.*' test.ping That same example expressed in a top file looks like the following: base: 'webserv* and G@os:Debian or E@web-dc1-srv.*': - match: compound - webserver New in version 2015.8.0. Excluding a minion based on its ID is also possible: salt -C 'not web-dc1-srv' test.ping Versions prior to 2015.8.0 a leading not was not supported in compound matches. Instead, something like the following was required: salt -C '* and not G@kernel:Darwin' test.ping Excluding a minion based on its ID was also possible: salt -C '* and not web-dc1-srv' test.ping Precedence Matching Matchers can be grouped together with parentheses to explicitly declare precedence amongst groups. salt -C '( ms-1 or G@id:ms-3 ) and G@id:ms-3' test.ping NOTE: Be certain to note that spaces are required between the parentheses and targets. Fail‐ ing to obey this rule may result in incorrect targeting! Alternate Delimiters New in version 2015.8.0. Matchers that target based on a key value pair use a colon (:) as a delimiter. 
Matchers with a Yes in the Alt Delimiter? column of the previous table support specifying an alternate delimiter character. This is done by placing the alternate delimiter between the leading matcher character and the @ pattern separator character. This avoids incorrect interpretation of the pattern in the case that : is part of the grain or pillar data structure traversal.

    salt -C 'J|@foo|bar|^foo:bar$ or J!@gitrepo!https://github.com:example/project.git' test.ping

Node groups
Nodegroups are declared using a compound target specification. The compound target documentation can be found here.
The nodegroups master config file parameter is used to define nodegroups. Here's an example nodegroup configuration within /etc/salt/master:

    nodegroups:
      group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
      group2: 'G@os:Debian and foo.domain.com'
      group3: 'G@os:Debian and N@group1'
      group4:
        - 'G@foo:bar'
        - 'or'
        - 'G@foo:baz'

NOTE:
The L within group1 is matching a list of minions, while the G in group2 is matching specific grains. See the compound matchers documentation for more details.
New in version 2015.8.0.
NOTE:
Nodegroups can reference other nodegroups as seen in group3. Ensure that you do not have circular references. Circular references will be detected and cause partial expansion with a logged error message.
New in version 2015.8.0.
Compound nodegroups can be either string values or lists of string values. When the nodegroup is a string value, it will be tokenized by splitting on whitespace. This may be a problem if whitespace is necessary as part of a pattern. When a nodegroup is a list of strings, tokenization will happen for each list element as a whole.
To match a nodegroup on the CLI, use the -N command-line option:

    salt -N group1 test.ping

NOTE:
The N@ classifier cannot be used in compound matches within the CLI or top file; it is only recognized in the nodegroups master config file parameter.
To match a nodegroup in your top file, make sure to put - match: nodegroup on the line directly following the nodegroup name.

    base:
      group1:
        - match: nodegroup
        - webserver

NOTE:
When adding or modifying nodegroups in a master configuration file, the master must be restarted for those changes to be fully recognized. A limited amount of functionality, such as targeting with -N from the command line, may be available without a restart.
Using Nodegroups in SLS files
To use Nodegroups in Jinja logic for SLS files, the pillar_opts option in /etc/salt/master must be set to "True". This will pass the master's configuration as Pillar data to each minion.
NOTE:
If the master's configuration contains any sensitive data, this will be passed to each minion. Do not enable this option if you have any configuration data that you do not want to end up on your minions.
Also, if you make changes to your nodegroups, you might need to run salt '*' saltutil.refresh_pillar after restarting the master.
Once pillar_opts is enabled, you can find the nodegroups under the "master" pillar. To make sure that only the correct minions are targeted, you should use each matcher for the nodegroup definition. For example, to check if a minion is in the 'webserver' nodegroup:

    nodegroups:
      webserver: 'G@os:Debian and L@minion1,minion2'

    {% if grains.id in salt['pillar.get']('master:nodegroups:webserver', [])
    and grains.os in salt['pillar.get']('master:nodegroups:webserver', []) %}
    ...
{% endif %} NOTE: If you do not include all of the matchers used to define a nodegroup, Salt might incor‐ rectly target minions that meet some of the nodegroup requirements, but not all of them. Batch Size The -b (or --batch-size) option allows commands to be executed on only a specified number of minions at a time. Both percentages and finite numbers are supported. salt '*' -b 10 test.ping salt -G 'os:RedHat' --batch-size 25% apache.signal restart This will only run test.ping on 10 of the targeted minions at a time and then restart apache on 25% of the minions matching os:RedHat at a time and work through them all until the task is complete. This makes jobs like rolling web server restarts behind a load bal‐ ancer or doing maintenance on BSD firewalls using carp much easier with salt. The batch system maintains a window of running minions, so, if there are a total of 150 minions targeted and the batch size is 10, then the command is sent to 10 minions, when one minion returns then the command is sent to one additional minion, so that the job is constantly running on 10 minions. SECO Range SECO range is a cluster-based metadata store developed and maintained by Yahoo! The Range project is hosted here: https://github.com/ytoolshed/range Learn more about range here: https://github.com/ytoolshed/range/wiki/ Prerequisites To utilize range support in Salt, a range server is required. Setting up a range server is outside the scope of this document. Apache modules are included in the range distribution. With a working range server, cluster files must be defined. These files are written in YAML and define hosts contained inside a cluster. Full documentation on writing YAML range files is here: https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec Additionally, the Python seco range libraries must be installed on the salt master. One can verify that they have been installed correctly via the following command: python -c 'import seco.range' If no errors are returned, range is installed successfully on the salt master. Preparing Salt Range support must be enabled on the salt master by setting the hostname and port of the range server inside the master configuration file: range_server: my.range.server.com:80 Following this, the master must be restarted for the change to have an effect. Targeting with Range Once a cluster has been defined, it can be targeted with a salt command by using the -R or --range flags. For example, given the following range YAML file being served from a range server: $ cat /etc/range/test.yaml CLUSTER: host1..100.test.com APPS: - frontend - backend - mysql One might target host1 through host100 in the test.com domain with Salt as follows: salt --range %test:CLUSTER test.ping The following salt command would target three hosts: frontend, backend, and mysql: salt --range %test:APPS test.ping
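Because range clusters are also exposed through the R@ classifier listed in the compound matcher table above, they can be combined with other matchers once range support is enabled. The following sketch simply reuses the cluster and grain values from the earlier examples:

    salt -C 'R@%test:CLUSTER and G@os:Debian' test.ping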
STORING STATIC DATA IN THE PILLAR Pillar is an interface for Salt designed to offer global values that can be distributed to all minions. Pillar data is managed in a similar way as the Salt State Tree. Pillar was added to Salt in version 0.9.8 NOTE: Storing sensitive data Unlike state tree, pillar data is only available for the targeted minion specified by the matcher type. This makes it useful for storing sensitive data specific to a par‐ ticular minion. Declaring the Master Pillar The Salt Master server maintains a pillar_roots setup that matches the structure of the file_roots used in the Salt file server. Like the Salt file server the pillar_roots option in the master config is based on environments mapping to directories. The pillar data is then mapped to minions based on matchers in a top file which is laid out in the same way as the state top file. Salt pillars can use the same matcher types as the standard top file. The configuration for the pillar_roots in the master config file is identical in behavior and function as file_roots: pillar_roots: base: - /srv/pillar This example configuration declares that the base environment will be located in the /srv/pillar directory. It must not be in a subdirectory of the state tree. The top file used matches the name of the top file used for States, and has the same structure: /srv/pillar/top.sls base: '*': - packages In the above top file, it is declared that in the base environment, the glob matching all minions will have the pillar data found in the packages pillar available to it. Assuming the pillar_roots value of /srv/pillar taken from above, the packages pillar would be located at /srv/pillar/packages.sls. Any number of matchers can be added to the base environment. For example, here is an expanded version of the Pillar top file stated above: /srv/pillar/top.sls: base: '*': - packages 'web*': - vim In this expanded top file, minions that match web* will have access to the /srv/pil‐ lar/pacakges.sls file, as well as the /srv/pillar/vim.sls file. Another example shows how to use other standard top matching types to deliver specific salt pillar data to minions with different properties. Here is an example using the grains matcher to target pillars to minions by their os grain: dev: 'os:Debian': - match: grain - servers /srv/pillar/packages.sls {% if grains['os'] == 'RedHat' %} apache: httpd git: git {% elif grains['os'] == 'Debian' %} apache: apache2 git: git-core {% endif %} company: Foo Industries IMPORTANT: See Is Targeting using Grain Data Secure? for important security information. The above pillar sets two key/value pairs. If a minion is running RedHat, then the apache key is set to httpd and the git key is set to the value of git. If the minion is running Debian, those values are changed to apache2 and git-core respectively. All minions that have this pillar targeting to them via a top file will have the key of company with a value of Foo Industries. Consequently this data can be used from within modules, renderers, State SLS files, and more via the shared pillar dict: apache: pkg.installed: - name: {{ pillar['apache'] }} git: pkg.installed: - name: {{ pillar['git'] }} Finally, the above states can utilize the values provided to them via Pillar. All pillar values targeted to a minion are available via the 'pillar' dictionary. As seen in the above example, Jinja substitution can then be utilized to access the keys and values in the Pillar dictionary. Note that you cannot just list key/value-information in top.sls. 
Instead, target a minion to a pillar file and then list the keys and values in the pillar. Here is an example top file that illustrates this point: base: '*': - common_pillar And the actual pillar file at '/srv/pillar/common_pillar.sls': foo: bar boo: baz Pillar namespace flattened The separate pillar files all share the same namespace. Given a top.sls of: base: '*': - packages - services a packages.sls file of: bind: bind9 and a services.sls file of: bind: named Then a request for the bind pillar will only return named; the bind9 value is not avail‐ able. It is better to structure your pillar files with more hierarchy. For example your package.sls file could look like: packages: bind: bind9 Pillar Namespace Merges With some care, the pillar namespace can merge content from multiple pillar files under a single key, so long as conflicts are avoided as described above. For example, if the above example were modified as follows, the values are merged below a single key: base: '*': - packages - services And a packages.sls file like: bind: package-name: bind9 version: 9.9.5 And a services.sls file like: bind: port: 53 listen-on: any The resulting pillar will be as follows: $ salt-call pillar.get bind local: ---------- listen-on: any package-name: bind9 port: 53 version: 9.9.5 NOTE: Pillar files are applied in the order they are listed in the top file. Therefore con‐ flicting keys will be overwritten in a 'last one wins' manner! For example, in the above scenario conflicting key values in services will overwrite those in packages because it's at the bottom of the list. Including Other Pillars New in version 0.16.0. Pillar SLS files may include other pillar files, similar to State files. Two syntaxes are available for this purpose. The simple form simply includes the additional pillar as if it were part of the same file: include: - users The full include form allows two additional options -- passing default values to the tem‐ plating engine for the included pillar file as well as an optional key under which to nest the results of the included pillar: include: - users: defaults: sudo: ['bob', 'paul'] key: users With this form, the included file (users.sls) will be nested within the 'users' key of the compiled pillar. Additionally, the 'sudo' value will be available as a template variable to users.sls. Viewing Minion Pillar Once the pillar is set up the data can be viewed on the minion via the pillar module, the pillar module comes with functions, pillar.items and pillar.raw. pillar.items will return a freshly reloaded pillar and pillar.raw will return the current pillar without a refresh: salt '*' pillar.items NOTE: Prior to version 0.16.2, this function is named pillar.data. This function name is still supported for backwards compatibility. Pillar get Function New in version 0.14.0. The pillar.get function works much in the same way as the get method in a python dict, but with an enhancement: nested dict components can be extracted using a : delimiter. If a structure like this is in pillar: foo: bar: baz: qux Extracting it from the raw pillar in an sls formula or file template is done this way: {{ pillar['foo']['bar']['baz'] }} Now, with the new pillar.get function the data can be safely gathered and a default can be set, allowing the template to fall back if the value is not available: {{ salt['pillar.get']('foo:bar:baz', 'qux') }} This makes handling nested structures much easier. 
NOTE: pillar.get() vs salt['pillar.get']() It should be noted that within templating, the pillar variable is just a dictionary. This means that calling pillar.get() inside of a template will just use the default dictionary .get() function which does not include the extra : delimiter functionality. It must be called using the above syntax (salt['pillar.get']('foo:bar:baz', 'qux')) to get the salt function, instead of the default dictionary behavior. Refreshing Pillar Data When pillar data is changed on the master the minions need to refresh the data locally. This is done with the saltutil.refresh_pillar function. salt '*' saltutil.refresh_pillar This function triggers the minion to asynchronously refresh the pillar and will always return None. Set Pillar Data at the Command Line Pillar data can be set at the command line like the following example: salt '*' state.highstate pillar='{"cheese": "spam"}' This will create a dict with a key of 'cheese' and a value of 'spam'. A list can be cre‐ ated like this: salt '*' state.highstate pillar='["cheese", "milk", "bread"]' NOTE: Be aware that when sending sensitive data via pillar on the command-line that the pub‐ lication containing that data will be received by all minions and will not be restricted to the targeted minions. This may represent a security concern in some cases. Master Config In Pillar For convenience the data stored in the master configuration file can be made available in all minion's pillars. This makes global configuration of services and systems very easy but may not be desired if sensitive data is stored in the master configuration. This option is disabled by default. To enable the master config from being added to the pillar set pillar_opts to True: pillar_opts: True Minion Config in Pillar Minion configuration options can be set on pillars. Any option that you want to modify, should be in the first level of the pillars, in the same way you set the options in the config file. For example, to configure the MySQL root password to be used by MySQL Salt execution module, set the following pillar variable: mysql.pass: hardtoguesspassword Master Provided Pillar Error By default if there is an error rendering a pillar, the detailed error is hidden and replaced with: Rendering SLS 'my.sls' failed. Please see master log for details. The error is protected because it's possible to contain templating data which would give that minion information it shouldn't know, like a password! To have the master provide the detailed error that could potentially carry protected data set pillar_safe_render_error to False: pillar_safe_render_error: False
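As a closing example for this chapter, command-line pillar data may also be a nested dictionary, which pairs naturally with the colon syntax of pillar.get described earlier. A small sketch with made-up keys:

    salt '*' state.highstate pillar='{"mysql": {"replication": {"enabled": True}}}'

Inside a template rendered during that run, the value can then be read back with a default:

    {{ salt['pillar.get']('mysql:replication:enabled', False) }}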
REACTOR SYSTEM Salt version 0.11.0 introduced the reactor system. The premise behind the reactor system is that with Salt's events and the ability to execute commands, a logic engine could be put in place to allow events to trigger actions, or more accurately, reactions. This system binds sls files to event tags on the master. These sls files then define reac‐ tions. This means that the reactor system has two parts. First, the reactor option needs to be set in the master configuration file. The reactor option allows for event tags to be associated with sls reaction files. Second, these reaction files use highdata (like the state system) to define reactions to be executed. Event System A basic understanding of the event system is required to understand reactors. The event system is a local ZeroMQ PUB interface which fires salt events. This event bus is an open system used for sending information notifying Salt and other systems about operations. The event system fires events with a very specific criteria. Every event has a tag. Event tags allow for fast top level filtering of events. In addition to the tag, each event has a data structure. This data structure is a dict, which contains information about the event. Mapping Events to Reactor SLS Files Reactor SLS files and event tags are associated in the master config file. By default this is /etc/salt/master, or /etc/salt/master.d/reactor.conf. New in version 2014.7.0: Added Reactor support for salt:// file paths. In the master config section 'reactor:' is a list of event tags to be matched and each event tag has a list of reactor SLS files to be run. reactor: # Master config section "reactor" - 'salt/minion/*/start': # Match tag "salt/minion/*/start" - /srv/reactor/start.sls # Things to do when a minion starts - /srv/reactor/monitor.sls # Other things to do - 'salt/cloud/*/destroyed': # Globs can be used to matching tags - /srv/reactor/destroy/*.sls # Globs can be used to match file names - 'myco/custom/event/tag': # React to custom event tags - salt://reactor/mycustom.sls # Put reactor files under file_roots Reactor sls files are similar to state and pillar sls files. They are by default yaml + Jinja templates and are passed familiar context variables. They differ because of the addition of the tag and data variables. · The tag variable is just the tag in the fired event. · The data variable is the event's data dict. Here is a simple reactor sls: {% if data['id'] == 'mysql1' %} highstate_run: local.state.highstate: - tgt: mysql1 {% endif %} This simple reactor file uses Jinja to further refine the reaction to be made. If the id in the event data is mysql1 (in other words, if the name of the minion is mysql1) then the following reaction is defined. The same data structure and compiler used for the state system is used for the reactor system. The only difference is that the data is matched up to the salt command API and the runner system. In this example, a command is published to the mysql1 minion with a function of state.highstate. Similarly, a runner can be called: {% if data['data']['orchestrate'] == 'refresh' %} orchestrate_run: runner.state.orchestrate {% endif %} This example will execute the state.orchestrate runner and initiate an orchestrate execu‐ tion. Fire an event To fire an event from a minion call event.send salt-call event.send 'foo' '{orchestrate: refresh}' After this is called, any reactor sls files matching event tag foo will execute with {{ data['data']['orchestrate'] }} equal to 'refresh'. 
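For instance, the custom event fired above could be wired to a reaction in the master config; the reactor file path and the orchestration name below are placeholders chosen for this sketch:

    reactor:
      - 'foo':
        - /srv/reactor/foo.sls

and /srv/reactor/foo.sls could then contain:

    {% if data['data']['orchestrate'] == 'refresh' %}
    refresh_orchestration:
      runner.state.orchestrate:
        - mods: orch.refresh
    {% endif %}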
See salt.modules.event for more information. Knowing what event is being fired The best way to see exactly what events are fired and what data is available in each event is to use the state.event runner. SEE ALSO: Common Salt Events Example usage: salt-run state.event pretty=True Example output: salt/job/20150213001905721678/new { "_stamp": "2015-02-13T00:19:05.724583", "arg": [], "fun": "test.ping", "jid": "20150213001905721678", "minions": [ "jerry" ], "tgt": "*", "tgt_type": "glob", "user": "root" } salt/job/20150213001910749506/ret/jerry { "_stamp": "2015-02-13T00:19:11.136730", "cmd": "_return", "fun": "saltutil.find_job", "fun_args": [ "20150213001905721678" ], "id": "jerry", "jid": "20150213001910749506", "retcode": 0, "return": {}, "success": true } Debugging the Reactor The best window into the Reactor is to run the master in the foreground with debug logging enabled. The output will include when the master sees the event, what the master does in response to that event, and it will also include the rendered SLS file (or any errors gen‐ erated while rendering the SLS file). 1. Stop the master. 2. Start the master manually: salt-master -l debug 3. Look for log entries in the form: [DEBUG ] Gathering reactors for tag foo/bar [DEBUG ] Compiling reactions for tag foo/bar [DEBUG ] Rendered data from file: /path/to/the/reactor_file.sls: <... Rendered output appears here. ...> The rendered output is the result of the Jinja parsing and is a good way to view the result of referencing Jinja variables. If the result is empty then Jinja produced an empty result and the Reactor will ignore it. Understanding the Structure of Reactor Formulas I.e., when to use `arg` and `kwarg` and when to specify the function arguments directly. While the reactor system uses the same basic data structure as the state system, the func‐ tions that will be called using that data structure are different functions than are called via Salt's state system. The Reactor can call Runner modules using the runner pre‐ fix, Wheel modules using the wheel prefix, and can also cause minions to run Execution modules using the local prefix. Changed in version 2014.7.0: The cmd prefix was renamed to local for consistency with other parts of Salt. A backward-compatible alias was added for cmd. The Reactor runs on the master and calls functions that exist on the master. In the case of Runner and Wheel functions the Reactor can just call those functions directly since they exist on the master and are run on the master. In the case of functions that exist on minions and are run on minions, the Reactor still needs to call a function on the master in order to send the necessary data to the minion so the minion can execute that function. The Reactor calls functions exposed in Salt's Python API documentation. and thus the structure of Reactor files very transparently reflects the function signatures of those functions. Calling Execution modules on Minions The Reactor sends commands down to minions in the exact same way Salt's CLI interface does. It calls a function locally on the master that sends the name of the function as well as a list of any arguments and a dictionary of any keyword arguments that the minion should use to execute that function. Specifically, the Reactor calls the async version of this function. You can see that func‐ tion has 'arg' and 'kwarg' parameters which are both values that are sent down to the min‐ ion. Executing remote commands maps to the LocalClient interface which is used by the salt com‐ mand. 
This interface more specifically maps to the cmd_async method inside of the Local‐ Client class. This means that the arguments passed are being passed to the cmd_async method, not the remote method. A field starts with local to use the LocalClient subsystem. The result is, to execute a remote command, a reactor formula would look like this: clean_tmp: local.cmd.run: - tgt: '*' - arg: - rm -rf /tmp/* The arg option takes a list of arguments as they would be presented on the command line, so the above declaration is the same as running this salt command: salt '*' cmd.run 'rm -rf /tmp/*' Use the expr_form argument to specify a matcher: clean_tmp: local.cmd.run: - tgt: 'os:Ubuntu' - expr_form: grain - arg: - rm -rf /tmp/* clean_tmp: local.cmd.run: - tgt: 'G@roles:hbase_master' - expr_form: compound - arg: - rm -rf /tmp/* Any other parameters in the LocalClient().cmd() method can be specified as well. Calling Runner modules and Wheel modules Calling Runner modules and Wheel modules from the Reactor uses a more direct syntax since the function is being executed locally instead of sending a command to a remote system to be executed there. There are no 'arg' or 'kwarg' parameters (unless the Runner function or Wheel function accepts a parameter with either of those names.) For example: clear_the_grains_cache_for_all_minions: runner.cache.clear_grains If the runner takes arguments then they can be specified as well: spin_up_more_web_machines: runner.cloud.profile: - prof: centos_6 - instances: - web11 # These VM names would be generated via Jinja in a - web12 # real-world example. Passing event data to Minions or Orchestrate as Pillar An interesting trick to pass data from the Reactor script to state.highstate or state.sls is to pass it as inline Pillar data since both functions take a keyword argument named pillar. The following example uses Salt's Reactor to listen for the event that is fired when the key for a new minion is accepted on the master using salt-key. /etc/salt/master.d/reactor.conf: reactor: - 'salt/key': - /srv/salt/haproxy/react_new_minion.sls The Reactor then fires a state.sls command targeted to the HAProxy servers and passes the ID of the new minion from the event to the state file via inline Pillar. /srv/salt/haproxy/react_new_minion.sls: {% if data['act'] == 'accept' and data['id'].startswith('web') %} add_new_minion_to_pool: local.state.sls: - tgt: 'haproxy*' - arg: - haproxy.refresh_pool - kwarg: pillar: new_minion: {{ data['id'] }} {% endif %} The above command is equivalent to the following command at the CLI: salt 'haproxy*' state.sls haproxy.refresh_pool 'pillar={new_minion: minionid}' This works with Orchestrate files as well: call_some_orchestrate_file: runner.state.orchestrate: - mods: some_orchestrate_file - pillar: stuff: things Which is equivalent to the following command at the CLI: salt-run state.orchestrate some_orchestrate_file pillar='{stuff: things}' Finally, that data is available in the state file using the normal Pillar lookup syntax. The following example is grabbing web server names and IP addresses from Salt Mine. If this state is invoked from the Reactor then the custom Pillar value from above will be available and the new minion will be added to the pool but with the disabled flag so that HAProxy won't yet direct traffic to it. 
/srv/salt/haproxy/refresh_pool.sls: {% set new_minion = salt['pillar.get']('new_minion') %} listen web *:80 balance source {% for server,ip in salt['mine.get']('web*', 'network.interfaces', ['eth0']).items() % ↲ } {% if server == new_minion %} server {{ server }} {{ ip }}:80 disabled {% else %} server {{ server }} {{ ip }}:80 check {% endif %} {% endfor %} A Complete Example In this example, we're going to assume that we have a group of servers that will come online at random and need to have keys automatically accepted. We'll also add that we don't want all servers being automatically accepted. For this example, we'll assume that all hosts that have an id that starts with 'ink' will be automatically accepted and have state.highstate executed. On top of this, we're going to add that a host coming up that was replaced (meaning a new key) will also be accepted. Our master configuration will be rather simple. All minions that attempte to authenticate will match the tag of salt/auth. When it comes to the minion key being accepted, we get a more refined tag that includes the minion id, which we can use for matching. /etc/salt/master.d/reactor.conf: reactor: - 'salt/auth': - /srv/reactor/auth-pending.sls - 'salt/minion/ink*/start': - /srv/reactor/auth-complete.sls In this sls file, we say that if the key was rejected we will delete the key on the master and then also tell the master to ssh in to the minion and tell it to restart the minion, since a minion process will die if the key is rejected. We also say that if the key is pending and the id starts with ink we will accept the key. A minion that is waiting on a pending key will retry authentication every ten seconds by default. /srv/reactor/auth-pending.sls: {# Ink server faild to authenticate -- remove accepted key #} {% if not data['result'] and data['id'].startswith('ink') %} minion_remove: wheel.key.delete: - match: {{ data['id'] }} minion_rejoin: local.cmd.run: - tgt: salt-master.domain.tld - arg: - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" ↲ 'sleep 10 && /etc/init.d/salt-minion restart' {% endif %} {# Ink server is sending new key -- accept this key #} {% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ink') %} minion_add: wheel.key.accept: - match: {{ data['id'] }} {% endif %} No if statements are needed here because we already limited this action to just Ink servers in the master configuration. /srv/reactor/auth-complete.sls: {# When an Ink server connects, run state.highstate. #} highstate_run: local.state.highstate: - tgt: {{ data['id'] }} - ret: smtp The above will also return the highstate result data using the smtp_return returner (use virtualname like when using from the command line with --return). The returner needs to be configured on the minion for this to work. See salt.returners.smtp_return documenta‐ tion for that. Syncing Custom Types on Minion Start Salt will sync all custom types (by running a saltutil.sync_all) on every highstate. How‐ ever, there is a chicken-and-egg issue where, on the initial highstate, a minion will not yet have these custom types synced when the top file is first compiled. This can be worked around with a simple reactor which watches for minion_start events, which each minion fires when it first starts up and connects to the master. 
On the master, create /srv/reactor/sync_grains.sls with the following contents: sync_grains: local.saltutil.sync_grains: - tgt: {{ data['id'] }} And in the master config file, add the following reactor configuration: reactor: - 'minion_start': - /srv/reactor/sync_grains.sls This will cause the master to instruct each minion to sync its custom grains when it starts, making these grains available when the initial highstate is executed. Other types can be synced by replacing local.saltutil.sync_grains with local.saltutil.sync_modules, local.saltutil.sync_all, or whatever else suits the intended use case.
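For example, to sync every custom type rather than only grains, the same reactor pattern can call saltutil.sync_all. The file name below is only an illustration; it would be referenced from the reactor configuration in place of sync_grains.sls:

    sync_everything:
      local.saltutil.sync_all:
        - tgt: {{ data['id'] }}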
THE SALT MINE The Salt Mine is used to collect arbitrary data from Minions and store it on the Master. This data is then made available to all Minions via the salt.modules.mine module. Mine data is gathered on the Minion and sent back to the Master where only the most recent data is maintained (if long term data is required use returners or the external job cache). Mine vs Grains Mine data is designed to be much more up-to-date than grain data. Grains are refreshed on a very limited basis and are largely static data. Mines are designed to replace slow peer publishing calls when Minions need data from other Minions. Rather than having a Minion reach out to all the other Minions for a piece of data, the Salt Mine, running on the Mas‐ ter, can collect it from all the Minions every mine-interval, resulting in almost fresh data at any given time, with much less overhead. Mine Functions To enable the Salt Mine the mine_functions option needs to be applied to a Minion. This option can be applied via the Minion's configuration file, or the Minion's Pillar. The mine_functions option dictates what functions are being executed and allows for arguments to be passed in. If no arguments are passed, an empty list must be added: mine_functions: test.ping: [] network.ip_addrs: interface: eth0 cidr: '10.0.0.0/8' Mine Functions Aliases Function aliases can be used to provide friendly names, usage intentions or to allow mul‐ tiple calls of the same function with different arguments. There is a different syntax for passing positional and key-value arguments. Mixing positional and key-value arguments is not supported. New in version 2014.7.0. mine_functions: network.ip_addrs: [eth0] networkplus.internal_ip_addrs: [] internal_ip_addrs: mine_function: network.ip_addrs cidr: 192.168.0.0/16 ip_list: - mine_function: grains.get - ip_interfaces Mine Interval The Salt Mine functions are executed when the Minion starts and at a given interval by the scheduler. The default interval is every 60 minutes and can be adjusted for the Minion via the mine_interval option: mine_interval: 60 Mine in Salt-SSH As of the 2015.5.0 release of salt, salt-ssh supports mine.get. Because the Minions cannot provide their own mine_functions configuration, we retrieve the args for specified mine functions in one of three places, searched in the following order: 1. Roster data 2. Pillar 3. Master config The mine_functions are formatted exactly the same as in normal salt, just stored in a dif‐ ferent location. Here is an example of a flat roster containing mine_functions: test: host: 104.237.131.248 user: root mine_functions: cmd.run: ['echo "hello!"'] network.ip_addrs: interface: eth0 NOTE: Because of the differences in the architecture of salt-ssh, mine.get calls are somewhat inefficient. Salt must make a new salt-ssh call to each of the Minions in question to retrieve the requested data, much like a publish call. However, unlike publish, it must run the requested function as a wrapper function, so we can retrieve the function args from the pillar of the Minion in question. This results in a non-trivial delay in retrieving the requested data. Example One way to use data from Salt Mine is in a State. The values can be retrieved via Jinja and used in the SLS file. The following example is a partial HAProxy configuration file and pulls IP addresses from all Minions with the "web" grain to add them to the pool of load balanced servers. 
/srv/pillar/top.sls: base: 'G@roles:web': - web /srv/pillar/web.sls: mine_functions: network.ip_addrs: [eth0] /etc/salt/minion.d/mine.conf: mine_interval: 5 /srv/salt/haproxy.sls: haproxy_config: file.managed: - name: /etc/haproxy/config - source: salt://haproxy_config - template: jinja /srv/salt/haproxy_config: <...file contents snipped...> {% for server, addrs in salt['mine.get']('roles:web', 'network.ip_addrs', expr_form='pilla ↲ r').items() %} server {{ server }} {{ addrs[0] }}:80 check {% endfor %} <...file contents snipped...>
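The mine data used above can also be refreshed and inspected ad hoc from the command line, which is handy when verifying a configuration like this one. A sketch, assuming the web role is set as a grain as in the example:

    salt 'web*' mine.update
    salt 'haproxy*' mine.get 'roles:web' network.ip_addrs expr_form=grain

mine.update makes the targeted Minions re-run their mine_functions immediately instead of waiting for the next mine_interval.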
EXTERNAL AUTHENTICATION SYSTEM Salt's External Authentication System (eAuth) allows for Salt to pass through command authorization to any external authentication system, such as PAM or LDAP. NOTE: eAuth using the PAM external auth system requires salt-master to be run as root as this system needs root access to check authentication. Access Control System NOTE: When to Use client_acl and external_auth client_acl is useful for allowing local system users to run Salt commands without giv‐ ing them root access. If you can log into the Salt master directly, then client_acl will allow you to use Salt without root privileges. If the local system is configured to authenticate against a remote system, like LDAP or Active Directory, then client_acl will interact with the remote system transparently. external_auth is useful for salt-api or for making your own scripts that use Salt's Python API. It can be used at the CLI (with the -a flag) but it is more cumbersome as there are more steps involved. The only time it is useful at the CLI is when the local system is not configured to authenticate against an external service but you still want Salt to authenticate against an external service. The external authentication system allows for specific users to be granted access to exe‐ cute specific functions on specific minions. Access is configured in the master configura‐ tion file and uses the access control system: external_auth: pam: thatch: - 'web*': - test.* - network.* steve: - .* The above configuration allows the user thatch to execute functions in the test and net‐ work modules on the minions that match the web* target. User steve is given unrestricted access to minion commands. Salt respects the current PAM configuration in place, and uses the 'login' service to authenticate. NOTE: The PAM module does not allow authenticating as root. To allow access to wheel modules or runner modules the following @ syntax must be used: external_auth: pam: thatch: - '@wheel' # to allow access to all wheel modules - '@runner' # to allow access to all runner modules - '@jobs' # to allow access to the jobs runner and/or wheel module NOTE: The runner/wheel markup is different, since there are no minions to scope the acl to. NOTE: Globs will not match wheel or runners! They must be explicitly allowed with @wheel or @runner. The external authentication system can then be used from the command-line by any user on the same system as the master with the -a option: $ salt -a pam web\* test.ping The system will ask the user for the credentials required by the authentication system and then publish the command. To apply permissions to a group of users in an external authentication system, append a % to the ID: external_auth: pam: admins%: - '*': - 'pkg.*' WARNING: All users that have external authentication privileges are allowed to run saltutil.findjob. Be aware that this could inadvertently expose some data such as min‐ ion IDs. Tokens With external authentication alone, the authentication credentials will be required with every call to Salt. This can be alleviated with Salt tokens. Tokens are short term authorizations and can be easily created by just adding a -T option when authenticating: $ salt -T -a pam web\* test.ping Now a token will be created that has a expiration of 12 hours (by default). This token is stored in a file named salt_token in the active user's home directory. Once the token is created, it is sent with all subsequent communications. 
User authenti‐ cation does not need to be entered again until the token expires. Token expiration time can be set in the Salt master config file. LDAP and Active Directory NOTE: LDAP usage requires that you have installed python-ldap. Salt supports both user and group authentication for LDAP (and Active Directory accessed via its LDAP interface) OpenLDAP and similar systems LDAP configuration happens in the Salt master configuration file. Server configuration values and their defaults: # Server to auth against auth.ldap.server: localhost # Port to connect via auth.ldap.port: 389 # Use TLS when connecting auth.ldap.tls: False # LDAP scope level, almost always 2 auth.ldap.scope: 2 # Server specified in URI format auth.ldap.uri: '' # Overrides .ldap.server, .ldap.port, .ldap.tls above # Verify server's TLS certificate auth.ldap.no_verify: False # Bind to LDAP anonymously to determine group membership # Active Directory does not allow anonymous binds without special configuration auth.ldap.anonymous: False # FOR TESTING ONLY, this is a VERY insecure setting. # If this is True, the LDAP bind password will be ignored and # access will be determined by group membership alone with # the group memberships being retrieved via anonymous bind auth.ldap.auth_by_group_membership_only: False # Require authenticating user to be part of this Organizational Unit # This can be blank if your LDAP schema does not use this kind of OU auth.ldap.groupou: 'Groups' # Object Class for groups. An LDAP search will be done to find all groups of this # class to which the authenticating user belongs. auth.ldap.groupclass: 'posixGroup' # Unique ID attribute name for the user auth.ldap.accountattributename: 'memberUid' # These are only for Active Directory auth.ldap.activedirectory: False auth.ldap.persontype: 'person' There are two phases to LDAP authentication. First, Salt authenticates to search for a users's Distinguished Name and group membership. The user it authenticates as in this phase is often a special LDAP system user with read-only access to the LDAP directory. After Salt searches the directory to determine the actual user's DN and groups, it re-authenticates as the user running the Salt commands. If you are already aware of the structure of your DNs and permissions in your LDAP store are set such that users can look up their own group memberships, then the first and second users can be the same. To tell Salt this is the case, omit the auth.ldap.bindpw parame‐ ter. You can template the binddn like this: auth.ldap.basedn: dc=saltstack,dc=com auth.ldap.binddn: uid={{ username }},cn=users,cn=accounts,dc=saltstack,dc=com Salt will use the password entered on the salt command line in place of the bindpw. To use two separate users, specify the LDAP lookup user in the binddn directive, and include a bindpw like so auth.ldap.binddn: uid=ldaplookup,cn=sysaccounts,cn=etc,dc=saltstack,dc=com auth.ldap.bindpw: mypassword As mentioned before, Salt uses a filter to find the DN associated with a user. Salt sub‐ stitutes the {{ username }} value for the username when querying LDAP auth.ldap.filter: uid={{ username }} For OpenLDAP, to determine group membership, one can specify an OU that contains group data. This is prepended to the basedn to create a search path. Then the results are fil‐ tered against auth.ldap.groupclass, default posixGroup, and the account's 'name' attribute, memberUid by default. 
auth.ldap.groupou: Groups Active Directory Active Directory handles group membership differently, and does not utilize the groupou configuration variable. AD needs the following options in the master config: auth.ldap.activedirectory: True auth.ldap.filter: sAMAccountName={{username}} auth.ldap.accountattributename: sAMAccountName auth.ldap.groupclass: group auth.ldap.persontype: person To determine group membership in AD, the username and password that is entered when LDAP is requested as the eAuth mechanism on the command line is used to bind to AD's LDAP interface. If this fails, then it doesn't matter what groups the user belongs to, he or she is denied access. Next, the distinguishedName of the user is looked up with the fol‐ lowing LDAP search: (&(<value of auth.ldap.accountattributename>={{username}}) (objectClass=<value of auth.ldap.persontype>) ) This should return a distinguishedName that we can use to filter for group membership. Then the following LDAP query is executed: (&(member=<distinguishedName from search above>) (objectClass=<value of auth.ldap.groupclass>) ) external_auth: ldap: test_ldap_user: - '*': - test.ping To configure an LDAP group, append a % to the ID: external_auth: ldap: test_ldap_group%: - '*': - test.echo
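Tying the pieces above together, here is a sketch of a complete LDAP eAuth setup in the master config (the server, base DN, group name, and allowed functions are all illustrative, not values from this manual), after which members of the LDAP group can authenticate from the CLI with -a ldap:

    # illustrative master config values
    auth.ldap.server: ldap.example.com
    auth.ldap.port: 389
    auth.ldap.basedn: dc=example,dc=com
    auth.ldap.binddn: uid=ldaplookup,cn=sysaccounts,cn=etc,dc=example,dc=com
    auth.ldap.bindpw: mypassword
    auth.ldap.filter: uid={{ username }}

    external_auth:
      ldap:
        sysadmins%:        # an example LDAP group
          - '*':
            - test.*
            - network.*

    $ salt -a ldap '*' test.ping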
ACCESS CONTROL SYSTEM
New in version 0.10.4.
Salt maintains a standard system used to grant granular control to non-administrative users to execute Salt commands. The access control system has been applied to all systems used to configure access to non-administrative control interfaces in Salt. These interfaces include the peer system, the external auth system, and the client ACL system.
The access control system mandates a standard configuration syntax used in all three of these systems. While this adds functionality to the configuration in 0.10.4, it does not negate the old configuration.
Now specific functions can be opened up to specific minions from specific users in the case of external auth and client ACLs, and for specific minions in the case of the peer system.
The access controls are manifested using matchers in these configurations:

    client_acl:
      fred:
        - web\*:
          - pkg.list_pkgs
          - test.*
          - apache.*

In the above example, fred is able to send commands only to minions which match the specified glob target. This can be expanded to include other functions for other minions based on standard targets.

    external_auth:
      pam:
        dave:
          - test.ping
          - mongo\*:
            - network.*
          - log\*:
            - network.*
            - pkg.*
          - 'G@os:RedHat':
            - kmod.*
        steve:
          - .*

The above allows dave to run test.ping on all minions, and adds a few functions that dave can execute on other minions. It also allows steve unrestricted access to salt commands.
NOTE: Functions are matched using regular expressions.
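As a quick illustration of the client_acl entry above (reusing the example user fred and the example targets), fred could then run the permitted functions from his own shell account on the master, while anything outside the ACL is rejected. This is a sketch; whether it works without further changes depends on local filesystem permissions on the master for fred's account.

    # run as the local user fred on the master
    $ salt 'web*' test.ping
    $ salt 'web*' pkg.list_pkgs
    # rejected: cmd.run is not in fred's ACL
    $ salt 'web*' cmd.run 'id'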
JOB MANAGEMENT
New in version 0.9.7.
Since Salt executes jobs across many systems, Salt needs to be able to manage the jobs running on all of those systems.
The Minion proc System
Salt Minions maintain a proc directory in the Salt cachedir. The proc directory holds files named after the executed job ID. These files contain the information about the currently running jobs on the minion and allow for jobs to be looked up. With a default configuration the proc directory is located under /var/cache/salt/proc.
Functions in the saltutil Module
Salt 0.9.7 introduced a few new functions to the saltutil module for managing jobs. These functions are:
1. running     Returns the data of all running jobs that are found in the proc directory.
2. find_job    Returns specific data about a certain job based on job id.
3. signal_job  Allows for a given jid to be sent a signal.
4. term_job    Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.
5. kill_job    Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.
These functions make up the core of the back end used to manage jobs at the minion level.
The jobs Runner
A convenience runner front end and reporting system has been added as well. The jobs runner contains functions to make viewing data easier and cleaner.
The jobs runner contains a number of functions...
active
The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned and jobs that are still running, making it easier to see what systems have completed a job and what systems are still being waited on.

    # salt-run jobs.active

lookup_jid
When jobs are executed the return data is sent back to the master and cached. By default it is cached for 24 hours, but this can be configured via the keep_jobs option in the master configuration. Using the lookup_jid runner will display the same return data that the initial job invocation with the salt command would display.

    # salt-run jobs.lookup_jid <job id number>

list_jobs
Before finding a historic job, it may be required to find the job id. list_jobs will parse the cached execution data and display all of the job data for jobs that have already returned or have partially returned.

    # salt-run jobs.list_jobs

Scheduling Jobs
In Salt versions greater than 0.12.0, the scheduling system allows incremental executions on minions or the master. The schedule system exposes the execution of any execution function on minions or any runner on the master.
Scheduling is enabled via the schedule option on either the master or minion config files, or via a minion's pillar data. Schedules that are implemented via pillar data only require the minion's pillar data to be refreshed, for example by using saltutil.refresh_pillar. Schedules implemented in the master or minion config require the respective daemon to be restarted for the schedule to take effect.
NOTE: The scheduler executes different functions on the master and minions. When running on the master the functions reference runner functions, when running on the minion the functions specify execution functions.
A scheduled run has no output on the minion unless the config is set to info level or higher. Refer to minion logging settings.
Specify maxrunning to ensure that there are no more than N copies of a particular routine running.
Use this for jobs that may be long-running and could step on each other or oth‐ erwise double execute. The default for maxrunning is 1. States are executed on the minion, as all states are. You can pass positional arguments and provide a yaml dict of named arguments. schedule: job1: function: state.sls seconds: 3600 args: - httpd kwargs: test: True This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) schedule: job1: function: state.sls seconds: 3600 args: - httpd kwargs: test: True splay: 15 This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 0 and 15 seconds schedule: job1: function: state.sls seconds: 3600 args: - httpd kwargs: test: True splay: start: 10 end: 15 This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 10 and 15 seconds New in version 2014.7.0. Frequency of jobs can also be specified using date strings supported by the python dateu‐ til library. This requires python-dateutil to be installed on the minion. schedule: job1: function: state.sls args: - httpd kwargs: test: True when: 5:00pm This will schedule the command: state.sls httpd test=True at 5:00pm minion localtime. schedule: job1: function: state.sls args: - httpd kwargs: test: True when: - Monday 5:00pm - Tuesday 3:00pm - Wednesday 5:00pm - Thursday 3:00pm - Friday 5:00pm This will schedule the command: state.sls httpd test=True at 5pm on Monday, Wednesday, and Friday, and 3pm on Tuesday and Thursday. schedule: job1: function: state.sls seconds: 3600 args: - httpd kwargs: test: True range: start: 8:00am end: 5:00pm This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) between the hours of 8am and 5pm. The range parameter must be a dictionary with the date strings using the dateutil format. This requires python-dateutil to be installed on the minion. New in version 2014.7.0. The scheduler also supports ensuring that there are no more than N copies of a particular routine running. Use this for jobs that may be long-running and could step on each other or pile up in case of infrastructure outage. The default for maxrunning is 1. schedule: long_running_job: function: big_file_transfer jid_include: True States schedule: log-loadavg: function: cmd.run seconds: 3660 args: - 'logger -t salt < /proc/loadavg' kwargs: stateful: False shell: /bin/sh Highstates To set up a highstate to run on a minion every 60 minutes set this in the minion config or pillar: schedule: highstate: function: state.highstate minutes: 60 Time intervals can be specified as seconds, minutes, hours, or days. Runners Runner executions can also be specified on the master within the master configuration file: schedule: run_my_orch: function: state.orchestrate hours: 6 splay: 600 args: - orchestration.my_orch The above configuration is analogous to running salt-run state.orch orchestration.my_orch every 6 hours. Scheduler With Returner The scheduler is also useful for tasks like gathering monitoring data about a minion, this schedule option will gather status data and send it to a MySQL returner database: schedule: uptime: function: status.uptime seconds: 60 returner: mysql meminfo: function: status.meminfo minutes: 5 returner: mysql Since specifying the returner repeatedly can be tiresome, the schedule_returner option is available to specify one or a list of global returners to be used by the minions when scheduling. 
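Building on the options above, here is a short sketch (the job name, state file, interval, and returner are illustrative) of a schedule entry that caps a long-running state run at a single concurrent copy and sends all scheduled results to a globally configured returner:

    schedule:
      backup_sync:              # illustrative job name
        function: state.sls
        hours: 1
        args:
          - backup.sync         # hypothetical SLS file
        maxrunning: 1
        jid_include: True

    # send results from all scheduled jobs to the mysql returner
    schedule_returner: mysql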
MANAGING THE JOB CACHE
The Salt Master maintains a job cache of all job executions which can be queried via the jobs runner. This job cache is called the Default Job Cache.
Default Job Cache
A number of options are available when configuring the job cache. The default caching system uses local storage on the Salt Master and can be found in the job cache directory (on Linux systems this is typically /var/cache/salt/master/jobs). The default caching system is suitable for most deployments as it does not typically require any further configuration or management.
The default job cache is a temporary cache and jobs will be stored for 24 hours. If the default cache needs to store jobs for a different period, the time can be easily adjusted by changing the keep_jobs parameter in the Salt Master configuration file. The value is specified in hours:

    keep_jobs: 24

Additional Job Cache Options
Many deployments may wish to use an external database to maintain a long term register of executed jobs. Salt comes with two main mechanisms to do this: the master job cache and the external job cache.
See Storing Job Results in an External System.
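For example (a sketch; the retention value and job ID are placeholders), the cache can be extended to three days in the master config and the cached results then inspected with the jobs runner described earlier:

    # keep cached job results for 72 hours instead of the default 24
    keep_jobs: 72

    # salt-run jobs.list_jobs
    # salt-run jobs.lookup_jid 20160404120000123456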
STORING JOB RESULTS IN AN EXTERNAL SYSTEM
After a job executes, job results are returned to the Salt Master by each Salt Minion. These results are stored in the Default Job Cache.
In addition to the Default Job Cache, Salt provides two additional mechanisms to send job results to other systems (databases, local syslog, and others):
· External Job Cache
· Master Job Cache
The major difference between these two mechanisms is where the results are returned from (the Salt Master or the Salt Minion).
External Job Cache - Minion-Side Returner
When an External Job Cache is configured, data is returned to the Default Job Cache on the Salt Master like usual, and then results are also sent to an External Job Cache using a Salt returner module running on the Salt Minion. [image]
· Advantages: Data is stored without placing additional load on the Salt Master.
· Disadvantages: Each Salt Minion connects to the external job cache, which can result in a large number of connections. Also requires additional configuration to get returner module settings on all Salt Minions.
Master Job Cache - Master-Side Returner
New in version 2014.7.0.
Instead of configuring an External Job Cache on each Salt Minion, you can configure the Master Job Cache to send job results from the Salt Master instead. In this configuration, Salt Minions send data to the Default Job Cache as usual, and then the Salt Master sends the data to the external system using a Salt returner module running on the Salt Master. [image]
· Advantages: A single connection is required to the external system. This is preferred for databases and similar systems.
· Disadvantages: Places additional load on your Salt Master.
Configure an External or Master Job Cache
Step 1: Understand Salt Returners
Before you configure a job cache, it is essential to understand Salt returner modules ("returners"). Returners are pluggable Salt Modules that take the data returned by jobs, and then perform any necessary steps to send the data to an external system. For example, a returner might establish a connection, authenticate, and then format and transfer data.
The Salt Returner system provides the core functionality used by the External and Master Job Cache systems, and the same returners are used by both systems.
Salt currently provides many different returners that let you connect to a wide variety of systems. A complete list is available at all Salt returners. Each returner is configured differently, so make sure you read and follow the instructions linked from that page.
For example, the MySQL returner requires:
· A database created using the provided schema (structure is available at MySQL returner)
· A user created with privileges to the database
· Optional SSL configuration
A simpler returner, such as Slack or HipChat, requires:
· An API key/version
· The target channel/room
· The username that should be used to send the message
Step 2: Configure the Returner
After you understand the configuration and have the external system ready, add the returner configuration settings to the Salt Minion configuration file for the External Job Cache, or to the Salt Master configuration file for the Master Job Cache.
For example, MySQL requires:

    mysql.host: 'salt'
    mysql.user: 'salt'
    mysql.pass: 'salt'
    mysql.db: 'salt'
    mysql.port: 3306

Slack requires:

    slack.channel: 'channel'
    slack.api_key: 'key'
    slack.from_name: 'name'

After you have configured the returner and added settings to the configuration file, you can enable the External or Master Job Cache.
Step 3: Enable the External or Master Job Cache Configuration is a single line that specifies an already-configured returner to use to send all job data to an external system. External Job Cache To enable a returner as the External Job Cache (Minion-side), add the following line to the Salt Master configuration file: ext_job_cache: <returner> For example: ext_job_cache: mysql NOTE: When configuring an External Job Cache (Minion-side), the returner settings are added to the Minion configuration file, but the External Job Cache setting is configured in the Master configuration file. Master Job Cache To enable a returner as a Master Job Cache (Master-side), add the following line to the Salt Master configuration file: master_job_cache: <returner> For example: master_job_cache: mysql Verify that the returner configuration settings are in the Master configuration file, and be sure to restart the salt-master service after you make configuration changes. (service salt-master restart).
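Putting steps 2 and 3 together for the Master Job Cache case (a sketch reusing the placeholder MySQL credentials from the example above), the Salt Master configuration carries both the returner settings and the cache line, followed by a restart:

    # /etc/salt/master
    mysql.host: 'salt'
    mysql.user: 'salt'
    mysql.pass: 'salt'
    mysql.db: 'salt'
    mysql.port: 3306
    master_job_cache: mysql

    # then restart the master so the change takes effect
    service salt-master restart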
STORING DATA IN OTHER DATABASES
The SDB interface is designed to store and retrieve data that, unlike pillars and grains, is not necessarily minion-specific. The initial design goal was to allow passwords to be stored in a secure database, such as one managed by the keyring package, rather than as plain-text files. However, as a generic database interface, it could conceptually be used for a number of other purposes.
SDB was added to Salt in version 2014.7.0. SDB is currently experimental, and should probably not be used in production.
SDB Configuration
In order to use the SDB interface, a configuration profile must be set up in either the master or minion configuration file. The configuration stanza includes the name/ID that the profile will be referred to as, a driver setting, and any other arguments that are necessary for the SDB module that will be used. For instance, a profile called mykeyring, which uses the system service in the keyring module, would look like:

    mykeyring:
      driver: keyring
      service: system

It is recommended to keep the name of the profile simple, as it is used in the SDB URI as well.
SDB URIs
SDB is designed to make small database queries (hence the name, SDB) using a compact URL. This allows users to reference a database value quickly inside a number of Salt configuration areas, without a lot of overhead. The basic format of an SDB URI is:

    sdb://<profile>/<args>

The profile refers to the configuration profile defined in either the master or the minion configuration file. The args are specific to the module referred to in the profile, but will typically only need to refer to the key of a key/value pair inside the database. This is because the profile itself should define as many other parameters as possible.
For example, a profile might be set up to reference credentials for a specific OpenStack account. The profile might look like:

    kevinopenstack:
      driver: keyring
      service: salt.cloud.openstack.kevin

And the URI used to reference the password might look like:

    sdb://kevinopenstack/password

Getting and Setting SDB Values
Once an SDB driver is configured, you can use the sdb execution module to set and get values from it. There are two functions that will appear in any SDB module: set and get.
Getting a value requires only the SDB URI to be specified. To retrieve a value from the kevinopenstack profile above, you would use:

    salt-call sdb.get sdb://kevinopenstack/password

Some drivers use slightly more complex URIs. For instance, the vault driver requires the full path to where the key is stored, followed by a question mark, followed by the key to be retrieved. If you were using a profile called myvault, you would use a URI that looks like:

    salt-call sdb.get 'sdb://myvault/secret/salt?saltstack'

Setting a value uses the same URI as would be used to retrieve it, followed by the value as another argument. For the above myvault URI, you would set a new value using a command like:

    salt-call sdb.set 'sdb://myvault/secret/salt?saltstack' 'super awesome'

The sdb.get and sdb.set functions are also available in the runner system:

    salt-run sdb.get 'sdb://myvault/secret/salt?saltstack'
    salt-run sdb.set 'sdb://myvault/secret/salt?saltstack' 'super awesome'

Using SDB URIs in Files
SDB URIs can be used in both configuration files, and files that are processed by the renderer system (jinja, mako, etc.). In a configuration file (such as /etc/salt/master, /etc/salt/minion, /etc/salt/cloud, etc.), make an entry as usual, and set the value to the SDB URI.
For instance:

    mykey: sdb://myetcd/mykey

To retrieve this value using a module, the module in question must use the config.get function to retrieve configuration values. This would look something like:

    mykey = __salt__['config.get']('mykey')

Templating renderers use a similar construct. To get the mykey value from above in Jinja, you would use:

    {{ salt['config.get']('mykey') }}

When retrieving data from configuration files using config.get, the SDB URI need only appear in the configuration file itself.
If you would like to retrieve a key directly from SDB, call the sdb.get function directly, using the SDB URI. For instance, in Jinja:

    {{ salt['sdb.get']('sdb://myetcd/mykey') }}

When writing Salt modules, it is not recommended to call sdb.get directly, as it requires the user to provide values in SDB using a specific URI. Use config.get instead.
Writing SDB Modules
There is currently one function that MUST exist in any SDB module (get()) and one that SHOULD exist (set_()). If using a set_() function, a __func_alias__ dictionary MUST be declared in the module as well:

    __func_alias__ = {
        'set_': 'set',
    }

This is because set is a Python built-in, and therefore functions should not be created which are called set(). The __func_alias__ functionality is provided via Salt's loader interfaces, and allows legally-named functions to be referred to using names that would otherwise be unwise to use.
The get() function is required, as it will be called via functions in other areas of the code which make use of the sdb:// URI. For example, the config.get function in the config execution module uses this function.
The set_() function may be provided, but is not required, as some sources may be read-only, or may be otherwise unwise to access via a URI (for instance, because of SQL injection attacks).
A simple example of an SDB module is salt/sdb/keyring_db.py, as it provides basic examples of most, if not all, of the types of functionality that are available not only for SDB modules, but for Salt modules in general.
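To make the required shape concrete, here is a minimal sketch of a custom SDB module (the module name, file location, and in-memory backing store are hypothetical; a real driver would talk to an external system and would typically read connection details from the profile passed in):

    # /srv/salt/_sdb/memory_db.py  (hypothetical example module)
    # -*- coding: utf-8 -*-
    '''
    Minimal SDB driver sketch that stores values in a module-level dict.
    '''

    __func_alias__ = {
        'set_': 'set',
    }

    # module-level store; real drivers would connect to an external backend,
    # usually using settings taken from the configured SDB profile
    _STORE = {}


    def get(key, profile=None):
        '''
        Return the value for the given key, or None if it is not set.
        '''
        return _STORE.get(key)


    def set_(key, value, profile=None):
        '''
        Store the value and return it, mirroring the behavior of sdb.set.
        '''
        _STORE[key] = value
        return value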
SALT EVENT SYSTEM
The Salt Event System is used to fire off events enabling third party applications or external processes to react to behavior within Salt. The event system is comprised of two primary components:
· The event sockets, which publish events.
· The event library, which can listen to events and send events into the salt system.
Event types
Salt Master Events
These events are fired on the Salt Master event bus. This list is not comprehensive.
Authentication events
salt/auth
Fired when a minion performs an authentication check with the master.
Variables
· id -- The minion ID.
· act -- The current status of the minion key: accept, pend, reject.
· pub -- The minion public key.
NOTE: Minions fire auth events on a fairly regular basis for a number of reasons. Writing reactors to respond to events through the auth cycle can lead to infinite reactor event loops (minion tries to auth, reactor responds by doing something that generates another auth event, minion sends auth event, etc.). Consider reacting to salt/key or salt/minion/<MID>/start or firing a custom event tag instead.
Start events
salt/minion/<MID>/start
Fired every time a minion connects to the Salt master.
Variables
id -- The minion ID.
Key events
salt/key
Fired when accepting and rejecting minion keys on the Salt master.
Variables
· id -- The minion ID.
· act -- The new status of the minion key: accept, pend, reject.
WARNING: If a master is in auto_accept mode, salt/key events will not be fired when the keys are accepted. In addition, pre-seeding keys (as happens through Salt-Cloud) will not cause these events to fire.
Job events
salt/job/<JID>/new
Fired as a new job is sent out to minions.
Variables
· jid -- The job ID.
· tgt -- The target of the job: *, a minion ID, G@os_family:RedHat, etc.
· tgt_type -- The type of targeting used: glob, grain, compound, etc.
· fun -- The function to run on minions: test.ping, network.interfaces, etc.
· arg -- A list of arguments to pass to the function that will be called.
· minions -- A list of minion IDs that Salt expects will return data for this job.
· user -- The name of the user that ran the command as defined in Salt's Client ACL or external auth.
salt/job/<JID>/ret/<MID>
Fired each time a minion returns data for a job.
Variables
· id -- The minion ID.
· jid -- The job ID.
· retcode -- The return code for the job.
· fun -- The function the minion ran. E.g., test.ping.
· return -- The data returned from the execution module.
salt/job/<JID>/prog/<MID>/<RUN NUM>
Fired each time a function in a state run completes execution. Must be enabled using the state_events option.
Variables
· data -- The data returned from the state module function.
· id -- The minion ID.
· jid -- The job ID.
Presence events
salt/presence/present
Events fired on a regular interval about currently connected, newly connected, or recently disconnected minions. Requires the presence_events setting to be enabled.
Variables
present -- A list of minions that are currently connected to the Salt master.
salt/presence/change
Fired when the Presence system detects new minions connect or disconnect.
Variables
· new -- A list of minions that have connected since the last presence event.
· lost -- A list of minions that have disconnected since the last presence event.
Cloud Events
Unlike other Master events, salt-cloud events are not fired on behalf of a Salt Minion. Instead, salt-cloud events are fired on behalf of a VM.
This is because the minion-to-be may not yet exist to fire events, or it may have already been destroyed. This behavior is reflected by the name variable in the event data for salt-cloud events as compared to the id variable for Salt Minion-triggered events.
salt/cloud/<VM NAME>/creating
Fired when salt-cloud starts the VM creation process.
Variables
· name -- the name of the VM being created.
· event -- description of the event.
· provider -- the cloud provider of the VM being created.
· profile -- the cloud profile for the VM being created.
salt/cloud/<VM NAME>/deploying
Fired when the VM is available and salt-cloud begins deploying Salt to the new VM.
Variables
· name -- the name of the VM being created.
· event -- description of the event.
· kwargs -- options available as the deploy script is invoked: conf_file, deploy_command, display_ssh_output, host, keep_tmp, key_filename, make_minion, minion_conf, name, parallel, preseed_minion_keys, script, script_args, script_env, sock_dir, start_action, sudo, tmp_dir, tty, username
salt/cloud/<VM NAME>/requesting
Fired when salt-cloud sends the request to create a new VM.
Variables
· event -- description of the event.
· location -- the location of the VM being requested.
· kwargs -- options available as the VM is being requested: Action, ImageId, InstanceType, KeyName, MaxCount, MinCount, SecurityGroup.1
salt/cloud/<VM NAME>/querying
Fired when salt-cloud queries data for a new instance.
Variables
· event -- description of the event.
· instance_id -- the ID of the new VM.
salt/cloud/<VM NAME>/tagging
Fired when salt-cloud tags a new instance.
Variables
· event -- description of the event.
· tags -- tags being set on the new instance.
salt/cloud/<VM NAME>/waiting_for_ssh
Fired while the salt-cloud deploy process is waiting for ssh to become available on the new instance.
Variables
· event -- description of the event.
· ip_address -- IP address of the new instance.
salt/cloud/<VM NAME>/deploy_script
Fired once the deploy script is finished.
Variables
event -- description of the event.
salt/cloud/<VM NAME>/created
Fired once the new instance has been fully created.
Variables
· name -- the name of the VM being created.
· event -- description of the event.
· instance_id -- the ID of the new instance.
· provider -- the cloud provider of the VM being created.
· profile -- the cloud profile for the VM being created.
salt/cloud/<VM NAME>/destroying
Fired when salt-cloud requests the destruction of an instance.
Variables
· name -- the name of the VM being created.
· event -- description of the event.
· instance_id -- the ID of the new instance.
salt/cloud/<VM NAME>/destroyed
Fired when an instance has been destroyed.
Variables
· name -- the name of the VM being created.
· event -- description of the event.
· instance_id -- the ID of the new instance.
Listening for Events
Salt's Event Bus is used heavily within Salt and it is also written to integrate heavily with existing tooling and scripts. There is a variety of ways to consume it.
From the CLI
The quickest way to watch the event bus is by calling the state.event runner:

    salt-run state.event pretty=True

That runner is designed to interact with the event bus from external tools and shell scripts. See the documentation for more examples.
Remotely via the REST API
Salt's event bus can be consumed via salt.netapi.rest_cherrypy.app.Events as an HTTP stream from external tools or services.
curl -SsNk https://salt-api.example.com:8000/events?token=05A3 From Python Python scripts can access the event bus only as the same system user that Salt is running as. The event system is accessed via the event library and can only be accessed by the same system user that Salt is running as. To listen to events a SaltEvent object needs to be created and then the get_event function needs to be run. The SaltEvent object needs to know the location that the Salt Unix sockets are kept. In the configuration this is the sock_dir option. The sock_dir option defaults to "/var/run/salt/master" on most systems. The following code will check for a single event: import salt.config import salt.utils.event opts = salt.config.client_config('/etc/salt/master') event = salt.utils.event.get_event( 'master', sock_dir=opts['sock_dir'], transport=opts['transport'], opts=opts) data = event.get_event() Events will also use a "tag". Tags allow for events to be filtered by prefix. By default all events will be returned. If only authentication events are desired, then pass the tag "salt/auth". The get_event method has a default poll time assigned of 5 seconds. To change this time set the "wait" option. The following example will only listen for auth events and will wait for 10 seconds instead of the default 5. data = event.get_event(wait=10, tag='salt/auth') To retrieve the tag as well as the event data, pass full=True: evdata = event.get_event(wait=10, tag='salt/job', full=True) tag, data = evdata['tag'], evdata['data'] Instead of looking for a single event, the iter_events method can be used to make a gener‐ ator which will continually yield salt events. The iter_events method also accepts a tag but not a wait time: for data in event.iter_events(tag='salt/auth'): print(data) And finally event tags can be globbed, such as they can be in the Reactor, using the fnmatch library. import fnmatch import salt.config import salt.utils.event opts = salt.config.client_config('/etc/salt/master') sevent = salt.utils.event.get_event( 'master', sock_dir=opts['sock_dir'], transport=opts['transport'], opts=opts) while True: ret = sevent.get_event(full=True) if ret is None: continue if fnmatch.fnmatch(ret['tag'], 'salt/job/*/ret/*'): do_something_with_job_return(ret['data']) Firing Events It is possible to fire events on either the minion's local bus or to fire events intended for the master. To fire a local event from the minion on the command line call the event.fire execution function: salt-call event.fire '{"data": "message to be sent in the event"}' 'tag' To fire an event to be sent up to the master from the minion call the event.send execution function. Remember YAML can be used at the CLI in function arguments: salt-call event.send 'myco/mytag/success' '{success: True, message: "It works!"}' If a process is listening on the minion, it may be useful for a user on the master to fire an event to it: # Job on minion import salt.utils.event event = salt.utils.event.MinionEvent(**__opts__) for evdata in event.iter_events(tag='customtag/'): return evdata # do your processing here... salt minionname event.fire '{"data": "message for the minion"}' 'customtag/african/unladen ↲ ' Firing Events from Python From Salt execution modules Events can be very useful when writing execution modules, in order to inform various pro‐ cesses on the master when a certain task has taken place. 
This is easily done using the normal cross-calling syntax:

    # /srv/salt/_modules/my_custom_module.py

    def do_something():
        '''
        Do something and fire an event to the master when finished

        CLI Example::

            salt '*' my_custom_module.do_something
        '''
        # do something!
        __salt__['event.send']('myco/my_custom_module/finished', {
            'finished': True,
            'message': "The something is finished!",
        })

From Custom Python Scripts
Firing events from custom Python code is quite simple and mirrors how it is done at the CLI:

    import salt.client

    caller = salt.client.Caller()
    caller.sminion.functions['event.send'](
        'myco/myevent/success',
        {
            'success': True,
            'message': "It works!",
        }
    )
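For context, a custom tag like the one fired above can be wired to a reactor on the master. This is a sketch (the SLS path and its contents are illustrative) using the same reactor mapping syntax shown later in the Beacons section:

    # /etc/salt/master
    reactor:
      - 'myco/myevent/success':
        - /srv/reactor/handle_success.sls   # hypothetical reactor SLS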
BEACONS
The beacon system allows the minion to hook into a variety of system processes and continually monitor these processes. When monitored activity occurs in a system process, an event is sent on the Salt event bus that can be used to trigger a reactor.
Salt beacons can currently monitor and send Salt events for many system activities, including:
· file system changes
· system load
· service status
· shell activity, such as user login
· network and disk usage
See beacon modules for a current list.
NOTE: Salt beacons are an event generation mechanism. Beacons leverage the Salt reactor system to make changes when beacon events occur.
Configuring Beacons
Salt beacons do not require any changes to the system process that is being monitored; everything is configured using Salt.
Beacons are typically enabled by placing a beacons: top level block in the minion configuration file:

    beacons:
      inotify:
        /etc/httpd/conf.d: {}
        /opt: {}

The beacon system, like many others in Salt, can also be configured via the minion pillar, grains, or local config file.
Beacon Monitoring Interval
Beacons monitor on a 1-second interval by default. To set a different interval, provide an interval argument to a beacon. The following beacons run on 5- and 10-second intervals:

    beacons:
      inotify:
        /etc/httpd/conf.d: {}
        /opt: {}
        interval: 5
      load:
        - 1m:
          - 0.0
          - 2.0
        - 5m:
          - 0.0
          - 1.5
        - 15m:
          - 0.1
          - 1.0
        - interval: 10

Avoiding Event Loops
It is important to carefully consider the possibility of creating a loop between a reactor and a beacon. For example, one might set up a beacon which monitors whether a file is read, which in turn fires a reactor to run a state, which in turn reads the file and re-fires the beacon.
To avoid these types of scenarios, the disable_during_state_run argument may be set. If a state run is in progress, the beacon will not be run on its regular interval until the minion detects that the state run has completed, at which point the normal beacon interval will resume.

    beacons:
      inotify:
        /etc/passwd: {}
        disable_during_state_run: True

Beacon Example
This example demonstrates configuring the inotify beacon to monitor a file for changes, and then create a backup each time a change is detected.
NOTE: The inotify beacon requires Pyinotify on the minion; install it using salt myminion pkg.install python-inotify.
First, on the Salt minion, add the following beacon configuration to /etc/salt/minion:

    beacons:
      inotify:
        /home/user/importantfile:
          mask:
            - modify

Replace user in the previous example with the name of your user account, and then save the configuration file and restart the minion service.
Next, create a file in your home directory named importantfile and add some simple content. The beacon is now set up to monitor this file for modifications.
View Events on the Master
On your Salt master, start the event runner using the following command:

    salt-run state.event pretty=true

This runner displays events as they are received on the Salt event bus. To test the beacon you set up in the previous section, make and save a modification to the importantfile you created. You'll see an event similar to the following on the event bus:

    salt/beacon/minion1/inotify/home/user/importantfile       {
        "_stamp": "2015-09-09T15:59:37.972753",
        "data": {
            "change": "IN_IGNORED",
            "id": "minion1",
            "path": "/home/user/importantfile"
        },
        "tag": "salt/beacon/minion1/inotify/home/user/importantfile"
    }

This indicates that the event is being captured and sent correctly. Now you can create a reactor to take action when this event occurs.
Create a Reactor
On your Salt master, create a file named /srv/reactor/backup.sls. If the reactor directory doesn't exist, create it. Add the following to backup.sls:

    backup file:
      cmd.file.copy:
        - tgt: {{ data['data']['id'] }}
        - arg:
          - {{ data['data']['path'] }}
          - {{ data['data']['path'] }}.bak

Next, add the code to trigger the reactor to /etc/salt/master:

    reactor:
      - salt/beacon/*/inotify/*/importantfile:
        - /srv/reactor/backup.sls

This reactor creates a backup each time a file named importantfile is modified on a minion that has the inotify beacon configured as previously shown.
NOTE: You can have only one top level reactor section, so if one already exists, add this code to the existing section. See Understanding the Structure of Reactor Formulas to learn more about reactor SLS syntax.
Start the Salt Master in Debug Mode
To help with troubleshooting, start the Salt master in debug mode:

    service salt-master stop
    salt-master -l debug

When debug logging is enabled, event and reactor data are displayed so you can discover syntax and other issues.
Trigger the Reactor
On your minion, make and save another change to importantfile. On the Salt master, you'll see debug messages that indicate the event was received and the file.copy job was sent. When you list the directory on the minion, you'll now see importantfile.bak.
All beacons are configured using a similar process of enabling the beacon, writing a reactor SLS, and mapping a beacon event to the reactor SLS.
Writing Beacon Plugins
Beacon plugins use the standard Salt loader system, meaning that many of the constructs from other plugin systems hold true, such as the __virtual__ function.
The important function in the Beacon Plugin is the beacon function. When the beacon is configured to run, this function will be executed repeatedly by the minion. The beacon function therefore cannot block and should be as lightweight as possible. The beacon also must return a list of dicts; each dict in the list will be translated into an event on the master.
Please see the inotify beacon as an example.
The beacon Function
The beacons system will look for a function named beacon in the module. If this function is not present then the beacon will not be fired. This function is called on a regular basis and defaults to being called on every iteration of the minion, which can be tens to hundreds of times a second. This means that the beacon function cannot block and should not be CPU or IO intensive.
The beacon function will be passed in the configuration for the executed beacon. This makes it easy to establish a flexible configuration for each called beacon. This is also the preferred way to ingest the beacon's configuration as it allows for the configuration to be dynamically updated while the minion is running by configuring the beacon in the minion's pillar.
The Beacon Return
The information returned from the beacon is expected to follow a predefined structure. The returned value needs to be a list of dictionaries (standard python dictionaries are preferred, no ordered dicts are needed).
The dictionaries represent individual events to be fired on the minion and master event buses. Each dict is a single event. The dict can contain any arbitrary keys but the 'tag' key will be extracted and added to the tag of the fired event.
The return data structure would look something like this:

    [{'changes': ['/foo/bar'], 'tag': 'foo'},
     {'changes': ['/foo/baz'], 'tag': 'bar'}]

Calling Execution Modules
Execution modules are still the preferred location for all work and system interaction to happen in Salt. For this reason the __salt__ variable is available inside the beacon.
Please be careful when calling functions in __salt__: while this is the preferred means of executing complicated routines in Salt, not all of the execution modules have been written with beacons in mind. Watch out for execution modules that may be CPU intensive or IO bound. Please feel free to add new execution modules and functions to back specific beacons.
Distributing Custom Beacons
Custom beacons can be distributed to minions using saltutil, see Dynamic Module Distribution.
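To make the beacon contract concrete, here is a minimal sketch of a custom beacon module (the module name, file path, and configuration keys are illustrative). It follows the rules above: a fast, non-blocking beacon(config) function that returns a list of dicts, with the 'tag' key folded into the fired event tag:

    # /srv/salt/_beacons/filesize.py  (hypothetical example beacon)
    # -*- coding: utf-8 -*-
    '''
    Fire an event when a watched file grows beyond a configured size.

    Example minion configuration (keys are illustrative):

        beacons:
          filesize:
            path: /var/log/messages
            limit: 10000000
    '''
    import os

    __virtualname__ = 'filesize'


    def __virtual__():
        return __virtualname__


    def beacon(config):
        '''
        Check the configured path and return a (possibly empty) list of events.
        '''
        ret = []
        path = config.get('path')
        limit = config.get('limit', 0)
        if path and os.path.isfile(path):
            size = os.path.getsize(path)
            if size > limit:
                ret.append({'tag': 'oversize', 'path': path, 'size': size})
        return ret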
SALT ENGINES New in version 2015.8.0. Salt Engines are long-running, external system processes that leverage Salt. · Engines have access to Salt configuration, execution modules, and runners (__opts__, __salt__, and __runners__). · Engines are executed in a separate process that is monitored by Salt. If a Salt engine stops, it is restarted automatically. · Engines can run on the Salt master and on Salt minions. Salt engines enhance and replace the external processes functionality. Configuration Salt engines are configured under an engines top-level section in your Salt master or Salt minion configuration. Provide a list of engines and parameters under this section. engines: - logstash: host: log.my_network.com port: 5959 Salt engines must be in the Salt path, or you can add the engines_dir option in your Salt master configuration with a list of directories under which Salt attempts to find Salt engines. Writing an Engine An example Salt engine, https://github.com/saltstack/salt/blob/develop/salt/engines/test.py, is available in the Salt source. To develop an engine, the only requirement is that your module implement the start() function.
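As a minimal sketch (the module name, event tag, and interval parameter are illustrative, and the event.send call assumes the engine runs where that execution module is available, since engines are given __salt__ as noted above), an engine only needs a start() function, which receives the parameters from its configuration entry:

    # A hypothetical engine module, e.g. heartbeat.py placed in a
    # directory listed under the engines_dir option
    import time


    def start(interval=30):
        '''
        Periodically fire a heartbeat event onto the Salt bus.
        '''
        while True:
            # __salt__ is injected by the Salt loader for engines
            __salt__['event.send']('myco/engine/heartbeat', {'alive': True})
            time.sleep(interval)

With a matching configuration entry:

    engines:
      - heartbeat:
          interval: 30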
RUNNING CUSTOM MASTER PROCESSES
NOTE: Salt engines are a new feature in 2015.8.0 that let you run custom processes on the Salt master and on Salt minions. Salt engines provide more functionality than ext_processes by accepting arguments, and by providing access to Salt config, execution modules, and runners.
In addition to the processes that the Salt master automatically spawns, it is possible to configure it to start additional custom processes.
This is useful if a dedicated process is needed that should run throughout the life of the Salt master. For periodic independent tasks, a scheduled runner may be more appropriate.
Processes started in this way will be restarted if they die and will be killed when the Salt master is shut down.
Example Configuration
Processes are declared in the master config file with the ext_processes option. Processes will be started in the order they are declared.

    ext_processes:
      - mymodule.TestProcess
      - mymodule.AnotherProcess

Example Process Class

    # Import python libs
    import time
    import logging
    from multiprocessing import Process

    # Import Salt libs
    from salt.utils.event import SaltEvent

    log = logging.getLogger(__name__)


    class TestProcess(Process):
        def __init__(self, opts):
            Process.__init__(self)
            self.opts = opts

        def run(self):
            self.event = SaltEvent('master', self.opts['sock_dir'])
            i = 0

            while True:
                # fire a numbered event onto the master event bus once a minute
                self.event.fire_event({'iteration': i}, 'ext_processes/test{0}'.format(i))
                i += 1
                time.sleep(60)
HIGH AVAILABILITY FEATURES IN SALT Salt supports several features for high availability and fault tolerance. Brief documen‐ tation for these features is listed alongside their configuration parameters in Configura‐ tion file examples. Multimaster Salt minions can connect to multiple masters at one time by configuring the master config‐ uration parameter as a YAML list of all the available masters. By default, all masters are "hot", meaning that any master can direct commands to the Salt infrastructure. In a multimaster configuration, each master must have the same cryptographic keys, and minion keys must be accepted on all masters separately. The contents of file_roots and pillar_roots need to be kept in sync with processes external to Salt as well A tutorial on setting up multimaster with "hot" masters is here: Multimaster Tutorial Multimaster with Failover Changing the master_type parameter from str to failover will cause minions to connect to the first responding master in the list of masters. Every master_alive_check seconds the minions will check to make sure the current master is still responding. If the master does not respond, the minion will attempt to connect to the next master in the list. If the minion runs out of masters, the list will be recycled in case dead masters have been restored. Note that master_alive_check must be present in the minion configuration, or else the recurring job to check master status will not get scheduled. Failover can be combined with PKI-style encrypted keys, but PKI is NOT REQUIRED to use failover. Multimaster with PKI and Failover is discussed in this tutorial master_type: failover can be combined with master_shuffle: True to spread minion connec‐ tions across all masters (one master per minion, not each minion connecting to all mas‐ ters). Adding Salt Syndics into the mix makes it possible to create a load-balanced Salt infrastructure. If a master fails, minions will notice and select another master from the available list. Syndic Salt's Syndic feature is a way to create differing infrastructure topologies. It is not strictly an HA feature, but can be treated as such. With the syndic, a Salt infrastructure can be partitioned in such a way that certain mas‐ ters control certain segments of the infrastructure, and "Master of Masters" nodes can control multiple segments underneath them. Syndics are covered in depth in Salt Syndic. Syndic with Multimaster New in version 2015.5.0. Syndic with Multimaster lets you connect a syndic to multiple masters to provide an addi‐ tional layer of redundancy in a syndic configuration. Syndics are covered in depth in Salt Syndic.
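For illustration (the addresses and check interval are placeholders), a minion configured for "hot" multimaster simply lists its masters, and the failover variant adds the options discussed above:

    # /etc/salt/minion -- hot multimaster
    master:
      - 192.168.10.1
      - 192.168.10.2

    # /etc/salt/minion -- failover variant
    master:
      - 192.168.10.1
      - 192.168.10.2
    master_type: failover
    master_alive_check: 30
    master_shuffle: True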
SALT SYNDIC The most basic or typical Salt topology consists of a single Master node controlling a group of Minion nodes. An intermediate node type, called Syndic, when used offers greater structural flexibility and scalability in the construction of Salt topologies than topolo‐ gies constructed only out of Master and Minion node types. A Syndic node can be thought of as a special passthrough Minion node. A Syndic node con‐ sists of a salt-syndic daemon and a salt-master daemon running on the same system. The salt-master daemon running on the Syndic node controls a group of lower level Minion nodes and the salt-syndic daemon connects higher level Master node, sometimes called a Master of Masters. The salt-syndic daemon relays publications and events between the Master node and the local salt-master daemon. This gives the Master node control over the Minion nodes attached to the salt-master daemon running on the Syndic node. Configuring the Syndic To setup a Salt Syndic you need to tell the Syndic node and its Master node about each other. If your Master node is located at 10.10.0.1, then your configurations would be: On the Syndic node: # /etc/salt/master syndic_master: 10.10.0.1 # may be either an IP address or a hostname # /etc/salt/minion # id is shared by the salt-syndic daemon and a possible salt-minion daemon # on the Syndic node id: my_syndic On the Master node: # /etc/salt/master order_masters: True The syndic_master option tells the Syndic node where to find the Master node in the same way that the master option tells a Minion node where to find a Master node. The id option is used by the salt-syndic daemon to identify with the Master node and if unset will default to the hostname or IP address of the Syndic just as with a Minion. The order_masters option configures the Master node to send extra information with its publications that is needed by Syndic nodes connected directly to it. NOTE: Each Syndic must provide its own file_roots directory. Files will not be automatically transferred from the Master node. Configuring the Syndic with Multimaster New in version 2015.5.0. Syndic with Multimaster lets you connect a syndic to multiple masters to provide an addi‐ tional layer of redundancy in a syndic configuration. Higher level masters should first be configured in a multimaster configuration. See Mul‐ timaster Tutorial. On the syndic, the syndic_master option is populated with a list of the higher level mas‐ ters. Since each syndic is connected to each master, jobs sent from any master are forwarded to minions that are connected to each syndic. If the master_id value is set in the master config on the higher level masters, job results are returned to the master that originated the request in a best effort fashion. Events/jobs without a master_id are returned to any available master. Running the Syndic The salt-syndic daemon is a separate process that needs to be started in addition to the salt-master daemon running on the Syndic node. Starting the salt-syndic daemon is the same as starting the other Salt daemons. The Master node in many ways sees the Syndic as an ordinary Minion node. In particular, the Master will need to accept the Syndic's Minion key as it would for any other Minion. On the Syndic node: # salt-syndic or # service salt-syndic start On the Master node: # salt-key -a my_syndic The Master node will now be able to control the Minion nodes connected to the Syndic. 
Only the Syndic key will be listed in the Master node's key registry but this also means that key activity between the Syndic's Minions and the Syndic does not encumber the Master node. In this way, the Syndic's key on the Master node can be thought of as a placeholder for the keys of all the Minion and Syndic nodes beneath it, giving the Master node a clear, high level structural view on the Salt cluster. On the Master node: # salt-key -L Accepted Keys: my_syndic Denied Keys: Unaccepted Keys: Rejected Keys: # salt '*' test.ping minion_1: True minion_2: True minion_4: True minion_3: True Topology A Master node (a node which is itself not a Syndic to another higher level Master node) must run a salt-master daemon and optionally a salt-minion daemon. A Syndic node must run salt-syndic and salt-master daemons and optionally a salt-minion daemon. A Minion node must run a salt-minion daemon. When a salt-master daemon issues a command, it will be received by the Syndic and Minion nodes directly connected to it. A Minion node will process the command in the way it ordinarily would. On a Syndic node, the salt-syndic daemon will relay the command to the salt-master daemon running on the Syndic node, which then propagates the command to to the Minions and Syndics connected to it. When events and job return data are generated by salt-minion daemons, they are aggregated by the salt-master daemon they are connected to, which salt-master daemon then relays the data back through its salt-syndic daemon until the data reaches the Master or Syndic node that issued the command. Syndic wait NOTE: To reduce the amount of time the CLI waits for Minions to respond, install a Minion on the Syndic or tune the value of the syndic_wait configuration. While it is possible to run a Syndic without a Minion installed on the same system, it is recommended, for a faster CLI response time, to do so. Without a Minion installed on the Syndic node, the timeout value of syndic_wait increases significantly - about three-fold. With a Minion installed on the Syndic, the CLI timeout resides at the value defined in syndic_wait. NOTE: If you have a very large infrastructure or many layers of Syndics, you may find that the CLI doesn't wait long enough for the Syndics to return their events. If you think this is the case, you can set the syndic_wait value in the Master configs on the Master or Syndic nodes from which commands are executed. The default value is 5, and should work for the majority of deployments. In order for a Master or Syndic node to return information from Minions that are below their Syndics, the CLI requires a short wait time in order to allow the Syndics to gather responses from their Minions. This value is defined in the syndic_wait config option and has a default of five seconds. Syndic config options These are the options that can be used to configure a Syndic node. Note that other than id, Syndic config options are placed in the Master config on the Syndic node. · id: Syndic id (shared by the salt-syndic daemon with a potential salt-minion daemon on the same system) · syndic_master: Master node IP address or hostname · syndic_master_port: Master node ret_port · syndic_log_file: path to the logfile (absolute or not) · syndic_pidfile: path to the pidfile (absolute or not) · syndic_wait: time in seconds to wait on returns from this syndic
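Collecting the options listed above into one place (the values are placeholders, and syndic_master_port should match the higher level Master's ret_port), the Syndic-related settings live in the Master config on the Syndic node, with id optionally set in the minion config on the same system:

    # /etc/salt/master on the Syndic node
    syndic_master: 10.10.0.1
    syndic_master_port: 4506
    syndic_log_file: /var/log/salt/syndic
    syndic_pidfile: /var/run/salt-syndic.pid
    syndic_wait: 5

    # /etc/salt/minion on the Syndic node
    id: my_syndic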
SALT PROXY MINION Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not. Proxy minions are not an "out of the box" feature. Because there are an infinite number of controllable devices, you will most likely have to write the interface yourself. Fortu‐ nately, this is only as difficult as the actual interface to the proxied device. Devices that have an existing Python module (PyUSB for example) would be relatively simple to interface. Code to control a device that has an HTML REST-based interface should be easy. Code to control your typical housecat would be excellent source material for a PhD thesis. Salt proxy-minions provide the 'plumbing' that allows device enumeration and discovery, control, status, remote execution, and state management. See the Proxy Minion Walkthrough for an end-to-end demonstration of a working proxy min‐ ion. See the Proxy Minion SSH Walkthrough for an end-to-end demonstration of a working SSH proxy minion. New in 2015.8.2 BREAKING CHANGE: Adding the proxymodule variable to __opts__ is deprecated. The proxy‐ module variable has been moved a new globally-injected variable called __proxy__. A related configuration option called add_proxymodule_to_opts has been added and defaults to True. In the next major release, codenamed Boron, this variable will default to False. In the meantime, proxies that functioned under 2015.8.0 and .1 should continue to work under 2015.8.2. You should rework your proxy code to use __proxy__ as soon as possible. The rest_sample example proxy minion has been updated to use __proxy__. This change was made because proxymodules are a LazyLoader object, but LazyLoaders cannot be serialized. __opts__ gets serialized, and so things like saltutil.sync_all and state.highstate would throw exceptions. Also in this release, proxymodules can be stored on the master in /srv/salt/_proxy. A new saltutil function called sync_proxies will transfer these to remote proxy minions. Note that you must restart the salt-proxy daemon to pick up these changes. In addition, a salt.utils helper function called is_proxy() was added to make it easier to tell when the running minion is a proxy minion. New in 2015.8 Starting with the 2015.8 release of Salt, proxy processes are no longer forked off from a controlling minion. Instead, they have their own script salt-proxy which takes mostly the same arguments that the standard Salt minion does with the addition of --proxyid. This is the id that the salt-proxy will use to identify itself to the master. Proxy configura‐ tions are still best kept in Pillar and their format has not changed. This change allows for better process control and logging. Proxy processes can now be listed with standard process management utilities (ps from the command line). Also, a full Salt minion is no longer required (though it is still strongly recommended) on machines hosting proxies. Getting Started The following diagram may be helpful in understanding the structure of a Salt installation that includes proxy-minions: [image] The key thing to remember is the left-most section of the diagram. Salt's nature is to have a minion connect to a master, then the master may control the minion. However, for proxy minions, the target device cannot run a minion. 
After the proxy minion is started and initiates its connection to the 'dumb' device, it connects back to the salt-master and for all intents and purposes looks like just another minion to the Salt master. To create support for a proxied device one needs to create four things: 1. The proxy_connection_module (located in salt/proxy). 2. The grains support code (located in salt/grains). 3. Salt modules specific to the controlled device. 4. Salt states specific to the controlled device. Configuration parameters Proxy minions require no configuration parameters in /etc/salt/master. Salt's Pillar system is ideally suited for configuring proxy-minions. Proxies can either be designated via a pillar file in pillar_roots, or through an external pillar. External pillars afford the opportunity for interfacing with a configuration management system, database, or other knowledgeable system that that may already contain all the details of proxy targets. To use static files in pillar_roots, pattern your files after the follow‐ ing examples, which are based on the diagram above: /srv/pillar/top.sls base: dumbdevice1: - dumbdevice1 dumbdevice2: - dumbdevice2 dumbdevice3: - dumbdevice3 dumbdevice4: - dumbdevice4 dumbdevice5: - dumbdevice5 dumbdevice6: - dumbdevice6 dumbdevice7: - dumbdevice7 /srv/pillar/dumbdevice1.sls proxy: proxytype: networkswitch host: 172.23.23.5 username: root passwd: letmein /srv/pillar/dumbdevice2.sls proxy: proxytype: networkswitch host: 172.23.23.6 username: root passwd: letmein /srv/pillar/dumbdevice3.sls proxy: proxytype: networkswitch host: 172.23.23.7 username: root passwd: letmein /srv/pillar/dumbdevice4.sls proxy: proxytype: i2c_lightshow i2c_address: 1 /srv/pillar/dumbdevice5.sls proxy: proxytype: i2c_lightshow i2c_address: 2 /srv/pillar/dumbdevice6.sls proxy: proxytype: 433mhz_wireless /srv/pillar/dumbdevice7.sls proxy: proxytype: sms_serial deventry: /dev/tty04 Note the contents of each minioncontroller key may differ widely based on the type of device that the proxy-minion is managing. In the above example · dumbdevices 1, 2, and 3 are network switches that have a management interface available at a particular IP address. · dumbdevices 4 and 5 are very low-level devices controlled over an i2c bus. In this case the devices are physically connected to machine 'minioncontroller2', and are addressable on the i2c bus at their respective i2c addresses. · dumbdevice6 is a 433 MHz wireless transmitter, also physically connected to minioncon‐ troller2 · dumbdevice7 is an SMS gateway connected to machine minioncontroller3 via a serial port. Because of the way pillar works, each of the salt-proxy processes that fork off the proxy minions will only see the keys specific to the proxies it will be handling. Also, in general, proxy-minions are lightweight, so the machines that run them could con‐ ceivably control a large number of devices. To run more than one proxy from a single machine, simply start an additional proxy process with --proxyid set to the id to which you want the proxy to bind. It is possible for the proxy services to be spread across many machines if necessary, or intentionally run on machines that need to control devices because of some physical interface (e.g. i2c and serial above). Another reason to divide proxy services might be security. In more secure environments only certain machines may have a network path to certain devices. Proxymodules A proxy module encapsulates all the code necessary to interface with a device. 
Proxymod‐ ules are located inside the salt.proxy module. At a minimum a proxymodule object must implement the following functions: __virtual__(): This function performs the same duty that it does for other types of Salt modules. Logic goes here to determine if the module can be loaded, checking for the pres‐ ence of Python modules on which the proxy depends. Returning False will prevent the mod‐ ule from loading. init(opts): Perform any initialization that the device needs. This is a good place to bring up a persistent connection to a device, or authenticate to create a persistent authorization token. shutdown(): Code to cleanly shut down or close a connection to a controlled device goes here. This function must exist, but can contain only the keyword pass if there is no shutdown logic required. ping(): While not required, it is highly recommended that this function also be defined in the proxymodule. The code for ping should contact the controlled device and make sure it is really available. Pre 2015.8 the proxymodule also must have an id() function. 2015.8 and following don't use this function because the proxy's id is required on the command line. id(opts): Returns a unique, unchanging id for the controlled device. This is the "name" of the device, and is used by the salt-master for targeting and key authentication. Here is an example proxymodule used to interface to a very simple REST server. Code for the server is in the salt-contrib GitHub repository This proxymodule enables "service" enumeration, starting, stopping, restarting, and sta‐ tus; "package" installation, and a ping. # -*- coding: utf-8 -*- ''' This is a simple proxy-minion designed to connect to and communicate with the bottle-based web service contained in https://github.com/saltstack/salt-contrib/proxyminion_rest_example ''' from __future__ import absolute_import # Import python libs import logging import salt.utils.http HAS_REST_EXAMPLE = True # This must be present or the Salt loader won't load this module __proxyenabled__ = ['rest_sample'] # Variables are scoped to this module so we can have persistent data # across calls to fns in here. GRAINS_CACHE = {} DETAILS = {} # Want logging! log = logging.getLogger(__file__) # This does nothing, it's here just as an example and to provide a log # entry when the module is loaded. def __virtual__(): ''' Only return if all the modules are available ''' log.debug('rest_sample proxy __virtual__() called...') return True # Every proxy module needs an 'init', though you can # just put a 'pass' here if it doesn't need to do anything. def init(opts): log.debug('rest_sample proxy init() called...') # Save the REST URL DETAILS['url'] = opts['proxy']['url'] # Make sure the REST URL ends with a '/' if not DETAILS['url'].endswith('/'): DETAILS['url'] += '/' def id(opts): ''' Return a unique ID for this proxy minion. This ID MUST NOT CHANGE. 
If it changes while the proxy is running the salt-master will get really confused and may stop talking to this minion ''' r = salt.utils.http.query(opts['proxy']['url']+'id', decode_type='json', decode=True) return r['dict']['id'].encode('ascii', 'ignore') def grains(): ''' Get the grains from the proxied device ''' if not GRAINS_CACHE: r = salt.utils.http.query(DETAILS['url']+'info', decode_type='json', decode=True) GRAINS_CACHE = r['dict'] return GRAINS_CACHE def grains_refresh(): ''' Refresh the grains from the proxied device ''' GRAINS_CACHE = {} return grains() def service_start(name): ''' Start a "service" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/start/'+name, decode_type='json', de ↲ code=True) return r['dict'] def service_stop(name): ''' Stop a "service" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/stop/'+name, decode_type='json', dec ↲ ode=True) return r['dict'] def service_restart(name): ''' Restart a "service" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/restart/'+name, decode_type='json', ↲ decode=True) return r['dict'] def service_list(): ''' List "services" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/list', decode_type='json', decode=Tr ↲ ue) return r['dict'] def service_status(name): ''' Check if a service is running on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/status/'+name, decode_type='json', d ↲ ecode=True) return r['dict'] def package_list(): ''' List "packages" installed on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'package/list', decode_type='json', decode=Tr ↲ ue) return r['dict'] def package_install(name, **kwargs): ''' Install a "package" on the REST server ''' cmd = DETAILS['url']+'package/install/'+name if 'version' in kwargs: cmd += '/'+kwargs['version'] else: cmd += '/1.0' r = salt.utils.http.query(cmd, decode_type='json', decode=True) def package_remove(name): ''' Remove a "package" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'package/remove/'+name, decode_type='json', d ↲ ecode=True) return r['dict'] def package_status(name): ''' Check the installation status of a package on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'package/status/'+name, decode_type='json', d ↲ ecode=True) return r['dict'] def ping(): ''' Is the REST server up? ''' r = salt.utils.http.query(DETAILS['url']+'ping', decode_type='json', decode=True) try: return r['dict'].get('ret', False) except Exception: return False def shutdown(opts): ''' For this proxy shutdown is a no-op ''' log.debug('rest_sample proxy shutdown() called...') pass Grains are data about minions. Most proxied devices will have a paltry amount of data as compared to a typical Linux server. By default, a proxy minion will have several grains taken from the host. Salt core code requires values for kernel, os, and os_family--all of these are forced to be proxy for proxy-minions. To add others to your proxy minion for a particular device, create a file in salt/grains named [proxytype].py and place inside it the different functions that need to be run to collect the data you are interested in. Here's an example: The __proxyenabled__ directive Salt execution modules, by, and large, cannot "automatically" work with proxied devices. Execution modules like pkg or sqlite3 have no meaning on a network switch or a housecat. 
For an execution module to be available to a proxy-minion, the __proxyenabled__ variable must be defined in the module as an array containing the names of all the proxytypes that this module can support. The array can contain the special value * to indicate that the module supports all proxies. If no __proxyenabled__ variable is defined, then by default, the execution module is unavailable to any proxy. Here is an excerpt from a module that was modified to support proxy-minions: __proxyenabled__ = ['*'] [...] def ping(): if not salt.utils.is_proxy(): return True else: ping_cmd = __opts__['proxy']['proxytype'] + '.ping' if __opts__.get('add_proxymodule_to_opts', False): return __opts__['proxymodule'][ping_cmd]() else: return __proxy__[ping_cmd]() And then in salt.proxy.rest_sample.py we find def ping(): ''' Is the REST server up? ''' r = salt.utils.http.query(DETAILS['url']+'ping', decode_type='json', decode=True) try: return r['dict'].get('ret', False) except Exception: return False Salt Proxy Minion End-to-End Example The following is walkthrough that documents how to run a sample REST service and configure one or more proxy minions to talk to and control it. 1. Ideally, create a Python virtualenv in which to run the REST service. This is not strictly required, but without a virtualenv you will need to install bottle via pip globally on your system 2. Clone https://github.com/saltstack/salt-contrib and copy the contents of the directory proxyminion_rest_example somewhere on a machine that is reachable from the machine on which you want to run the salt-proxy. This machine needs Python 2.7 or later. 3. Install bottle version 0.12.8 via pip or easy_install pip install bottle==0.12.8 4. Run python rest.py --help for usage 5. Start the REST API on an appropriate port and IP. 6. Load the REST service's status page in your browser by going to the IP/port combination (e.g. http://127.0.0.1:8000) 7. You should see a page entitled "Salt Proxy Minion" with two sections, one for "ser‐ vices" and one for "packages" and you should see a log entry in the terminal where you started the REST process indicating that the index page was retrieved. [image] Now, configure your salt-proxy. 1. Edit /etc/salt/proxy and add an entry for your master's location master: localhost 2. On your salt-master, ensure that pillar is configured properly. Select an ID for your proxy (in this example we will name the proxy with the letter 'p' followed by the port the proxy is answering on). In your pillar topfile, place an entry for your proxy: base: 'p8000': - p8000 This says that Salt's pillar should load some values for the proxy p8000 from the file /srv/pillar/p8000.sls (if you have not changed your default pillar_roots) 3. In the pillar root for your base environment, create this file: p8000.sls --------- proxy: proxytype: rest_sample url: http://<IP your REST listens on>:port In other words, if your REST service is listening on port 8000 on 127.0.0.1 the 'url' key above should say url: http://127.0.0.1:8000 4. Make sure your salt-master is running. 5. Start the salt-proxy in debug mode salt-proxy --proxyid=p8000 -l debug 6. Accept your proxy's key on your salt-master salt-key -y -a p8000 The following keys are going to be accepted: Unaccepted Keys: p8000 Key for minion p8000 accepted. 7. Now you should be able to ping your proxy. When you ping, you should see a log entry in the terminal where the REST service is running. salt p8000 test.ping 8. 
The REST service implements a degenerately simple pkg and service provider as well as a small set of grains. To "install" a package, use a standard pkg.install. If you pass '==' and a verrsion number after the package name then the service will parse that and accept that as the package's version. 9. Try running salt p8000 grains.items to see what grains are available. You can target proxies via grains if you like. 10. You can also start and stop the available services (apache, redbull, and postgresql with service.start, etc. 11. States can be written to target the proxy. Feel free to experiment with them. SSH Proxymodules See above for a general introduction to writing proxy modules. All of the guidelines that apply to REST are the same for SSH. This sections specifically talks about the SSH proxy module and explains the working of the example proxy module ssh_sample. Here is a simple example proxymodule used to interface to a device over SSH. Code for the SSH shell is in the salt-contrib GitHub repository This proxymodule enables "package" installation. # -*- coding: utf-8 -*- ''' This is a simple proxy-minion designed to connect to and communicate with a server that exposes functionality via SSH. This can be used as an option when the device does not provide an api over HTTP and doesn't have the python stack to run a minion. ''' from __future__ import absolute_import # Import python libs import json import logging # Import Salt's libs from salt.utils.vt_helper import SSHConnection from salt.utils.vt import TerminalException # This must be present or the Salt loader won't load this module __proxyenabled__ = ['ssh_sample'] DETAILS = {} # Want logging! log = logging.getLogger(__file__) # This does nothing, it's here just as an example and to provide a log # entry when the module is loaded. def __virtual__(): ''' Only return if all the modules are available ''' log.info('ssh_sample proxy __virtual__() called...') return True def init(opts): ''' Required. Can be used to initialize the server connection. ''' try: DETAILS['server'] = SSHConnection(host=__opts__['proxy']['host'], username=__opts__['proxy']['username'], password=__opts__['proxy']['password']) # connected to the SSH server out, err = DETAILS['server'].sendline('help') except TerminalException as e: log.error(e) return False def shutdown(opts): ''' Disconnect ''' DETAILS['server'].close_connection() def parse(out): ''' Extract json from out. Parameter out: Type string. The data returned by the ssh command. 
''' jsonret = [] in_json = False for ln_ in out.split('\n'): if '{' in ln_: in_json = True if in_json: jsonret.append(ln_) if '}' in ln_: in_json = False return json.loads('\n'.join(jsonret)) def package_list(): ''' List "packages" by executing a command via ssh This function is called in response to the salt command ..code-block::bash salt target_minion pkg.list_pkgs ''' # Send the command to execute out, err = DETAILS['server'].sendline('pkg_list') # "scrape" the output and return the right fields as a dict return parse(out) def package_install(name, **kwargs): ''' Install a "package" on the REST server ''' cmd = 'pkg_install ' + name if 'version' in kwargs: cmd += '/'+kwargs['version'] else: cmd += '/1.0' # Send the command to execute out, err = DETAILS['server'].sendline(cmd) # "scrape" the output and return the right fields as a dict return parse(out) def package_remove(name): ''' Remove a "package" on the REST server ''' cmd = 'pkg_remove ' + name # Send the command to execute out, err = DETAILS['server'].sendline(cmd) # "scrape" the output and return the right fields as a dict return parse(out) Connection Setup The init() method is responsible for connection setup. It uses the host, username and password config variables defined in the pillar data. The prompt kwarg can be passed to SSHConnection if your SSH server's prompt differs from the example's prompt (Cmd). Instan‐ tiating the SSHConnection class establishes an SSH connection to the ssh server (using Salt VT). Command execution The package_* methods use the SSH connection (established in init()) to send commands out to the SSH server. The sendline() method of SSHConnection class can be used to send com‐ mands out to the server. In the above example we send commands like pkg_list or pkg_install. You can send any SSH command via this utility. Output parsing Output returned by sendline() is a tuple of strings representing the stdout and the stderr respectively. In the toy example shown we simply scrape the output and convert it to a python dictionary, as shown in the parse method. You can tailor this method to match your parsing logic. Connection teardown The shutdown method is responsible for calling the close_connection() method of SSHConnec‐ tion class. This ends the SSH connection to the server. For more information please refer to class SSHConnection. Salt Proxy Minion SSH End-to-End Example The following is walkthrough that documents how to run a sample SSH service and configure one or more proxy minions to talk to and control it. 1. This walkthrough uses a custom SSH shell to provide an end to end example. Any other shells can be used too. 2. Setup the proxy command shell as shown https://github.com/saltstack/salt-contrib/tree/master/proxyminion_ssh_example Now, configure your salt-proxy. 1. Edit /etc/salt/proxy and add an entry for your master's location master: localhost add_proxymodule_to_opts: False multiprocessing: False 2. On your salt-master, ensure that pillar is configured properly. Select an ID for your proxy (in this example we will name the proxy with the letter 'p' followed by the port the proxy is answering on). In your pillar topfile, place an entry for your proxy: base: 'p8000': - p8000 This says that Salt's pillar should load some values for the proxy p8000 from the file /srv/pillar/p8000.sls (if you have not changed your default pillar_roots) 3. In the pillar root for your base environment, create this file: p8000.sls --------- proxy: proxytype: ssh_sample host: saltyVM username: salt password: badpass 4. 
Make sure your salt-master is running.
5. Start the salt-proxy in debug mode
    salt-proxy --proxyid=p8000 -l debug
6. Accept your proxy's key on your salt-master
    salt-key -y -a p8000
    The following keys are going to be accepted:
    Unaccepted Keys:
    p8000
    Key for minion p8000 accepted.
7. Now you should be able to run commands on your proxy.
    salt p8000 pkg.list_pkgs
8. The SSH shell implements a degenerately simple pkg. To "install" a package, use a standard pkg.install. If you pass '==' and a version number after the package name then the service will parse that and accept that as the package's version.
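Once the proxy's key is accepted, the proxy can also be driven from Python through Salt's local client, exactly as any other minion would be. The following is a minimal sketch, assuming it runs on the salt-master and that the proxy id p8000 from the walkthrough above has been accepted; the package name passed to pkg.install is a placeholder for whatever the toy pkg provider accepts.
    # Sketch: drive the proxy from the master via the Python API.
    import salt.client

    client = salt.client.LocalClient()

    # Equivalent to `salt p8000 pkg.list_pkgs` on the command line.
    print(client.cmd('p8000', 'pkg.list_pkgs'))

    # "Install" a package through the proxy; 'apache' is a placeholder name.
    print(client.cmd('p8000', 'pkg.install', ['apache']))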
SALT PACKAGE MANAGER
The Salt Package Manager, or SPM, allows Salt formulas to be packaged, for ease of deployment. The design of SPM was influenced by other existing packaging systems including RPM, Yum, and Pacman.
Building Packages
Before SPM can install packages, they must be built. The source for these packages is often a Git repository, such as those found at the saltstack-formulas organization on GitHub.
FORMULA
In addition to the formula itself, a FORMULA file must exist which describes the package. An example of this file is:
    name: apache
    os: RedHat, Debian, Ubuntu, Suse, FreeBSD
    os_family: RedHat, Debian, Suse, FreeBSD
    version: 201506
    release: 2
    summary: Formula for installing Apache
    description: Formula for installing Apache
Required Fields
This file must contain at least the following fields:
name
    The name of the package, as it will appear in the package filename, in the repository metadata, and the package database. Even if the source formula has -formula in its name, this name should probably not include that. For instance, when packaging the apache-formula, the name should be set to apache.
os
    The value of the os grain that this formula supports. This is used to help users know which operating systems can support this package.
os_family
    The value of the os_family grain that this formula supports. This is used to help users know which operating system families can support this package.
version
    The version of the package. While it is up to the organization that manages this package, it is suggested that this version is specified in a YYYYMM format. For instance, if this version was released in June 2015, the package version should be 201506. If multiple releases are made in a month, the release field should be used.
minimum_version
    Minimum recommended version of Salt to use this formula. Not currently enforced.
release
    This field refers primarily to a release of a version, but also to multiple versions within a month. In general, if a version has been made public, and immediate updates need to be made to it, this field should also be updated.
summary
    A one-line description of the package.
description
    A more detailed description of the package which can contain more than one line.
Optional Fields
The following fields may also be present.
top_level_dir
    This field is optional, but highly recommended. If it is not specified, the package name will be used.
    Formula repositories typically do not store .sls files in the root of the repository; instead they are stored in a subdirectory. For instance, an apache-formula repository would contain a directory called apache, which would contain an init.sls, plus a number of other related files. In this instance, the top_level_dir should be set to apache.
    Files outside the top_level_dir, such as README.rst, FORMULA, and LICENSE will not be installed. The exceptions to this rule are files that are already treated specially, such as pillar.example and _modules/.
recommended
    A list of optional packages that are recommended to be installed with the package. This list is displayed in an informational message when the package is installed to SPM.
Building a Package
Once a FORMULA file has been created, it is placed into the root of the formula that is to be turned into a package. The spm build command is used to turn that formula into a package:
    spm build /path/to/saltstack-formulas/apache-formula
The resulting file will be placed in the build directory. By default this directory is located at /srv/spm/.
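Because a FORMULA file is plain YAML, a small script can sanity-check it before running spm build. The sketch below is illustrative only, not part of SPM: the required-field list simply mirrors the description above, and the file path it is run against is an assumption.
    # Hypothetical pre-build check for a FORMULA file; not part of SPM itself.
    import sys
    import yaml

    REQUIRED = ('name', 'os', 'os_family', 'version', 'release',
                'summary', 'description')

    def check_formula(path):
        with open(path) as fh:
            data = yaml.safe_load(fh) or {}
        missing = [field for field in REQUIRED if field not in data]
        if missing:
            print('FORMULA is missing required fields: ' + ', '.join(missing))
            return False
        # Only warn if the version does not look like the suggested YYYYMM form.
        version = str(data['version'])
        if not version.isdigit() or len(version) != 6:
            print('warning: version {0} is not in YYYYMM form'.format(version))
        return True

    if __name__ == '__main__':
        # e.g. python check_formula.py /path/to/apache-formula/FORMULA
        sys.exit(0 if check_formula(sys.argv[1]) else 1)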
Building Repositories Once one or more packages have been built, they can be made available to SPM via a package repository. Place the packages into the directory to be served and issue an spm cre‐ ate_repo command: spm create_repo /srv/spm This command is used, even if repository metadata already exists in that directory. SPM will regenerate the repository metadata again, using all of the packages in that direc‐ tory. Configuring Remote Repositories Before SPM can use a repository, two things need to happen. First, SPM needs to know where the repositories are. Then it needs to pull down the repository metadata. Repository Configuration Files Normally repository configuration files are placed in the /etc/salt/spm.repos.d. These files contain the name of the repository, and the link to that repository: my_repo: url: https://spm.example.com/ The URL can use http, https, ftp, or file. local_repo: url: file:///srv/spm Updating Local Repository Metadata Once the repository is configured, its metadata needs to be downloaded. At the moment, this is a manual process, using the spm update_repo command. spm update_repo Installing Packages Packages may be installed either from a local file, or from an SPM repository. To install from a repository, use the spm install command: spm install apache To install from a local file, use the spm local install command: spm local install /srv/spm/apache-201506-1.spm Currently, SPM does not check to see if files are already in place before installing them. That means that existing files will be overwritten without warning. Pillars Formula packages include a pillar.example file. Rather than being placed in the formula directory, this file is renamed to <formula name>.sls.orig and placed in the pillar_path, where it can be easily updated to meet the user's needs. Loader Modules When an execution module is placed in <file_roots>/_modules/ on the master, it will auto‐ matically be synced to minions, the next time a sync operation takes place. Other modules are also propagated this way: state modules can be placed in _states/, and so on. When SPM detects a file in a package which resides in one of these directories, that directory will be placed in <file_roots> instead of in the formula directory with the rest of the files. Removing Packages Packages may be removed once they are installed using the spm remove command. spm remove apache If files have been modified, they will not be removed. Empty directories will also be removed. Technical Information Packages are built using BZ2-compressed tarballs. By default, the package database is stored using the sqlite3 driver (see Loader Modules below). Support for these are built into Python, and so no external dependencies are needed. All other files belonging to SPM use YAML, for portability and ease of use and maintain‐ ability. SPM-Specific Loader Modules SPM was designed to behave like traditional package managers, which apply files to the filesystem and store package metadata in a local database. However, because modern infra‐ structures often extend beyond those use cases, certain parts of SPM have been broken out into their own set of modules. Package Database By default, the package database is stored using the sqlite3 module. This module was cho‐ sen because support for SQLite3 is built into Python itself. Please see the SPM Development Guide for information on creating new modules for package database management. Package Files By default, package files are installed using the local module. 
This module applies files to the local filesystem, on the machine that the package is installed on.
Please see the SPM Development Guide for information on creating new modules for package file management.
SPM Configuration
There are a number of options that are specific to SPM. They may be configured in the master configuration file, or in SPM's own spm configuration file (normally located at /etc/salt/spm). If configured in both places, the spm file takes precedence. In general, these values will not need to be changed from the defaults.
spm_logfile
    Default: /var/log/salt/spm
    Where SPM logs messages.
spm_repos_config
    Default: /etc/salt/spm.repos
    SPM repositories are configured with this file. There is also a directory which corresponds to it, which ends in .d. For instance, if the filename is /etc/salt/spm.repos, the directory will be /etc/salt/spm.repos.d/.
spm_cache_dir
    Default: /var/cache/salt/spm
    When SPM updates package repository metadata and downloads packages, they will be placed in this directory. The package database, normally called packages.db, also lives in this directory.
spm_db
    Default: /var/cache/salt/spm/packages.db
    The location and name of the package database. This database stores the names of all of the SPM packages installed on the system, the files that belong to them, and the metadata for those files.
spm_build_dir
    Default: /srv/spm
    When packages are built, they will be placed in this directory.
spm_build_exclude
    Default: ['.git']
    When SPM builds a package, it normally adds all files in the formula directory to the package. Files listed here will be excluded from that package. This option requires a list to be specified.
    spm_build_exclude:
      - .git
      - .svn
Types of Packages
SPM supports different types of formula packages. The function of each package is denoted by its name. For instance, packages which end in -formula are considered to be Salt States (the most common type of formula). Packages which end in -conf contain configuration which is to be placed in the /etc/salt/ directory. Packages which do not contain one of these names are treated as if they have a -formula name.
formula
    By default, most files from this type of package live in the /srv/spm/salt/ directory. The exception is the pillar.example file, which will be renamed to <package_name>.sls and placed in the pillar directory (/srv/spm/pillar/ by default).
reactor
    By default, files from this type of package live in the /srv/spm/reactor/ directory.
conf
    The files in this type of package are configuration files for Salt, which normally live in the /etc/salt/ directory. Configuration files for packages other than Salt can and should be handled with a Salt State (using a formula type of package).
SPM Development Guide
This document discusses developing additional code for SPM.
SPM-Specific Loader Modules
SPM was designed to behave like traditional package managers, which apply files to the filesystem and store package metadata in a local database. However, because modern infrastructures often extend beyond those use cases, certain parts of SPM have been broken out into their own set of modules.
Each function that accepts arguments has a set of required and optional arguments. Take note that SPM will pass all arguments in, and therefore each function must accept each of those arguments. However, arguments that are marked as required are crucial to SPM's core functionality, while arguments that are marked as optional are provided as a benefit to the module, if it needs to use them.
Package Database
By default, the package database is stored using the sqlite3 module. This module was chosen because support for SQLite3 is built into Python itself.
Modules for managing the package database are stored in the salt/spm/pkgdb/ directory. A number of functions must exist to support database management.
init()
    Get a database connection, and initialize the package database if necessary.
    This function accepts no arguments. If a database is used which supports a connection object, then that connection object is returned. For instance, the sqlite3 module returns a connect() object from the sqlite3 library:
        conn = sqlite3.connect(__opts__['spm_db'], isolation_level=None)
        ...
        return conn
    SPM itself will not use this connection object; it will be passed in as-is to the other functions in the module. Therefore, when you set up this object, make sure to do so in a way that is easily usable throughout the module.
info()
    Return information for a package. This generally consists of the information that is stored in the FORMULA file in the package.
    The arguments that are passed in, in order, are package (required) and conn (optional).
    package is the name of the package, as specified in the FORMULA. conn is the connection object returned from init().
list_files()
    Return a list of files for an installed package. Only the filename should be returned, and no other information.
    The arguments that are passed in, in order, are package (required) and conn (optional).
    package is the name of the package, as specified in the FORMULA. conn is the connection object returned from init().
register_pkg()
    Register a package in the package database. Nothing is expected to be returned from this function.
    The arguments that are passed in, in order, are name (required), formula_def (required), and conn (optional).
    name is the name of the package, as specified in the FORMULA. formula_def is the contents of the FORMULA file, as a dict. conn is the connection object returned from init().
register_file()
    Register a file in the package database. Nothing is expected to be returned from this function.
    The arguments that are passed in are name (required), member (required), path (required), digest (optional), and conn (optional).
    name is the name of the package. member is a tarfile object for the package file. It is included, because it contains most of the information for the file. path is the location of the file on the local filesystem. digest is the SHA1 checksum of the file. conn is the connection object returned from init().
unregister_pkg()
    Unregister a package from the package database. This usually only involves removing the package's record from the database. Nothing is expected to be returned from this function.
    The arguments that are passed in, in order, are name (required) and conn (optional).
    name is the name of the package, as specified in the FORMULA. conn is the connection object returned from init().
unregister_file()
    Unregister a file from the package database. This usually only involves removing the file's record from the database. Nothing is expected to be returned from this function.
    The arguments that are passed in, in order, are name (required), pkg (optional) and conn (optional).
    name is the path of the file, as it was installed on the filesystem. pkg is the name of the package that the file belongs to. conn is the connection object returned from init().
db_exists()
    Check to see whether the package database already exists, given the path to the package database file.
    This function will return True or False. The only argument that is expected is db_, which is the package database file.
Package Files
By default, package files are installed using the local module. This module applies files to the local filesystem, on the machine that the package is installed on.
Modules for managing package files are stored in the salt/spm/pkgfiles/ directory. A number of functions must exist to support file management.
init()
    Initialize the installation location for the package files. Normally these will be directory paths, but other external destinations such as databases can be used. For this reason, this function will return a connection object, which can be a database object. However, in the default local module, this object is a dict containing the paths. This object will be passed into all other functions.
    Three directories are used for the destinations: formula_path, pillar_path, and reactor_path.
    formula_path is the location of most of the files that will be installed. The default is specific to the operating system, but is normally /srv/salt/.
    pillar_path is the location that the pillar.example file will be installed to. The default is specific to the operating system, but is normally /srv/pillar/.
    reactor_path is the location that reactor files will be installed to. The default is specific to the operating system, but is normally /srv/reactor/.
check_existing()
    Check the filesystem for existing files. All files for the package will be checked, and if any already exist, then this function will normally state that SPM will refuse to install the package.
    This function returns a list of the files that exist on the system.
    The arguments that are passed into this function are, in order: package (required), pkg_files (required), formula_def (required), and conn (optional).
    package is the name of the package that is to be installed. pkg_files is a list of the files to be checked. formula_def is a copy of the information that is stored in the FORMULA file. conn is the file connection object.
install_file()
    Install a single file to the destination (normally on the filesystem). This function returns the final location that the file was installed to.
    The arguments that are passed into this function are, in order, package (required), formula_tar (required), member (required), formula_def (required), and conn (optional).
    package is the name of the package that is to be installed. formula_tar is the tarfile object for the package. This is passed in so that the function can call formula_tar.extract() for the file. member is the tarfile object which represents the individual file. This may be modified as necessary, before being passed into formula_tar.extract(). formula_def is a copy of the information from the FORMULA file. conn is the file connection object.
remove_file()
    Remove a single file from the file system. Normally this will be little more than an os.remove(). Nothing is expected to be returned from this function.
    The arguments that are passed into this function are, in order, path (required) and conn (optional).
    path is the absolute path to the file to be removed. conn is the file connection object.
hash_file()
    Returns the hexdigest hash value of a file.
    The arguments that are passed into this function are, in order, path (required), hashobj (required), and conn (optional).
    path is the absolute path to the file.
hashobj is a reference to hashlib.sha1(), which is used to pull the hexdigest() for the file. conn is the file connection object. This function will not generally be more complex than: def hash_file(path, hashobj, conn=None): with salt.utils.fopen(path, 'r') as f: hashobj.update(f.read()) return hashobj.hexdigest() path_exists() Check to see whether the file already exists on the filesystem. Returns True or False. This function expects a path argument, which is the absolute path to the file to be checked. path_isdir() Check to see whether the path specified is a directory. Returns True or False. This function expects a path argument, which is the absolute path to be checked.
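Putting the interface above together, a custom package-files module might look like the minimal sketch below. This is not shipped with SPM: the in-memory destination paths are placeholders, pkg_files is assumed here to be a list of tarfile members, and a real module would write to whatever backend it actually targets.
    # Hypothetical skeleton for a salt/spm/pkgfiles/ module; names and
    # signatures follow the interface described above.
    import os
    import os.path


    def init(**kwargs):
        # For a filesystem backend the "connection" object is just a dict of
        # destination paths, mirroring the default local module.
        return {
            'formula_path': kwargs.get('formula_path', '/srv/salt'),
            'pillar_path': kwargs.get('pillar_path', '/srv/pillar'),
            'reactor_path': kwargs.get('reactor_path', '/srv/reactor'),
        }


    def check_existing(package, pkg_files, formula_def, conn=None):
        # Return the files from pkg_files that already exist on disk.
        # pkg_files is assumed to be a list of tarfile members.
        if conn is None:
            conn = init()
        existing = []
        for member in pkg_files:
            out_file = os.path.join(conn['formula_path'], member.name)
            if os.path.exists(out_file):
                existing.append(out_file)
        return existing


    def install_file(package, formula_tar, member, formula_def, conn=None):
        # Extract one member of the package tarball; return its final location.
        if conn is None:
            conn = init()
        out_path = conn['formula_path']
        formula_tar.extract(member, out_path)
        return os.path.join(out_path, member.name)


    def remove_file(path, conn=None):
        # Remove a single installed file.
        if os.path.exists(path):
            os.remove(path)


    def hash_file(path, hashobj, conn=None):
        # Return the hexdigest of a file, as used by the package database.
        with open(path, 'rb') as fh:
            hashobj.update(fh.read())
        return hashobj.hexdigest()


    def path_exists(path):
        return os.path.exists(path)


    def path_isdir(path):
        return os.path.isdir(path)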
SALT TRANSPORT
One of the fundamental features of Salt is remote execution. Salt has two basic "channels" for communicating with minions. Each channel requires a client (minion) and a server (master) implementation to work within Salt. These pairs of channels will work together to implement the specific message passing required by the channel interface.
Pub Channel
The pub channel, or publish channel, is how a master sends a job (payload) to a minion. This is a basic pub/sub paradigm, which has specific targeting semantics. All data which goes across the publish system should be encrypted such that only members of the Salt cluster can decrypt the publishes.
Req Channel
The req channel is how the minions send data to the master. This interface is primarily used for fetching files and returning job returns. The req channels have two basic interfaces when talking to the master. send is the basic method that guarantees the message is encrypted at least so that only minions attached to the same master can read it, but makes no guarantee of minion-master confidentiality, whereas the crypted_transfer_decode_dictentry method does guarantee minion-master confidentiality.
Zeromq Transport
NOTE: Zeromq is the current default transport within Salt
Zeromq is a messaging library with bindings into many languages. Zeromq implements a socket interface for message passing, with specific semantics for the socket type.
Pub Channel
The pub channel is implemented using zeromq's pub/sub sockets. By default we don't use zeromq's filtering, which means that all publish jobs are sent to all minions and filtered minion side. Zeromq does have publisher-side filtering which can be enabled in salt using zmq_filtering.
Req Channel
The req channel is implemented using zeromq's req/rep sockets. These sockets enforce a send/recv pattern, which forces salt to serialize messages through these socket pairs. This means that although the interface is asynchronous on the minion, we cannot send a second message until we have received the reply to the first message.
TCP Transport
The "tcp" transport is an implementation of Salt's channels using raw tcp sockets. Since this isn't using a pre-defined messaging library, we will describe the wire protocol, message semantics, etc. in this document.
Wire Protocol
This implementation over TCP focuses on flexibility over absolute efficiency. This means we are okay to spend a couple of bytes of wire space for flexibility in the future. That being said, the wire framing is quite efficient and looks like:
    len(payload) msgpack({'head': SOMEHEADER, 'body': SOMEBODY})
The wire protocol is basically two parts: the length of the payload, and a payload (which is a msgpack'd dict). Within that payload we have two items, "head" and "body". Head contains header information (such as "message id"). The Body contains the actual message that we are sending.
With this flexible wire protocol we can implement any message semantics that we'd like, including multiplexed message passing on a single socket.
Crypto
The current implementation uses the same crypto as the zeromq transport.
Pub Channel
For the pub channel we send messages without "message ids", which the remote end interprets as a one-way send.
NOTE: As of today we send all publishes to all minions and rely on minion-side filtering.
Req Channel
For the req channel we send messages with a "message id". This "message id" allows us to multiplex messages across the socket.
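To make the framing described above concrete, here is a toy encoder/decoder for the "length of payload plus msgpack'd head/body dict" idea. This is not Salt's actual wire code; in particular, the fixed 4-byte big-endian length prefix is an assumption made purely for the sketch.
    # Toy illustration of "len(payload) + msgpack({'head': ..., 'body': ...})".
    # The 4-byte length prefix is an assumption for this sketch only.
    import struct
    import msgpack


    def frame(head, body):
        payload = msgpack.packb({'head': head, 'body': body}, use_bin_type=True)
        return struct.pack('>I', len(payload)) + payload


    def unframe(wire):
        (length,) = struct.unpack('>I', wire[:4])
        payload = wire[4:4 + length]
        return msgpack.unpackb(payload, raw=False)


    if __name__ == '__main__':
        # A message id in the head is what allows multiplexing on one socket.
        raw = frame({'mid': 1}, {'cmd': 'test.ping'})
        print(unframe(raw))   # {'head': {'mid': 1}, 'body': {'cmd': 'test.ping'}}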
The RAET Transport
NOTE: The RAET transport is in very early development; it is functional but no promises are yet made as to its reliability or security. As for reliability and security, the encryption used has been audited and our tests show that raet is reliable. With this said, we are still conducting more security audits and pushing the reliability. This document outlines the encryption used in RAET.
New in version 2014.7.0.
The Reliable Asynchronous Event Transport, or RAET, is an alternative transport medium developed specifically with Salt in mind. It has been developed to allow queuing to happen up on the application layer and comes with socket layer encryption. It also abstracts a great deal of control over the socket layer and makes it easy to bubble up errors and exceptions.
RAET also offers very powerful message routing capabilities, allowing for messages to be routed between processes on a single machine all the way up to processes on multiple machines. Messages can also be restricted, allowing processes to be sent messages of specific types from specific sources, allowing for trust to be established.
Using RAET in Salt
Using RAET in Salt is easy; the main difference is that the core dependencies change. Instead of needing pycrypto, M2Crypto, ZeroMQ, and PYZMQ, the packages libsodium, libnacl, ioflo, and raet are required. Encryption is handled very cleanly by libnacl, while the queueing and flow control is handled by ioflo. Distribution packages are forthcoming, but libsodium can be easily installed from source, or many distributions do ship packages for it. The libnacl and ioflo packages can be easily installed from pypi; distribution packages are in the works.
Once the new deps are installed, the 2014.7 release or higher of Salt needs to be installed.
Once installed, modify the configuration files for the minion and master to set the transport to raet:
/etc/salt/master:
    transport: raet
/etc/salt/minion:
    transport: raet
Now start salt as it would normally be started; the minion will connect to the master and share long term keys, which can then in turn be managed via salt-key. Remote execution and salt states will function in the same way as with Salt over ZeroMQ.
Limitations
The 2014.7 release of RAET is not complete! The Syndic and Multi Master have not been completed yet and these are slated for completion in the 2015.5.0 release.
Also, Salt-Raet allows for more control over the client, but these hooks have not been implemented yet, therefore the client still uses the same system as the ZeroMQ client. This means that the extra reliability that RAET exposes has not yet been implemented in the CLI client.
Why?
Customer and User Request
Why make an alternative transport for Salt? There are many reasons, but the primary motivation came from customer requests: many large companies came with requests to run Salt over an alternative transport. The reasoning was varied, from performance and scaling improvements to licensing concerns. These customers have partnered with SaltStack to make RAET a reality.
More Capabilities
RAET has been designed to allow salt to have greater communication capabilities. It has been designed to allow for development into features which our ZeroMQ topologies can't match.
Many of the proposed features are still under development and will be announced as they enter proof of concept phases, but these features include salt-fuse - a filesystem over salt, salt-vt - a parallel api driven shell over the salt transport, and many others.
RAET Reliability
RAET is reliable, hence the name (Reliable Asynchronous Event Transport).
The concern posed by some over RAET reliability is based on the fact that RAET uses UDP instead of TCP and UDP does not have built in reliability. RAET itself implements the needed reliability layers that are not natively present in UDP; this allows RAET to dynamically optimize packet delivery in a way that keeps it both reliable and asynchronous.
RAET and ZeroMQ
When using RAET, ZeroMQ is not required. RAET is a complete networking replacement. It is noteworthy that RAET is not a ZeroMQ replacement in a general sense; the ZeroMQ constructs are not reproduced in RAET, but they are instead implemented in such a way that is specific to Salt's needs.
RAET is primarily an async communication layer over truly async connections, defaulting to UDP. ZeroMQ is over TCP and abstracts async constructs within the socket layer.
Salt is not dropping ZeroMQ support and has no immediate plans to do so.
Encryption
RAET uses Dan Bernstein's NACL encryption libraries and CurveCP handshake. The libnacl python binding binds to both libsodium and tweetnacl to execute the underlying cryptography. This allows us to completely rely on an externally developed cryptography system.
Programming Intro
Intro to RAET Programming
NOTE: This page is still under construction
The first thing to cover is that RAET does not present a socket api; it presents a queueing api. All messages in RAET are made available via queues. This is the single most differentiating factor with RAET vs other networking libraries: instead of making a socket, a stack is created. Instead of calling send() or recv(), messages are placed on the stack to be sent and messages that are received appear on the stack.
Different kinds of stacks are also available. Currently two stacks exist, the UDP stack and the UXD stack. The UDP stack is used to communicate over udp sockets, and the UXD stack is used to communicate over Unix Domain Sockets.
The UDP stack runs a context for communicating over networks, while the UXD stack has contexts for communicating between processes.
UDP Stack Messages
To create a UDP stack in RAET, simply create the stack, manage the queues, and process messages:
    from salt.transport.road.raet import stacking
    from salt.transport.road.raet import estating

    udp_stack = stacking.StackUdp(ha=('127.0.0.1', 7870))
    r_estate = estating.Estate(stack=udp_stack, name='foo', ha=('192.168.42.42', 7870))
    msg = {'hello': 'world'}
    udp_stack.transmit(msg, udp_stack.estates[r_estate.name])
    udp_stack.serviceAll()
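Receiving works the same way in reverse: after serviceAll() runs, any messages that have arrived sit on the stack's receive queue rather than being handed back from a recv() call. The loop below sketches that pattern; the rxMsgs attribute name is an assumption based on later raet releases and may not match this early API exactly.
    # Hypothetical receive loop; rxMsgs as a deque of received messages is an
    # assumption about this early API and may differ in practice.
    import time

    while True:
        udp_stack.serviceAll()          # push pending tx, pull pending rx
        while udp_stack.rxMsgs:         # drain anything that has arrived
            msg = udp_stack.rxMsgs.popleft()
            print(msg)
        time.sleep(0.1)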
WINDOWS SOFTWARE REPOSITORY
NOTE: In 2015.8.0 and later, the Windows Software Repository cache is compiled on the Salt Minion, which enables pillar, grains and other things to be available during compilation time. To support this new functionality, a next-generation (ng) package repository was created. See the Changes in Version 2015.8.0 for details.
The SaltStack Windows Software Repository provides a package manager and software repository similar to what is provided by yum and apt on Linux. This repository enables the installation of software using the installers on remote Windows systems.
In many senses, the operation is similar to that of the other package managers salt is aware of:
· the pkg.installed and similar states work on Windows.
· the pkg.install and similar module functions work on Windows.
High level differences to yum and apt are:
· The repository metadata (SLS files) is hosted through either salt or git.
· Packages can be downloaded from within the salt repository, a git repository or from http(s) or ftp urls.
· No dependencies are managed. Dependencies between packages need to be managed manually.
Requirements:
· GitPython 0.3 or later, or pygit2 0.20.3 with libgit 0.20.0 or later installed on your Salt master. The Windows package definitions are downloaded and updated using Git.
Configuration
Populate the Repository
The SLS files used to install Windows packages are not distributed by default with Salt. Run the following command to initialize the repository on your Salt master:
    salt-run winrepo.update_git_repos
Sync Repo to Windows Minions
Run pkg.refresh_db on each of your Windows minions to synchronize the package repository.
    salt -G 'os:windows' pkg.refresh_db
Install Windows Software
After completing the configuration steps, you are ready to manage software on your Windows minions.
Show Installed Packages
    salt -G 'os:windows' pkg.list_pkgs
Install a Package
You can query the available version of a package using the Salt pkg module.
    salt winminion pkg.available_version firefox
    {'firefox': {'15.0.1': 'Mozilla Firefox 15.0.1 (x86 en-US)',
                 '16.0.2': 'Mozilla Firefox 16.0.2 (x86 en-US)',
                 '17.0.1': 'Mozilla Firefox 17.0.1 (x86 en-US)'}}
As you can see, there are three versions of Firefox available for installation. You can refer to a software package by its name or by its full_name surrounded by single quotes.
    salt winminion pkg.install 'firefox'
The above line will install the latest version of Firefox.
    salt winminion pkg.install 'firefox' version=16.0.2
The above line will install version 16.0.2 of Firefox.
If a different version of the package is already installed, it will be replaced with the version in the winrepo (only if the package itself supports live updating).
You can also specify the full name:
    salt winminion pkg.install 'Mozilla Firefox 17.0.1 (x86 en-US)'
Uninstall Windows Software
Uninstall software using the pkg module:
    salt winminion pkg.remove firefox
    salt winminion pkg.purge firefox
NOTE: pkg.purge just executes pkg.remove on Windows. At some point in the future pkg.purge may direct the installer to remove all configs and settings for software packages that support that option.
Repository Location
Salt maintains a repository of SLS files to install a large number of Windows packages:
· 2015.8.0 and later minions: https://github.com/saltstack/salt-winrepo-ng
· Earlier releases: https://github.com/saltstack/salt-winrepo
By default, these repositories are mirrored to /srv/salt/win/repo_ng and /srv/salt/win/repo.
This location can be changed in the master config file by setting the winrepo_dir_ng and winrepo_dir options. Maintaining Windows Repo Definitions in Git Repositories Windows software package definitions can be hosted in one or more Git repositories. The default repositories are hosted on GitHub by SaltStack. These include software definition files for various open source software projects. These software definition files are .sls files. There are two default repositories: salt-winrepo and salt-winrepo-ng. salt-winrepo contains software definition files for older minions (older than 2015.8.0). salt-win‐ repo-ng is for newer minions (2015.8.0 and newer). Each software definition file contains all the information salt needs to install that software on a minion including the HTTP or FTP locations of the installer files, required command-line switches for silent install, etc. Anyone is welcome to send a pull request to this repo to add new package definitions. The repos can be browsed here: salt-winrepo salt-winrepo-ng NOTE: The newer software definition files are run through the salt's parser which allows for the use of jinja. Configure which git repositories the master can search for package definitions by modify‐ ing or extending the winrepo_remotes and winrepo_remotes_ng options. IMPORTANT: winrepo_remotes was called win_gitrepos in Salt versions earlier than 2015.8.0 Package definitions are pulled down from the online repository by running the win‐ repo.update_git_repos runner. This command is run on the master: salt-run winrepo.update_git_repos This will pull down the software definition files for older minions (salt-winrepo) and new minions (salt-winrepo-ng). They are stored in the file_roots under win/repo/salt-winrepo and win/repo-ng/salt-winrepo-ng respectively. IMPORTANT: If you have customized software definition files that aren't maintained in a reposi‐ tory, those should be stored under win/repo for older minions and win/repo-ng for newer minions. The reason for this is that the contents of win/repo/salt-winrepo and win/repo-ng/salt-winrepo-ng are wiped out every time you run a win‐ repo.update_git_repos. Additionally, when you run winrepo.genrepo and pkg.refresh_db the entire contents under win/repo and win/repo-ng, to include all subdirectories, are used to create the msgpack file. The next step (if you have older minions) is to create the msgpack file for the repo (win‐ repo.p). This is done by running the winrepo.genrepo runner. This is also run on the mas‐ ter: salt-run winrepo.genrepo NOTE: If you have only 2015.8.0 and newer minions, you no longer need to run salt-run win‐ repo.genrepo on the master. Finally, you need to refresh the minion database by running the pkg.refresh_db command. This is run on the master as well: salt '*' pkg.refresh_db On older minions (older than 2015.8.0) this will copy the winrepo.p file down to the min‐ ion. On newer minions (2015.8.0 and newer) this will copy all the software definition files (.sls) down to the minion and then create the msgpack file (winrepo.p) locally. The reason this is done locally is because the jinja needs to be parsed using the minion's grains. IMPORTANT: Every time you modify the software definition files on the master, either by running salt-run winrepo.update_git_repos, modifying existing files, or by creating your own, you need to refresh the database on your minions. For older minions, that means running salt-run winrepo.genrepo and then salt '*' pkg.refresh_db. 
For newer minions (2015.8.0 and newer) it is just salt '*' pkg.refresh_db. NOTE: If the winrepo.genrepo or the pkg.refresh_db fails, it is likely a problem with the jinja in one of the software definition files. This will cause the operations to stop. You'll need to fix the syntax in order for the msgpack file to be created successfully. Creating a Package Definition SLS File The package definition file is a yaml file that contains all the information needed to install a piece of software using salt. It defines information about the package to include version, full name, flags required for the installer and uninstaller, whether or not to use the windows task scheduler to install the package, where to find the installa‐ tion package, etc. Take a look at this example for Firefox: firefox: '17.0.1': installer: 'salt://win/repo/firefox/English/Firefox Setup 17.0.1.exe' full_name: Mozilla Firefox 17.0.1 (x86 en-US) locale: en_US reboot: False install_flags: '-ms' uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe' uninstall_flags: '/S' '16.0.2': installer: 'salt://win/repo/firefox/English/Firefox Setup 16.0.2.exe' full_name: Mozilla Firefox 16.0.2 (x86 en-US) locale: en_US reboot: False install_flags: '-ms' uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe' uninstall_flags: '/S' '15.0.1': installer: 'salt://win/repo/firefox/English/Firefox Setup 15.0.1.exe' full_name: Mozilla Firefox 15.0.1 (x86 en-US) locale: en_US reboot: False install_flags: '-ms' uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe' uninstall_flags: '/S' Each software definition file begins with a package name for the software. As in the exam‐ ple above firefox. The next line is indented two spaces and contains the version to be defined. As in the example above, a software definition file can define multiple versions for the same piece of software. The lines following the version are indented two more spa‐ ces and contain all the information needed to install that package. WARNING: The package name and the full_name must be unique to all other packages in the software repository. The version line is the version for the package to be installed. It is used when you need to install a specific version of a piece of software. WARNING: The version must be enclosed in quotes, otherwise the yaml parser will remove trailing zeros. NOTE: There are unique situations where previous versions are unavailable. Take Google Chrome for example. There is only one url provided for a standalone installation of Google Chrome. (‐ https://dl.google.com/edgedl/chrome/install/GoogleChromeStandaloneEnterprise.msi) When a new version is released, the url just points to the new version. To handle situations such as these, set the version to latest. Salt will install the version of Chrome at the URL and report that version. Here's an example: chrome: latest: full_name: 'Google Chrome' installer: 'https://dl.google.com/edgedl/chrome/install/GoogleChromeStandaloneEnterpri ↲ se.msi' install_flags: '/qn /norestart' uninstaller: 'https://dl.google.com/edgedl/chrome/install/GoogleChromeStandaloneEnterp ↲ rise.msi' uninstall_flags: '/qn /norestart' msiexec: True locale: en_US reboot: False Available parameters are as follows: param str full_name The Full Name for the software as shown in "Programs and Features" in the control panel. You can also get this information by installing the package manually and then running pkg.list_pkgs. 
Here's an example of the output from pkg.list_pkgs:

    salt 'test-2008' pkg.list_pkgs
    test-2008
        ----------
        7-Zip 9.20 (x64 edition):
            9.20.00.0
        Microsoft .NET Framework 4 Client Profile:
            4.0.30319,4.0.30319
        Microsoft .NET Framework 4 Extended:
            4.0.30319,4.0.30319
        Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
            9.0.21022
        Mozilla Firefox 17.0.1 (x86 en-US):
            17.0.1
        Mozilla Maintenance Service:
            17.0.1
        NSClient++ (x64):
            0.3.8.76
        Notepad++:
            6.4.2
        Salt Minion 0.16.0:
            0.16.0

Notice the Full Name for Firefox: Mozilla Firefox 17.0.1 (x86 en-US). That's exactly what's in the full_name parameter in the software definition file.
If any of the software installed on the machine matches one of the software definition files in the repository, the full_name will be automatically renamed to the package name. The example below shows the pkg.list_pkgs output for a machine that already has Mozilla Firefox 17.0.1 installed.

    test-2008:
        ----------
        7zip:
            9.20.00.0
        Microsoft .NET Framework 4 Client Profile:
            4.0.30319,4.0.30319
        Microsoft .NET Framework 4 Extended:
            4.0.30319,4.0.30319
        Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
            9.0.21022
        Mozilla Maintenance Service:
            17.0.1
        Notepad++:
            6.4.2
        Salt Minion 0.16.0:
            0.16.0
        firefox:
            17.0.1
        nsclient:
            0.3.9.328

IMPORTANT:
   The version number and full_name need to match the output from pkg.list_pkgs so that the status can be verified when running highstate.
NOTE:
   It is still possible to successfully install packages using pkg.install even if they don't match. This can make troubleshooting difficult so be careful.
param str installer
   The path to the .exe or .msi to use to install the package. This can be a path or a URL. If it is a URL or a salt path (salt://), the package will be cached locally and then executed. If it is a path to a file on disk or a file share, it will be executed directly.
param str install_flags
   Any flags that need to be passed to the installer to make it perform a silent install. These can often be found by adding /? or /h when running the installer from the command-line. A great resource for finding these silent install flags is the WPKG project's wiki. Salt will not return if the installer is waiting for user input so these are important.
param str uninstaller
   The path to the program used to uninstall this software. This can be the path to the same exe or msi used to install the software. It can also be a GUID. You can find this value in the registry under the following keys:
   · Software\Microsoft\Windows\CurrentVersion\Uninstall
   · Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall
param str uninstall_flags
   Any flags that need to be passed to the uninstaller to make it perform a silent uninstall. These can often be found by adding /? or /h when running the uninstaller from the command-line. A great resource for finding these silent uninstall flags is the WPKG project's wiki. Salt will not return if the uninstaller is waiting for user input so these are important.
Here are some examples of installer and uninstaller settings:

    7zip:
      '9.20.00.0':
        installer: salt://win/repo/7zip/7z920-x64.msi
        full_name: 7-Zip 9.20 (x64 edition)
        reboot: False
        install_flags: '/qn /norestart'
        msiexec: True
        uninstaller: '{23170F69-40C1-2702-0920-000001000000}'
        uninstall_flags: '/qn /norestart'

Alternatively, the uninstaller can also simply repeat the URL of the msi file:
7zip: '9.20.00.0': installer: salt://win/repo/7zip/7z920-x64.msi full_name: 7-Zip 9.20 (x64 edition) reboot: False install_flags: '/qn /norestart' msiexec: True uninstaller: salt://win/repo/7zip/7z920-x64.msi uninstall_flags: '/qn /norestart' param bool msiexec This tells salt to use msiexec /i to install the package and msiexec /x to unin‐ stall. This is for .msi installations. param bool allusers This parameter is specific to .msi installations. It tells msiexec to install the software for all users. The default is True. param bool cache_dir If true, the entire directory where the installer resides will be recursively cached. This is useful for installers that depend on other files in the same direc‐ tory for installation. NOTE: Only applies to salt: installer URLs. Here's an example for a software package that has dependent files: sqlexpress: '12.0.2000.8': installer: 'salt://win/repo/sqlexpress/setup.exe' full_name: Microsoft SQL Server 2014 Setup (English) reboot: False install_flags: '/ACTION=install /IACCEPTSQLSERVERLICENSETERMS /Q' cache_dir: True param bool use_scheduler If true, windows will use the task scheduler to run the installation. This is use‐ ful for running the salt installation itself as the installation process kills any currently running instances of salt. param bool reboot Not implemented param str local Not implemented Examples can be found at https://github.com/saltstack/salt-winrepo-ng Managing Windows Software on a Standalone Windows Minion The Windows Package Repository functions similar in a standalone environment, with a few differences in the configuration. To replace the winrepo runner that is used on the Salt master, an execution module exists to provide the same functionality to standalone minions. The functions are named the same as the ones in the runner, and are used in the same way; the only difference is that salt-call is used instead of salt-run: salt-call winrepo.update_git_repos salt-call winrepo.genrepo salt-call pkg.refresh_db After executing the previous commands the repository on the standalone system is ready to use. Custom Location for Repository SLS Files If file_roots has not been modified in the minion configuration, then no additional con‐ figuration needs to be added to the minion configuration. The winrepo.genrepo function from the winrepo execution module will by default look for the filename specified by win‐ repo_cachefile within C:\salt\srv\salt\win\repo. If the file_roots parameter has been modified, then winrepo_dir must be modified to fall within that path, at the proper relative path. For example, if the base environment in file_roots points to D:\foo, and winrepo_source_dir is salt://win/repo, then winrepo_dir must be set to D:\foo\win\repo to ensure that winrepo.genrepo puts the cachefile into right location. Config Options for Minions 2015.8.0 and Later The winrepo_source_dir config parameter (default: salt://win/repo) controls where pkg.refresh_db looks for the cachefile (default: winrepo.p). This means that the default location for the winrepo cachefile would be salt://win/repo/winrepo.p. Both win‐ repo_source_dir and winrepo_cachefile can be adjusted to match the actual location of this file on the Salt fileserver. Config Options for Minions Before 2015.8.0 If connected to a master, the minion will by default look for the winrepo cachefile (the file generated by the winrepo.genrepo runner) at salt://win/repo/winrepo.p. 
If the cachefile is in a different path on the salt fileserver, then win_repo_cachefile will need to be updated to reflect the proper location. Changes in Version 2015.8.0 Git repository management for the Windows Software Repository has changed in version 2015.8.0, and several master/minion config parameters have been renamed to make their nam‐ ing more consistent with each other. For a list of the winrepo config options, see here for master config options, and here for configuration options for masterless Windows minions. On the master, the winrepo.update_git_repos runner has been updated to use either pygit2 or GitPython to checkout the git repositories containing repo data. If pygit2 or GitPython is installed, existing winrepo git checkouts should be removed after upgrading to 2015.8.0, to allow them to be checked out again by running winrepo.update_git_repos. If neither GitPython nor pygit2 are installed, then Salt will fall back to the pre-exist‐ ing behavior for winrepo.update_git_repos, and a warning will be logged in the master log. NOTE: Standalone Windows minions do not support the new GitPython/pygit2 functionality, and will instead use the git.latest state to keep repositories up-to-date. More information on how to use the Windows Software Repo on a standalone minion can be found here. Config Parameters Renamed Many of the legacy winrepo configuration parameters have changed in version 2015.8.0 to make the naming more consistent. The old parameter names will still work, but a warning will be logged indicating that the old name is deprecated. Below are the parameters which have changed for version 2015.8.0: Master Config ┌─────────────────────────┬───────────────────┐ │Old Name │ New Name │ ├─────────────────────────┼───────────────────┤ │win_repo │ winrepo_dir │ ├─────────────────────────┼───────────────────┤ │win_repo_mastercachefile │ winrepo_cachefile │ ├─────────────────────────┼───────────────────┤ │win_gitrepos │ winrepo_remotes │ └─────────────────────────┴───────────────────┘ NOTE: winrepo_cachefile is no longer used by 2015.8.0 and later minions, and the winrepo_dir setting is replaced by winrepo_dir_ng for 2015.8.0 and later minions. See here for detailed information on all master config options for the Windows Repo. Minion Config ┌───────────────────┬───────────────────┐ │Old Name │ New Name │ ├───────────────────┼───────────────────┤ │win_repo │ winrepo_dir │ ├───────────────────┼───────────────────┤ │win_repo_cachefile │ winrepo_cachefile │ ├───────────────────┼───────────────────┤ │win_gitrepos │ winrepo_remotes │ └───────────────────┴───────────────────┘ See here for detailed information on all minion config options for the Windows Repo. pygit2/GitPython Support for Maintaining Git Repos The winrepo.update_git_repos runner (and the corresponding remote execution function for standalone minions) now makes use of the same underlying code used by the Git Fileserver Backend and Git External Pillar to maintain and update its local clones of git reposito‐ ries. If a compatible version of either pygit2 (0.20.3 and later) or GitPython (0.3.0 or later) is installed, then Salt will use it instead of the old method (which invokes the git.latest state). 
NOTE: If compatible versions of both pygit2 and GitPython are installed, then Salt will pre‐ fer pygit2, to override this behavior use the winrepo_provider configuration parameter: winrepo_provider: gitpython The winrepo execution module (discussed above in the Managing Windows Software on a Standalone Windows Minion section) does not yet officially support the new pygit2/‐ GitPython functionality, but if either pygit2 or GitPython is installed into Salt's bundled Python then it should work. However, it should be considered experimental at this time. To minimize potential issues, it is a good idea to remove any winrepo git repositories that were checked out by the old (pre-2015.8.0) winrepo code when upgrading the master to 2015.8.0 or later, and run winrepo.update_git_repos to clone them anew after the master is started. Additional added features include the ability to access authenticated git repositories (NOTE: pygit2 only), and to set per-remote config settings. An example of this would be the following: winrepo_remotes: - https://github.com/saltstack/salt-winrepo.git - @github.com:myuser/myrepo.git: - pubkey: /path/to/key.pub - privkey: /path/to/key - passphrase: myaw3s0m3pa$$phr4$3 - https://github.com/myuser/privaterepo.git: - user: mygithubuser - password: CorrectHorseBatteryStaple NOTE: Per-remote configuration settings work in the same fashion as they do in gitfs, with global parameters being overridden by their per-remote counterparts (for instance, set‐ ting winrepo_passphrase would set a global passphrase for winrepo that would apply to all SSH-based remotes, unless overridden by a passphrase per-remote parameter). See here for more a more in-depth explanation of how per-remote configuration works in gitfs, the same principles apply to winrepo. There are a couple other changes in how Salt manages git repos using pygit2/GitPython. First of all, a clean argument has been added to the winrepo.update_git_repos runner, which (if set to True) will tell the runner to dispose of directories under the win‐ repo_dir which are not explicitly configured. This prevents the need to manually remove these directories when a repo is removed from the config file. To clean these old directo‐ ries, just pass clean=True, like so: salt-run winrepo.update_git_repos clean=True However, if a mix of git and non-git Windows Repo definition files are being used, then this should not be used, as it will remove the directories containing non-git definitions. The other major change is that collisions between repo names are now detected, and the winrepo.update_git_repos runner will not proceed if any are detected. Consider the follow‐ ing configuration: winrepo_remotes: - https://foo.com/bar/baz.git - https://mydomain.tld/baz.git - https://github.com/foobar/baz The winrepo.update_git_repos runner will refuse to update repos here, as all three of these repos would be checked out to the same directory. To work around this, a per-remote parameter called name can be used to resolve these conflicts: winrepo_remotes: - https://foo.com/bar/baz.git - https://mydomain.tld/baz.git: - name: baz_junior - https://github.com/foobar/baz: - name: baz_the_third Troubleshooting Incorrect name/version If the package seems to install properly, but salt reports a failure then it is likely you have a version or full_name mismatch. Check the exact full_name and version used by the package. Use pkg.list_pkgs to check that the names and version exactly match what is installed. 
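A quick way to spot a mismatch is to run pkg.list_pkgs against the affected minion and look at how the package is reported; the minion name winminion and the Firefox entry below are placeholders:

    salt 'winminion' pkg.list_pkgs
    winminion:
        ----------
        Mozilla Firefox 17.0.1 (x86 en-US):
            17.0.1

If the software still shows up under its full display name rather than the short package name from the definition file, the repository entry did not match what is installed. Compare the reported display name and version character for character with the full_name and version in your .sls file.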
Changes to sls files not being picked up
Ensure you have (re)generated the repository cache file (for older minions) and then updated the repository cache on the relevant minions:

    salt-run winrepo.genrepo
    salt winminion pkg.refresh_db

Package management under Windows 2003
On Windows Server 2003, you need to install the optional Windows component "WMI Windows Installer Provider" to get a full list of installed packages. If you don't have this, salt-minion can't report some installed software.
How Success and Failure are Reported
The install state/module function of the Windows package manager works roughly as follows:
1. Execute pkg.list_pkgs and store the result
2. Check if any action needs to be taken. (i.e. compare required package and version against pkg.list_pkgs results)
3. If so, run the installer command.
4. Execute pkg.list_pkgs and compare to the result stored from before installation.
5. Success/Failure/Changes will be reported based on the differences between the original and final pkg.list_pkgs results.
If there are any problems using the package manager, it is likely due to the data in your sls files not matching the difference between the pre and post pkg.list_pkgs results.
WINDOWS-SPECIFIC BEHAVIOUR
Salt is capable of managing Windows systems; however, due to various differences between the operating systems, there are some things you need to keep in mind.
This document will contain any quirks that apply across Salt or generally across multiple module functions. Any Windows-specific behavior for particular module functions will be documented in the module function documentation. Therefore this document should be read in conjunction with the module function documentation.
Group parameter for files
Salt was originally written for managing Unix-based systems, and therefore the file module functions were designed around that security model. Rather than trying to shoehorn that model onto Windows, Salt ignores these parameters and makes non-applicable module functions unavailable instead.
One of the commonly ignored parameters is the group parameter for managing files. Under Windows, while files do have a 'primary group' property, this is rarely used. It generally has no bearing on permissions unless intentionally configured and is most commonly used to provide Unix compatibility (e.g. Services For Unix, NFS services). Because of this, any file module functions that typically require a group do not require one under Windows.
Attempts to directly use file module functions that operate on the group (e.g. file.chgrp) will return a pseudo-value and cause a log message to appear. No group parameters will be acted on.
If you do want to access and change the 'primary group' property and understand the implications, use the file.get_pgid or file.get_pgroup functions or the pgroup parameter on the file.chown module function.
Dealing with case-insensitive but case-preserving names
Windows is case-insensitive, but it preserves the case of names, and it is this preserved form that is returned from system functions. This causes some issues with Salt because it assumes case-sensitive names. These issues generally occur in the state functions and can cause bizarre-looking errors.
To avoid such issues, always pretend Windows is case-sensitive and use the right case for names, e.g. specify user=Administrator instead of user=administrator.
Follow issue 11801 for any changes to this behavior.
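For example, when managing a file's owner in a state, write the account name exactly as Windows reports it. A minimal sketch (the path and contents are placeholders):

    C:\Temp\example.txt:
      file.managed:
        - user: Administrator    # use the case Windows reports, not 'administrator'
        - contents: 'managed by salt'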
Dealing with various username forms
Salt does not understand the various forms that Windows usernames can come in, e.g. username, mydomain\username, and username@mydomain.tld can all refer to the same user. In fact, Salt generally only considers the raw username value, i.e. the username without the domain or host information.
Using these alternative forms will likely confuse Salt and cause odd errors to happen. Use only the raw username value in the correct case to avoid problems.
Follow issue 11801 for any changes to this behavior.
Specifying the None group
Each Windows system has a built-in None group. This is the default 'primary group' for files for users not in a domain environment.
Unfortunately, the word None has a special meaning in Python - it is a special value indicating 'nothing', similar to null or nil in other languages. To specify the None group, it must be enclosed in quotes, e.g. salt '*' file.chpgrp C:\path\to\file "'None'" (see the short example at the end of this section).
Symbolic link loops
Under Windows, if any symbolic link loops are detected or if there are too many levels of symlinks (defaults to 64), an error is always raised.
For some functions, this behavior is different from the behavior on Unix platforms. In general, avoid symlink loops on either platform.
Modifying security properties (ACLs) on files
There is no support in Salt for modifying ACLs, and therefore no support for changing file permissions, besides modifying the owner/user.
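To tie the 'primary group' notes above together, here is a short sketch of inspecting and resetting the primary group from the master, using only the functions named earlier in this section; the target path is a placeholder:

    salt '*' file.get_pgroup C:\Temp\example.txt
    salt '*' file.get_pgid C:\Temp\example.txt
    salt '*' file.chpgrp C:\Temp\example.txt "'None'"

Note the nested quoting around None in the last command, as described under "Specifying the None group" above.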
SALT CLOUD Configuration Salt Cloud provides a powerful interface to interact with cloud hosts. This interface is tightly integrated with Salt, and new virtual machines are automatically connected to your Salt master after creation. Since Salt Cloud is designed to be an automated system, most configuration is done using the following YAML configuration files: · /etc/salt/cloud: The main configuration file, contains global settings that apply to all cloud hosts. See Salt Cloud Configuration. · /etc/salt/cloud.providers.d/*.conf: Contains settings that configure a specific cloud host, such as credentials, region settings, and so on. Since configuration varies sig‐ nificantly between each cloud host, a separate file should be created for each cloud host. In Salt Cloud, a provider is synonymous with a cloud host (Amazon EC2, Google Com‐ pute Engine, Rackspace, and so on). See Provider Specifics. · /etc/salt/cloud.profiles.d/*.conf: Contains settings that define a specific VM type. A profile defines the systems specs and image, and any other settings that are specific to this VM type. Each specific VM type is called a profile, and multiple profiles can be defined in a profile file. Each profile references a parent provider that defines the cloud host in which the VM is created (the provider settings are in the provider config‐ uration explained above). Based on your needs, you might define different profiles for web servers, database servers, and so on. See VM Profiles. Configuration Inheritance Configuration settings are inherited in order from the cloud config => providers => pro‐ file. [image] For example, if you wanted to use the same image for all virtual machines for a specific provider, the image name could be placed in the provider file. This value is inherited by all profiles that use that provider, but is overridden if a image name is defined in the profile. Most configuration settings can be defined in any file, the main difference being how that setting is inherited. QuickStart The Salt Cloud Quickstart walks you through defining a provider, a VM profile, and shows you how to create virtual machines using Salt Cloud. Using Salt Cloud salt-cloud Provision virtual machines in the cloud with Salt Synopsis salt-cloud -m /etc/salt/cloud.map salt-cloud -m /etc/salt/cloud.map NAME salt-cloud -m /etc/salt/cloud.map NAME1 NAME2 salt-cloud -p PROFILE NAME salt-cloud -p PROFILE NAME1 NAME2 NAME3 NAME4 NAME5 NAME6 Description Salt Cloud is the system used to provision virtual machines on various public clouds via a cleanly controlled profile and mapping system. Options --version Print the version of Salt that is running. --versions-report Show program's dependencies and version number, and then exit -h, --help Show the help message and exit -c CONFIG_DIR, --config-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the con‐ figuration files for Salt master and minions. The default location on most systems is /etc/salt. Execution Options -L LOCATION, --location=LOCATION Specify which region to connect to. -a ACTION, --action=ACTION Perform an action that may be specific to this cloud provider. This argument requires one or more instance names to be specified. -f <FUNC-NAME> <PROVIDER>, --function=<FUNC-NAME> <PROVIDER> Perform an function that may be specific to this cloud provider, that does not apply to an instance. This argument requires a provider to be specified (i.e.: nova). 
-p PROFILE, --profile=PROFILE Select a single profile to build the named cloud VMs from. The profile must be defined in the specified profiles file. -m MAP, --map=MAP Specify a map file to use. If used without any other options, this option will ensure that all of the mapped VMs are created. If the named VM already exists then it will be skipped. -H, --hard When specifying a map file, the default behavior is to ensure that all of the VMs specified in the map file are created. If the --hard option is set, then any VMs that exist on configured cloud providers that are not specified in the map file will be destroyed. Be advised that this can be a destructive operation and should be used with care. -d, --destroy Pass in the name(s) of VMs to destroy, salt-cloud will search the configured cloud providers for the specified names and destroy the VMs. Be advised that this is a destructive operation and should be used with care. Can be used in conjunction with the -m option to specify a map of VMs to be deleted. -P, --parallel Normally when building many cloud VMs they are executed serially. The -P option will run each cloud vm build in a separate process allowing for large groups of VMs to be build at once. Be advised that some cloud provider's systems don't seem to be well suited for this influx of vm creation. When creating large groups of VMs watch the cloud provider carefully. -u, --update-bootstrap Update salt-bootstrap to the latest develop version on GitHub. -y, --assume-yes Default yes in answer to all confirmation questions. -k, --keep-tmp Do not remove files from /tmp/ after deploy.sh finishes. --show-deploy-args Include the options used to deploy the minion in the data returned. --script-args=SCRIPT_ARGS Script arguments to be fed to the bootstrap script when deploying the VM. Query Options -Q, --query Execute a query and return some information about the nodes running on configured cloud providers -F, --full-query Execute a query and print out all available information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map. -S, --select-query Execute a query and print out selected information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map. --list-providers Display a list of configured providers. --list-profiles New in version 2014.7.0. Display a list of configured profiles. Pass in a cloud provider to view the provider's associated profiles, such as digital_ocean, or pass in all to list all the configured profiles. Cloud Providers Listings --list-locations=LIST_LOCATIONS Display a list of locations available in configured cloud providers. Pass the cloud provider that available locations are desired on, aka "linode", or pass "all" to list locations for all configured cloud providers --list-images=LIST_IMAGES Display a list of images available in configured cloud providers. Pass the cloud provider that available images are desired on, aka "linode", or pass "all" to list images for all configured cloud providers --list-sizes=LIST_SIZES Display a list of sizes available in configured cloud providers. Pass the cloud provider that available sizes are desired on, aka "AWS", or pass "all" to list sizes for all configured cloud providers Cloud Credentials --set-password=<USERNAME> <PROVIDER> Configure password for a cloud provider and save it to the keyring. 
PROVIDER can be specified with or without a driver, for example: "--set-password bob rackspace" or more specific "--set-password bob rackspace:openstack" DEPRECATED! Output Options --out Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters: grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data. If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module. NOTE: If using --out=json, you will probably want --static as well. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well. --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation. --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE Write the output to the specified file. --no-color Disable all colored output --force-color Force colored output NOTE: When using colored output the color codes are as follows: green denotes success, red denotes failure, blue denotes changes and success and yellow denotes a expected future change in configuration. Examples To create 4 VMs named web1, web2, db1, and db2 from specified profiles: salt-cloud -p fedora_rackspace web1 web2 db1 db2 To read in a map file and create all VMs specified therein: salt-cloud -m /path/to/cloud.map To read in a map file and create all VMs specified therein in parallel: salt-cloud -m /path/to/cloud.map -P To delete any VMs specified in the map file: salt-cloud -m /path/to/cloud.map -d To delete any VMs NOT specified in the map file: salt-cloud -m /path/to/cloud.map -H To display the status of all VMs specified in the map file: salt-cloud -m /path/to/cloud.map -Q See also salt-cloud(7) salt(7) salt-master(1) salt-minion(1) Salt Cloud basic usage Salt Cloud needs, at least, one configured Provider and Profile to be functional. Creating a VM To create a VM with salt cloud, use command: salt-cloud -p <profile> name_of_vm Assuming there is a profile configured as following: fedora_rackspace: provider: my-rackspace-config image: Fedora 17 size: 256 server script: bootstrap-salt Then, the command to create new VM named fedora_http_01 is: salt-cloud -p fedora_rackspace fedora_http_01 Destroying a VM To destroy a created-by-salt-cloud VM, use command: salt-cloud -d name_of_vm For example, to delete the VM created on above example, use: salt-cloud -d fedora_http_01 VM Profiles Salt cloud designates virtual machines inside the profile configuration file. The profile configuration file defaults to /etc/salt/cloud.profiles and is a yaml configuration. The syntax for declaring profiles is simple: fedora_rackspace: provider: my-rackspace-config image: Fedora 17 size: 256 server script: bootstrap-salt It should be noted that the script option defaults to bootstrap-salt, and does not nor‐ mally need to be specified. Further examples in this document will not show the script option. A few key pieces of information need to be declared and can change based on the cloud provider. 
A number of additional parameters can also be inserted: centos_rackspace: provider: my-rackspace-config image: CentOS 6.2 size: 1024 server minion: master: salt.example.com append_domain: webs.example.com grains: role: webserver The image must be selected from available images. Similarly, sizes must be selected from the list of sizes. To get a list of available images and sizes use the following command: salt-cloud --list-images openstack salt-cloud --list-sizes openstack Some parameters can be specified in the main Salt cloud configuration file and then are applied to all cloud profiles. For instance if only a single cloud provider is being used then the provider option can be declared in the Salt cloud configuration file. Multiple Configuration Files In addition to /etc/salt/cloud.profiles, profiles can also be specified in any file match‐ ing cloud.profiles.d/*conf which is a sub-directory relative to the profiles configuration file(with the above configuration file as an example, /etc/salt/cloud.profiles.d/*.conf). This allows for more extensible configuration, and plays nicely with various configuration management tools as well as version control systems. Larger Example rhel_ec2: provider: my-ec2-config image: ami-e565ba8c size: t1.micro minion: cheese: edam ubuntu_ec2: provider: my-ec2-config image: ami-7e2da54e size: t1.micro minion: cheese: edam ubuntu_rackspace: provider: my-rackspace-config image: Ubuntu 12.04 LTS size: 256 server minion: cheese: edam fedora_rackspace: provider: my-rackspace-config image: Fedora 17 size: 256 server minion: cheese: edam cent_linode: provider: my-linode-config image: CentOS 6.2 64bit size: Linode 512 cent_gogrid: provider: my-gogrid-config image: 12834 size: 512MB cent_joyent: provider: my-joyent-config image: centos-6 size: Small 1GB Cloud Map File A number of options exist when creating virtual machines. They can be managed directly from profiles and the command line execution, or a more complex map file can be created. The map file allows for a number of virtual machines to be created and associated with specific profiles. Map files have a simple format, specify a profile and then a list of virtual machines to make from said profile: fedora_small: - web1 - web2 - web3 fedora_high: - redis1 - redis2 - redis3 cent_high: - riak1 - riak2 - riak3 This map file can then be called to roll out all of these virtual machines. Map files are called from the salt-cloud command with the -m option: $ salt-cloud -m /path/to/mapfile Remember, that as with direct profile provisioning the -P option can be passed to create the virtual machines in parallel: $ salt-cloud -m /path/to/mapfile -P NOTE: Due to limitations in the GoGrid API, instances cannot be provisioned in parallel with the GoGrid driver. Map files will work with GoGrid, but the -P argument should not be used on maps referencing GoGrid instances. A map file can also be enforced to represent the total state of a cloud deployment by using the --hard option. When using the hard option any vms that exist but are not speci‐ fied in the map file will be destroyed: $ salt-cloud -m /path/to/mapfile -P -H Be careful with this argument, it is very dangerous! In fact, it is so dangerous that in order to use it, you must explicitly enable it in the main configuration file. 
enable_hard_maps: True A map file can include grains and minion configuration options: fedora_small: - web1: minion: log_level: debug grains: cheese: tasty omelet: du fromage - web2: minion: log_level: warn grains: cheese: more tasty omelet: with peppers A map file may also be used with the various query options: $ salt-cloud -m /path/to/mapfile -Q {'ec2': {'web1': {'id': 'i-e6aqfegb', 'image': None, 'private_ips': [], 'public_ips': [], 'size': None, 'state': 0}}, 'web2': {'Absent'}} ...or with the delete option: $ salt-cloud -m /path/to/mapfile -d The following virtual machines are set to be destroyed: web1 web2 Proceed? [N/y] WARNING: Specifying Nodes with Maps on the Command Line Specifying the name of a node or nodes with the maps options on the command line is not supported. This is especially impor‐ tant to remember when using --destroy with maps; salt-cloud will ignore any arguments passed in which are not directly relevant to the map file. When using ``--destroy`` with a map, every node in the map file will be deleted! Maps don't provide any useful information for destroying individual nodes, and should not be used to destroy a subset of a map. Setting up New Salt Masters Bootstrapping a new master in the map is as simple as: fedora_small: - web1: make_master: True - web2 - web3 Notice that ALL bootstrapped minions from the map will answer to the newly created salt-master. To make any of the bootstrapped minions answer to the bootstrapping salt-master as opposed to the newly created salt-master, as an example: fedora_small: - web1: make_master: True minion: master: <the local master ip address> local_master: True - web2 - web3 The above says the minion running on the newly created salt-master responds to the local master, ie, the master used to bootstrap these VMs. Another example: fedora_small: - web1: make_master: True - web2 - web3: minion: master: <the local master ip address> local_master: True The above example makes the web3 minion answer to the local master, not the newly created master. Cloud Actions Once a VM has been created, there are a number of actions that can be performed on it. The "reboot" action can be used across all providers, but all other actions are specific to the cloud provider. In order to perform an action, you may specify it from the command line, including the name(s) of the VM to perform the action on: $ salt-cloud -a reboot vm_name $ salt-cloud -a reboot vm1 vm2 vm2 Or you may specify a map which includes all VMs to perform the action on: $ salt-cloud -a reboot -m /path/to/mapfile The following is a list of actions currently supported by salt-cloud: all providers: - reboot ec2: - start - stop joyent: - stop linode: - start - stop Another useful reference for viewing more salt-cloud actions is the :ref:Salt Cloud Fea‐ ture Matrix <salt-cloud-feature-matrix> Cloud Functions Cloud functions work much the same way as cloud actions, except that they don't perform an operation on a specific instance, and so do not need a machine name to be specified. How‐ ever, since they perform an operation on a specific cloud provider, that provider must be specified. $ salt-cloud -f show_image ec2 image=ami-fd20ad94 There are three universal salt-cloud functions that are extremely useful for gathering information about instances on a provider basis: · list_nodes: Returns some general information about the instances for the given provider. · list_nodes_full: Returns all information about the instances for the given provider. 
· list_nodes_select: Returns select information about the instances for the given provider. $ salt-cloud -f list_nodes linode $ salt-cloud -f list_nodes_full linode $ salt-cloud -f list_nodes_select linode Another useful reference for viewing salt-cloud functions is the :ref:Salt Cloud Feature Matrix <salt-cloud-feature-matrix> Core Configuration Install Salt Cloud Salt Cloud is now part of Salt proper. It was merged in as of Salt version 2014.1.0. On Ubuntu, install Salt Cloud by using following command: sudo add-apt-repository ppa:saltstack/salt sudo apt-get update sudo apt-get install salt-cloud If using Salt Cloud on OS X, curl-ca-bundle must be installed. Presently, this package is not available via brew, but it is available using MacPorts: sudo port install curl-ca-bundle Salt Cloud depends on apache-libcloud. Libcloud can be installed via pip with pip install apache-libcloud. Installing Salt Cloud for development Installing Salt for development enables Salt Cloud development as well, just make sure apache-libcloud is installed as per above paragraph. See these instructions: Installing Salt for development. Core Configuration A number of core configuration options and some options that are global to the VM profiles can be set in the cloud configuration file. By default this file is located at /etc/salt/cloud. Thread Pool Size When salt cloud is operating in parallel mode via the -P argument, you can control the thread pool size by specifying the pool_size parameter with a positive integer value. By default, the thread pool size will be set to the number of VMs that salt cloud is oper‐ ating on. pool_size: 10 Minion Configuration The default minion configuration is set up in this file. Minions created by salt-cloud derive their configuration from this file. Almost all parameters found in Configuring the Salt Minion can be used here. minion: master: saltmaster.example.com In particular, this is the location to specify the location of the salt master and its listening port, if the port is not set to the default. Similar to most other settings, Minion configuration settings are inherited across config‐ uration files. For example, the master setting might be contained in the main cloud con‐ figuration file as demonstrated above, but additional settings can be placed in the provider or profile: ec2-web: size: t1.micro minion: environment: test startup_states: sls sls_list: - web Cloud Configuration Syntax The data specific to interacting with public clouds is set up here. Cloud provider configuration settings can live in several places. The first is in /etc/salt/cloud: # /etc/salt/cloud providers: my-aws-migrated-config: id: HJGRYCILJLKJYG key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn' keyname: test securitygroup: quick-start private_key: /root/test.pem driver: ec2 Cloud provider configuration data can also be housed in /etc/salt/cloud.providers or any file matching /etc/salt/cloud.providers.d/*.conf. All files in any of these locations will be parsed for cloud provider data. Using the example configuration above: # /etc/salt/cloud.providers # or could be /etc/salt/cloud.providers.d/*.conf my-aws-config: id: HJGRYCILJLKJYG key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn' keyname: test securitygroup: quick-start private_key: /root/test.pem driver: ec2 NOTE: Salt Cloud provider configurations within /etc/cloud.provider.d/ should not specify the ``providers starting key. It is also possible to have multiple cloud configuration blocks within the same alias block. 
For example:

    production-config:
      - id: HJGRYCILJLKJYG
        key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
        keyname: test
        securitygroup: quick-start
        private_key: /root/test.pem
        driver: ec2

      - user: example_user
        apikey: 123984bjjas87034
        driver: rackspace

However, using this configuration method requires a change to the profile configuration blocks. The provider alias needs to have the provider key value appended as in the following example:

    rhel_aws_dev:
      provider: production-config:ec2
      image: ami-e565ba8c
      size: t1.micro

    rhel_aws_prod:
      provider: production-config:ec2
      image: ami-e565ba8c
      size: High-CPU Extra Large Instance

    database_prod:
      provider: production-config:rackspace
      image: Ubuntu 12.04 LTS
      size: 256 server

Notice that because of the multiple entries, one has to be explicit about the provider alias and name, as in the above example, production-config:ec2.
This data interacts with the salt-cloud binary via its --list-locations, --list-images, and --list-sizes options, which need a cloud provider as an argument. The argument used should be the configured cloud provider alias. If the provider alias has multiple entries, <provider-alias>:<provider-name> should be used.
To allow for a more extensible configuration, --providers-config, which defaults to /etc/salt/cloud.providers, was added to the CLI parser. It allows for the providers' configuration to be added on a per-file basis.
Pillar Configuration
It is possible to configure cloud providers using pillars. This is only used when inside the cloud module. You can set up a variable called cloud that contains your profile and provider to pass that information to the cloud servers instead of having to copy the full configuration to every minion. In your pillar file, you would use something like this:

    cloud:
      ssh_key_name: saltstack
      ssh_key_file: /root/.ssh/id_rsa
      update_cachedir: True
      diff_cache_events: True
      change_password: True

      providers:
        my-nova:
          identity_url: https://identity.api.rackspacecloud.com/v2.0/
          compute_region: IAD
          user: myuser
          api_key: apikey
          tenant: 123456
          driver: nova

        my-openstack:
          identity_url: https://identity.api.rackspacecloud.com/v2.0/tokens
          user: user2
          apikey: apikey2
          tenant: 654321
          compute_region: DFW
          driver: openstack
          compute_name: cloudServersOpenStack

      profiles:
        ubuntu-nova:
          provider: my-nova
          size: performance1-8
          image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
          script_args: git develop

        ubuntu-openstack:
          provider: my-openstack
          size: performance1-8
          image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
          script_args: git develop

Cloud Configurations
Scaleway
To use Salt Cloud with Scaleway, you need to get an access key and an API token. API tokens are unique identifiers associated with your Scaleway account. To retrieve your access key and API token, log in to the Scaleway control panel, open the pull-down menu on your account name and click the "My Credentials" link.
If you do not have an API token you can create one by clicking the "Create New Token" button on the right corner.

    my-scaleway-config:
      access_key: 15cf404d-4560-41b1-9a0c-21c3d5c4ff1f
      token: a7347ec8-5de1-4024-a5e3-24b77d1ba91d
      driver: scaleway

NOTE:
   In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-scaleway-config.
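For instance, a matching Scaleway profile only needs to reference this provider alias; the image value below is a placeholder and should be taken from the output of salt-cloud --list-images my-scaleway-config:

    scaleway-ubuntu:
      provider: my-scaleway-config
      image: <image name from --list-images>

The VM can then be created with salt-cloud -p scaleway-ubuntu name_of_vm, as described in the basic usage section above.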
Rackspace Rackspace cloud requires two configuration options; a user and an apikey: my-rackspace-config: user: example_user apikey: 123984bjjas87034 driver: rackspace NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-rackspace-config. Amazon AWS A number of configuration options are required for Amazon AWS including id, key, keyname, securitygroup, and private_key: my-aws-quick-start: id: HJGRYCILJLKJYG key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn' keyname: test securitygroup: quick-start private_key: /root/test.pem driver: ec2 my-aws-default: id: HJGRYCILJLKJYG key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn' keyname: test securitygroup: default private_key: /root/test.pem driver: ec2 NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be either provider: my-aws-quick-start or provider: my-aws-default. Linode Linode requires a single API key, but the default root password also needs to be set: my-linode-config: apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf password: F00barbaz ssh_pubkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKHEOLLbeXgaqRQT9NBAopVz366SdYc0KKX33vAn ↲ q+2R user@host ssh_key_file: ~/.ssh/id_ed25519 driver: linode The password needs to be 8 characters and contain lowercase, uppercase, and numbers. NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-linode-config Joyent Cloud The Joyent cloud requires three configuration parameters: The username and password that are used to log into the Joyent system, as well as the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning com‐ mands up to the freshly created virtual machine. my-joyent-config: user: fred password: saltybacon private_key: /root/joyent.pem driver: joyent NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-joyent-config GoGrid To use Salt Cloud with GoGrid, log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab. The apikey and the sharedsecret configuration parameters need to be set in the configura‐ tion file to enable interfacing with GoGrid: my-gogrid-config: apikey: asdff7896asdh789 sharedsecret: saltybacon driver: gogrid NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-gogrid-config. OpenStack OpenStack configuration differs between providers, and at the moment several options need to be specified. This module has been officially tested against the HP and the Rackspace implementations, and some examples are provided for both. 
# For HP my-openstack-hp-config: identity_url: 'https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/' compute_name: Compute compute_region: 'az-1.region-a.geo-1' tenant: myuser-tenant1 user: myuser ssh_key_name: mykey ssh_key_file: '/etc/salt/hpcloud/mykey.pem' password: mypass driver: openstack # For Rackspace my-openstack-rackspace-config: identity_url: 'https://identity.api.rackspacecloud.com/v2.0/tokens' compute_name: cloudServersOpenStack protocol: ipv4 compute_region: DFW protocol: ipv4 user: myuser tenant: 5555555 password: mypass driver: openstack If you have an API key for your provider, it may be specified instead of a password: my-openstack-hp-config: apikey: 901d3f579h23c8v73q9 my-openstack-rackspace-config: apikey: 901d3f579h23c8v73q9 NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be either provider: my-openstack-hp-config or provider: my-open‐ stack-rackspace-config. You will certainly need to configure the user, tenant, and either password or apikey. If your OpenStack instances only have private IP addresses and a CIDR range of private addresses are not reachable from the salt-master, you may set your preference to have Salt ignore it: my-openstack-config: ignore_cidr: 192.168.0.0/16 For in-house OpenStack Essex installation, libcloud needs the service_type : my-openstack-config: identity_url: 'http://control.openstack.example.org:5000/v2.0/' compute_name : Compute Service service_type : compute DigitalOcean Using Salt for DigitalOcean requires a client_key and an api_key. These can be found in the DigitalOcean web interface, in the "My Settings" section, under the API Access tab. my-digitalocean-config: driver: digital_ocean personal_access_token: xxx location: New York 1 NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-digital-ocean-config. Parallels Using Salt with Parallels requires a user, password and URL. These can be obtained from your cloud provider. my-parallels-config: user: myuser password: xyzzy url: https://api.cloud.xmission.com:4465/paci/v1.0/ driver: parallels NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-parallels-config. Proxmox Using Salt with Proxmox requires a user, password, and URL. These can be obtained from your cloud host. Both PAM and PVE users can be used. my-proxmox-config: driver: proxmox user: saltcloud@pve password: xyzzy url: your.proxmox.host NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-proxmox-config. LXC The lxc driver uses saltify to install salt and attach the lxc container as a new lxc min‐ ion. As soon as we can, we manage baremetal operation over SSH. You can also destroy those containers via this driver. devhost10-lxc: target: devhost10 driver: lxc And in the map file: devhost10-lxc: provider: devhost10-lxc from_container: ubuntu backing: lvm sudo: True size: 3g ip: 10.0.3.9 minion: master: 10.5.0.1 master_port: 4506 lxc_conf: - lxc.utsname: superlxc NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: devhost10-lxc. Saltify The Saltify driver is a new, experimental driver designed to install Salt on a remote machine, virtual or bare metal, using SSH. 
This driver is useful for provisioning machines which are already installed, but not Salted. For more information about using this driver and for configuration examples, please see the Gettting Started with Saltify documenta‐ tion. Extending Profiles and Cloud Providers Configuration As of 0.8.7, the option to extend both the profiles and cloud providers configuration and avoid duplication was added. The extends feature works on the current profiles configura‐ tion, but, regarding the cloud providers configuration, only works in the new syntax and respective configuration files, i.e. /etc/salt/salt/cloud.providers or /etc/salt/cloud.providers.d/*.conf. NOTE: Extending cloud profiles and providers is not recursive. For example, a profile that is extended by a second profile is possible, but the second profile cannot be extended by a third profile. Also, if a profile (or provider) is extending another profile and each contains a list of values, the lists from the extending profile will override the list from the origi‐ nal profile. The lists are not merged together. Extending Profiles Some example usage on how to use extends with profiles. Consider /etc/salt/salt/cloud.pro‐ files containing: development-instances: provider: my-ec2-config size: t1.micro ssh_username: ec2_user securitygroup: - default deploy: False Amazon-Linux-AMI-2012.09-64bit: image: ami-54cf5c3d extends: development-instances Fedora-17: image: ami-08d97e61 extends: development-instances CentOS-5: provider: my-aws-config image: ami-09b61d60 extends: development-instances The above configuration, once parsed would generate the following profiles data: [{'deploy': False, 'image': 'ami-08d97e61', 'profile': 'Fedora-17', 'provider': 'my-ec2-config', 'securitygroup': ['default'], 'size': 't1.micro', 'ssh_username': 'ec2_user'}, {'deploy': False, 'image': 'ami-09b61d60', 'profile': 'CentOS-5', 'provider': 'my-aws-config', 'securitygroup': ['default'], 'size': 't1.micro', 'ssh_username': 'ec2_user'}, {'deploy': False, 'image': 'ami-54cf5c3d', 'profile': 'Amazon-Linux-AMI-2012.09-64bit', 'provider': 'my-ec2-config', 'securitygroup': ['default'], 'size': 't1.micro', 'ssh_username': 'ec2_user'}, {'deploy': False, 'profile': 'development-instances', 'provider': 'my-ec2-config', 'securitygroup': ['default'], 'size': 't1.micro', 'ssh_username': 'ec2_user'}] Pretty cool right? Extending Providers Some example usage on how to use extends within the cloud providers configuration. 
Con‐ sider /etc/salt/salt/cloud.providers containing: my-develop-envs: - id: HJGRYCILJLKJYG key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn' keyname: test securitygroup: quick-start private_key: /root/test.pem location: ap-southeast-1 availability_zone: ap-southeast-1b driver: ec2 - user: @mycorp.com password: mypass ssh_key_name: mykey ssh_key_file: '/etc/salt/ibm/mykey.pem' location: Raleigh driver: ibmsce my-productions-envs: - extends: my-develop-envs:ibmsce user: @mycorp.com location: us-east-1 availability_zone: us-east-1 The above configuration, once parsed would generate the following providers data: 'providers': { 'my-develop-envs': [ {'availability_zone': 'ap-southeast-1b', 'id': 'HJGRYCILJLKJYG', 'key': 'kdjgfsgm;woormgl/aserigjksjdhasdfgn', 'keyname': 'test', 'location': 'ap-southeast-1', 'private_key': '/root/test.pem', 'driver': 'aws', 'securitygroup': 'quick-start' }, {'location': 'Raleigh', 'password': 'mypass', 'driver': 'ibmsce', 'ssh_key_file': '/etc/salt/ibm/mykey.pem', 'ssh_key_name': 'mykey', 'user': '@mycorp.com' } ], 'my-productions-envs': [ {'availability_zone': 'us-east-1', 'location': 'us-east-1', 'password': 'mypass', 'driver': 'ibmsce', 'ssh_key_file': '/etc/salt/ibm/mykey.pem', 'ssh_key_name': 'mykey', 'user': '@mycorp.com' } ] } Windows Configuration Spinning up Windows Minions It is possible to use Salt Cloud to spin up Windows instances, and then install Salt on them. This functionality is available on all cloud providers that are supported by Salt Cloud. However, it may not necessarily be available on all Windows images. Requirements Salt Cloud makes use of impacket and winexe to set up the Windows Salt Minion installer. impacket is usually available as either the impacket or the python-impacket package, depending on the distribution. More information on impacket can be found at the project home: · impacket project home winexe is less commonly available in distribution-specific repositories. However, it is currently being built for various distributions in 3rd party channels: · RPMs at pbone.net · OpenSuse Build Service Optionally WinRM can be used instead of winexe if the python module pywinrm is available and WinRM is supported on the target Windows version. Information on pywinrm can be found at the project home: · pywinrm project home Additionally, a copy of the Salt Minion Windows installer must be present on the system on which Salt Cloud is running. This installer may be downloaded from saltstack.com: · SaltStack Download Area Firewall Settings Because Salt Cloud makes use of smbclient and winexe, port 445 must be open on the target image. This port is not generally open by default on a standard Windows distribution, and care must be taken to use an image in which this port is open, or the Windows firewall is disabled. If supported by the cloud provider, a PowerShell script may be used to open up this port automatically, using the cloud provider's userdata. The following script would open up port 445, and apply the changes: <powershell> New-NetFirewallRule -Name "SMB445" -DisplayName "SMB445" -Protocol TCP -LocalPort 445 Set-Item (dir wsman:\localhost\Listener\*\Port -Recurse).pspath 445 -Force Restart-Service winrm </powershell> For EC2, this script may be saved as a file, and specified in the provider or profile con‐ figuration as userdata_file. For instance: userdata_file: /etc/salt/windows-firewall.ps1 If you are using WinRM on EC2 the HTTPS port for the WinRM service must also be enabled in your userdata. 
By default EC2 Windows images only have insecure HTTP enabled. To enable HTTPS and basic authentication required by pywinrm consider the following userdata exam‐ ple: <powershell> New-NetFirewallRule -Name "SMB445" -DisplayName "SMB445" -Protocol TCP -LocalPort 445 New-NetFirewallRule -Name "WINRM5986" -DisplayName "WINRM5986" -Protocol TCP -LocalPort 59 ↲ 86 winrm quickconfig -q winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="300"}' winrm set winrm/config '@{MaxTimeoutms="1800000"}' winrm set winrm/config/service/auth '@{Basic="true"}' $SourceStoreScope = 'LocalMachine' $SourceStorename = 'Remote Desktop' $SourceStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Sto ↲ re -ArgumentList $SourceStorename, $SourceStoreScope $SourceStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadOnly) $cert = $SourceStore.Certificates | Where-Object -FilterScript { $_.subject -like '*' } $DestStoreScope = 'LocalMachine' $DestStoreName = 'My' $DestStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store ↲ -ArgumentList $DestStoreName, $DestStoreScope $DestStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite) $DestStore.Add($cert) $SourceStore.Close() $DestStore.Close() winrm create winrm/config/listener?Address=*+Transport=HTTPS `@`{Hostname=`"($certId)`"`; ↲ CertificateThumbprint=`"($cert.Thumbprint)`"`} Restart-Service winrm </powershell> No certificate store is available by default on EC2 images and creating one does not seem possible without an MMC (cannot be automated). To use the default EC2 Windows images the above copies the RDP store. Configuration Configuration is set as usual, with some extra configuration settings. The location of the Windows installer on the machine that Salt Cloud is running on must be specified. This may be done in any of the regular configuration files (main, providers, profiles, maps). For example: Setting the installer in /etc/salt/cloud.providers: my-softlayer: driver: softlayer user: MYUSER1138 apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9' minion: master: saltmaster.example.com win_installer: /root/Salt-Minion-2014.7.0-AMD64-Setup.exe win_username: Administrator win_password: letmein smb_port: 445 The default Windows user is Administrator, and the default Windows password is blank. If WinRM is to be used use_winrm needs to be set to True. winrm_port can be used to spec‐ ify a custom port (must be HTTPS listener). Auto-Generated Passwords on EC2 On EC2, when the win_password is set to auto, Salt Cloud will query EC2 for an auto-gener‐ ated password. This password is expected to take at least 4 minutes to generate, adding additional time to the deploy process. When the EC2 API is queried for the auto-generated password, it will be returned in a mes‐ sage encrypted with the specified keyname. This requires that the appropriate private_key file is also specified. Such a profile configuration might look like: windows-server-2012: provider: my-ec2-config image: ami-c49c0dac size: m1.small securitygroup: windows keyname: mykey private_key: /root/mykey.pem userdata_file: /etc/salt/windows-firewall.ps1 win_installer: /root/Salt-Minion-2014.7.0-AMD64-Setup.exe win_username: Administrator win_password: auto Cloud Provider Specifics Getting Started With Aliyun ECS The Aliyun ECS (Elastic Computer Service) is one of the most popular public cloud hosts in China. This cloud host can be used to manage aliyun instance using salt-cloud. 
http://www.aliyun.com/ Dependencies This driver requires the Python requests library to be installed. Configuration Using Salt for Aliyun ECS requires aliyun access key id and key secret. These can be found in the aliyun web interface, in the "User Center" section, under "My Service" tab. # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. my-aliyun-config: # aliyun Access Key ID id: wDGEwGregedg3435gDgxd # aliyun Access Key Secret key: GDd45t43RDBTrkkkg43934t34qT43t4dgegerGEgg location: cn-qingdao driver: aliyun NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profiles Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.pro‐ files.d/ directory: aliyun_centos: provider: my-aliyun-config size: ecs.t1.small location: cn-qingdao securitygroup: G1989096784427999 image: centos6u3_64_20G_aliaegis_20130816.vhd Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes my-aliyun-config my-aliyun-config: ---------- aliyun: ---------- ecs.c1.large: ---------- CpuCoreCount: 8 InstanceTypeId: ecs.c1.large MemorySize: 16.0 ...SNIP... Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-aliyun-config my-aliyun-config: ---------- aliyun: ---------- centos5u8_64_20G_aliaegis_20131231.vhd: ---------- Architecture: x86_64 Description: ImageId: centos5u8_64_20G_aliaegis_20131231.vhd ImageName: CentOS 5.8 64位 ImageOwnerAlias: system ImageVersion: 1.0 OSName: CentOS 5.8 64位 Platform: CENTOS5 Size: 20 Visibility: public ...SNIP... Locations can be obtained using the --list-locations option for the salt-cloud command: my-aliyun-config: ---------- aliyun: ---------- cn-beijing: ---------- LocalName: 北京 RegionId: cn-beijing cn-hangzhou: ---------- LocalName: 杭州 RegionId: cn-hangzhou cn-hongkong: ---------- LocalName: 香港 RegionId: cn-hongkong cn-qingdao: ---------- LocalName: 青岛 RegionId: cn-qingdao Security Group can be obtained using the -f list_securitygroup option for the salt-cloud command: # salt-cloud --location=cn-qingdao -f list_securitygroup my-aliyun-config my-aliyun-config: ---------- aliyun: ---------- G1989096784427999: ---------- Description: G1989096784427999 SecurityGroupId: G1989096784427999 NOTE: Aliyun ECS REST API documentation is available from Aliyun ECS API. Getting Started With Azure New in version 2014.1.0. Azure is a cloud service by Microsoft providing virtual machines, SQL services, media ser‐ vices, and more. This document describes how to use Salt Cloud to create a virtual machine on Azure, with Salt installed. More information about Azure is located at http://www.windowsazure.com/. Dependencies · The Azure Python SDK >= 0.10.2 and < 1.0.0 · The python-requests library, for Python < 2.7.9. · A Microsoft Azure account · OpenSSL (to generate the certificates) · Salt NOTE: The Azure driver is currently being updated to work with the new version of the Python Azure SDK, 1.0.0. However until that process is complete, this driver will not work with Azure 1.0.0. 
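One way to stay on a compatible release is to pin the SDK's version range when installing it, assuming pip is used to manage the SDK on the machine running Salt Cloud (this is a sketch, not part of the driver's official requirements):

    pip install 'azure>=0.10.2,<1.0.0'
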
Please be sure you're running on a minimum version of 0.10.2 and less than version 1.0.0. See Issue #27980 for more information. Configuration Set up the provider config at /etc/salt/cloud.providers.d/azure.conf: # Note: This example is for /etc/salt/cloud.providers.d/azure.conf my-azure-config: driver: azure subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617 certificate_path: /etc/salt/azure.pem # Set up the location of the salt master # minion: master: saltmaster.example.com # Optional management_host: management.core.windows.net The certificate used must be generated by the user. OpenSSL can be used to create the man‐ agement certificates. Two certificates are needed: a .cer file, which is uploaded to Azure, and a .pem file, which is stored locally. To create the .pem file, execute the following command: openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout /etc/salt/azure.pem -out /etc/ ↲ salt/azure.pem To create the .cer file, execute the following command: openssl x509 -inform pem -in /etc/salt/azure.pem -outform der -out /etc/salt/azure.cer After creating these files, the .cer file will need to be uploaded to Azure via the "Upload a Management Certificate" action of the "Management Certificates" tab within the "Settings" section of the management portal. Optionally, a management_host may be configured, if necessary for the region. NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles: azure-ubuntu: provider: my-azure-config image: 'b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_04_3-LTS-amd64-server-20131003-en-us ↲ -30GB' size: Small location: 'East US' ssh_username: azureuser ssh_password: verybadpass slot: production media_link: 'http://portalvhdabcdefghijklmn.blob.core.windows.net/vhds' virtual_network_name: azure-virtual-network subnet_name: azure-subnet These options are described in more detail below. Once configured, the profile can be realized with a salt command: salt-cloud -p azure-ubuntu newinstance This will create an salt minion instance named newinstance in Azure. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: salt newinstance test.ping Profile Options The following options are currently available for Azure. provider The name of the provider as configured in /etc/salt/cloud.providers.d/azure.conf. image The name of the image to use to create a VM. Available images can be viewed using the fol‐ lowing command: salt-cloud --list-images my-azure-config size The name of the size to use to create a VM. Available sizes can be viewed using the fol‐ lowing command: salt-cloud --list-sizes my-azure-config location The name of the location to create a VM in. Available locations can be viewed using the following command: salt-cloud --list-locations my-azure-config affinity_group The name of the affinity group to create a VM in. Either a location or an affinity_group may be specified, but not both. See Affinity Groups below. 
ssh_username The user to use to log into the newly-created VM to install Salt. ssh_password The password to use to log into the newly-created VM to install Salt. slot The environment to which the hosted service is deployed. Valid values are staging or pro‐ duction. When set to production, the resulting URL of the new VM will be <vm_name>.cloudapp.net. When set to staging, the resulting URL will contain a generated hash instead. media_link This is the URL of the container that will store the disk that this VM uses. Currently, this container must already exist. If a VM has previously been created in the associated account, a container should already exist. In the web interface, go into the Storage area and click one of the available storage selections. Click the Containers link, and then copy the URL from the container that will be used. It generally looks like: http://portalvhdabcdefghijklmn.blob.core.windows.net/vhds service_name The name of the service in which to create the VM. If this is not specified, then a ser‐ vice will be created with the same name as the VM. virtual_network_name Optional. The name of the virtual network for the VM to join. If this is not specified, then no virtual network will be joined. subnet_name Optional. The name of the subnet in the virtual network for the VM to join. Requires that a virtual_network_name is specified. Show Instance This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. salt-cloud -a show_instance myinstance Destroying VMs There are certain options which can be specified in the global cloud configuration file (usually /etc/salt/cloud) which affect Salt Cloud's behavior when a VM is destroyed. cleanup_disks New in version 2015.8.0. Default is False. When set to True, Salt Cloud will wait for the VM to be destroyed, then attempt to destroy the main disk that is associated with the VM. cleanup_vhds New in version 2015.8.0. Default is False. Requires cleanup_disks to be set to True. When also set to True, Salt Cloud will ask Azure to delete the VHD associated with the disk that is also destroyed. cleanup_services New in version 2015.8.0. Default is False. Requires cleanup_disks to be set to True. When also set to True, Salt Cloud will wait for the disk to be destroyed, then attempt to remove the service that is associated with the VM. Because the disk belongs to the service, the disk must be destroyed before the service can be. Managing Hosted Services New in version 2015.8.0. An account can have one or more hosted services. A hosted service is required in order to create a VM. However, as mentioned above, if a hosted service is not specified when a VM is created, then one will automatically be created with the name of the name. The follow‐ ing functions are also available. create_service Create a hosted service. The following options are available. name Required. The name of the hosted service to create. label Required. A label to apply to the hosted service. description Optional. A longer description of the hosted service. location Required, if affinity_group is not set. The location in which to create the hosted ser‐ vice. Either the location or the affinity_group must be set, but not both. affinity_group Required, if location is not set. The affinity group in which to create the hosted ser‐ vice. 
Either the location or the affinity_group must be set, but not both.
extended_properties
Optional. Dictionary containing name/value pairs of hosted service properties. You can have a maximum of 50 extended property name/value pairs. The maximum length of the Name element is 64 characters, only alphanumeric characters and underscores are valid in the Name, and the name must start with a letter. The value has a maximum length of 255 characters.
CLI Example
The following example illustrates creating a hosted service.

    salt-cloud -f create_service my-azure name=my-service label=my-service location='West US'

show_service
Return details about a specific hosted service. Can also be called with get_service.

    salt-cloud -f show_service my-azure name=my-service

list_services
List all hosted services associated with the subscription.

    salt-cloud -f list_services my-azure-config

delete_service
Delete a specific hosted service.

    salt-cloud -f delete_service my-azure name=my-service

Managing Storage Accounts
New in version 2015.8.0.
Salt Cloud can manage storage accounts associated with the account. The following functions are available. Functions marked as deprecated are marked as such as per the SDK documentation, but are still included for completeness with the SDK.
create_storage
Create a storage account. The following options are supported.
name
Required. The name of the storage account to create.
label
Required. A label to apply to the storage account.
description
Optional. A longer description of the storage account.
location
Required, if affinity_group is not set. The location in which to create the storage account. Either the location or the affinity_group must be set, but not both.
affinity_group
Required, if location is not set. The affinity group in which to create the storage account. Either the location or the affinity_group must be set, but not both.
extended_properties
Optional. Dictionary containing name/value pairs of storage account properties. You can have a maximum of 50 extended property name/value pairs. The maximum length of the Name element is 64 characters, only alphanumeric characters and underscores are valid in the Name, and the name must start with a letter. The value has a maximum length of 255 characters.
geo_replication_enabled
Deprecated. Replaced by the account_type parameter.
account_type
Specifies whether the account supports locally-redundant storage, geo-redundant storage, zone-redundant storage, or read access geo-redundant storage. Possible values are:
· Standard_LRS
· Standard_ZRS
· Standard_GRS
· Standard_RAGRS
CLI Example
The following example illustrates creating a storage account.

    salt-cloud -f create_storage my-azure name=my-storage label=my-storage location='West US'

list_storage
List all storage accounts associated with the subscription.

    salt-cloud -f list_storage my-azure-config

show_storage
Return details about a specific storage account. Can also be called with get_storage.

    salt-cloud -f show_storage my-azure name=my-storage

update_storage
Update details concerning a storage account. Any of the options available in create_storage can be used, but the name cannot be changed.

    salt-cloud -f update_storage my-azure name=my-storage label=my-storage

delete_storage
Delete a specific storage account.

    salt-cloud -f delete_storage my-azure name=my-storage

show_storage_keys
Returns the primary and secondary access keys for the specified storage account.
    salt-cloud -f show_storage_keys my-azure name=my-storage

regenerate_storage_keys
Regenerate storage account keys. Requires a key_type ("primary" or "secondary") to be specified.

    salt-cloud -f regenerate_storage_keys my-azure name=my-storage key_type=primary

Managing Disks
New in version 2015.8.0.
When a VM is created, a disk will also be created for it. The following functions are available for managing disks. Functions marked as deprecated are marked as such as per the SDK documentation, but are still included for completeness with the SDK.
show_disk
Return details about a specific disk. Can also be called with get_disk.

    salt-cloud -f show_disk my-azure name=my-disk

list_disks
List all disks associated with the account.

    salt-cloud -f list_disks my-azure

update_disk
Update details for a disk. The following options are available.
name
Required. The name of the disk to update.
has_operating_system
Deprecated.
label
Required. The label for the disk.
media_link
Deprecated. The location of the disk in the account, including the storage container that it is in. This should not need to be changed.
new_name
Deprecated. If renaming the disk, the new name.
os
Deprecated.
CLI Example
The following example illustrates updating a disk.

    salt-cloud -f update_disk my-azure name=my-disk label=my-disk

delete_disk
Delete a specific disk.

    salt-cloud -f delete_disk my-azure name=my-disk

Managing Service Certificates
New in version 2015.8.0.
Stored at the cloud service level, these certificates are used by your deployed services. For more information on service certificates, see the following link:
· Manage Certificates
The following functions are available.
list_service_certificates
List service certificates associated with the account.

    salt-cloud -f list_service_certificates my-azure

show_service_certificate
Show the data for a specific service certificate associated with the account. The name, thumbprint, and thumbalgorithm can be obtained from list_service_certificates. Can also be called with get_service_certificate.

    salt-cloud -f show_service_certificate my-azure name=my_service_certificate \
        thumbalgorithm=sha1 thumbprint=0123456789ABCDEF

add_service_certificate
Add a service certificate to the account. This requires that a certificate already exists, which is then added to the account. For more information on creating the certificate itself, see:
· Create a Service Certificate for Azure
The following options are available.
name
Required. The name of the hosted service that the certificate will belong to.
data
Required. The base-64 encoded form of the pfx file.
certificate_format
Required. The service certificate format. The only supported value is pfx.
password
The certificate password.

    salt-cloud -f add_service_certificate my-azure name=my-cert \
        data='...CERT_DATA...' certificate_format=pfx password=verybadpass

delete_service_certificate
Delete a service certificate from the account. The name, thumbprint, and thumbalgorithm can be obtained from list_service_certificates.

    salt-cloud -f delete_service_certificate my-azure \
        name=my_service_certificate \
        thumbalgorithm=sha1 thumbprint=0123456789ABCDEF

Managing Management Certificates
New in version 2015.8.0.
An Azure management certificate is an X.509 v3 certificate used to authenticate an agent, such as Visual Studio Tools for Windows Azure or a client application that uses the Service Management API, acting on behalf of the subscription owner to manage subscription resources.
Azure management certificates are uploaded to Azure and stored at the subscrip‐ tion level. The management certificate store can hold up to 100 certificates per subscrip‐ tion. These certificates are used to authenticate your Windows Azure deployment. For more information on management certificates, see the following link. · Manage Certificates The following functions are available. list_management_certificates List management certificates associated with the account. salt-cloud -f list_management_certificates my-azure show_management_certificate Show the data for a specific management certificate associated with the account. The name, thumbprint, and thumbalgorithm can be obtained from list_management_certificates. Can also be called with get_management_certificate. salt-cloud -f show_management_certificate my-azure name=my_management_certificate \ thumbalgorithm=sha1 thumbprint=0123456789ABCDEF add_management_certificate Management certificates must have a key length of at least 2048 bits and should reside in the Personal certificate store. When the certificate is installed on the client, it should contain the private key of the certificate. To upload to the certificate to the Microsoft Azure Management Portal, you must export it as a .cer format file that does not contain the private key. For more information on creating management certificates, see the follow‐ ing link: · Create and Upload a Management Certificate for Azure The following options are available. public_key A base64 representation of the management certificate public key. thumbprint The thumb print that uniquely identifies the management certificate. data The certificate's raw data in base-64 encoded .cer format. salt-cloud -f add_management_certificate my-azure public_key='...PUBKEY...' \ thumbprint=0123456789ABCDEF data='...CERT_DATA...' delete_management_certificate Delete a management certificate from the account. The thumbprint can be obtained from list_management_certificates. salt-cloud -f delete_management_certificate my-azure thumbprint=0123456789ABCDEF Virtual Network Management New in version 2015.8.0. The following are functions for managing virtual networks. list_virtual_networks List input endpoints associated with the deployment. salt-cloud -f list_virtual_networks my-azure service=myservice deployment=mydeployment Managing Input Endpoints New in version 2015.8.0. Input endpoints are used to manage port access for roles. Because endpoints cannot be man‐ aged by the Azure Python SDK, Salt Cloud uses the API directly. With versions of Python before 2.7.9, the requests-python package needs to be installed in order for this to work. Additionally, the following needs to be set in the master's configuration file: requests_lib: True The following functions are available. list_input_endpoints List input endpoints associated with the deployment salt-cloud -f list_input_endpoints my-azure service=myservice deployment=mydeployment show_input_endpoint Show an input endpoint associated with the deployment salt-cloud -f show_input_endpoint my-azure service=myservice \ deployment=mydeployment name=SSH add_input_endpoint Add an input endpoint to the deployment. Please note that there may be a delay before the changes show up. The following options are available. service Required. The name of the hosted service which the VM belongs to. deployment Required. The name of the deployment that the VM belongs to. If the VM was created with Salt Cloud, the deployment name probably matches the VM name. role Required. 
The name of the role that the VM belongs to. If the VM was created with Salt Cloud, the role name probably matches the VM name.
name
Required. The name of the input endpoint. This typically matches the port that the endpoint is set to. For instance, port 22 would be called SSH.
port
Required. The public (Internet-facing) port that is used for the endpoint.
local_port
Optional. The private port on the VM itself that will be matched with the port. This is typically the same as the port. If this value is not specified, it will be copied from port.
protocol
Required. Either tcp or udp.
enable_direct_server_return
Optional. If an internal load balancer exists in the account, it can be used with a direct server return. The default value is False. Please see the following article for an explanation of this option.
· Load Balancing for Azure Infrastructure Services
timeout_for_tcp_idle_connection
Optional. The default value is 4. Please see the following article for an explanation of this option.
· Configurable Idle Timeout for Azure Load Balancer
CLI Example
The following example illustrates adding an input endpoint.

    salt-cloud -f add_input_endpoint my-azure service=myservice \
        deployment=mydeployment role=myrole name=HTTP local_port=80 \
        port=80 protocol=tcp enable_direct_server_return=False \
        timeout_for_tcp_idle_connection=4

update_input_endpoint
Updates the details for a specific input endpoint. All options from add_input_endpoint are supported.

    salt-cloud -f update_input_endpoint my-azure service=myservice \
        deployment=mydeployment role=myrole name=HTTP local_port=80 \
        port=80 protocol=tcp enable_direct_server_return=False \
        timeout_for_tcp_idle_connection=4

delete_input_endpoint
Delete an input endpoint from the deployment. Please note that there may be a delay before the changes show up. The following items are required.
service
The name of the hosted service which the VM belongs to.
deployment
The name of the deployment that the VM belongs to. If the VM was created with Salt Cloud, the deployment name probably matches the VM name.
role
The name of the role that the VM belongs to. If the VM was created with Salt Cloud, the role name probably matches the VM name.
name
The name of the input endpoint. This typically matches the port that the endpoint is set to. For instance, port 22 would be called SSH.
CLI Example
The following example illustrates deleting an input endpoint.

    salt-cloud -f delete_input_endpoint my-azure service=myservice \
        deployment=mydeployment role=myrole name=HTTP

Managing Affinity Groups
New in version 2015.8.0.
Affinity groups allow you to group your Azure services to optimize performance. All services and VMs within an affinity group will be located in the same region. For more information on Affinity groups, see the following link:
· Create an Affinity Group in the Management Portal
The following functions are available.
list_affinity_groups
List affinity groups associated with the account.

    salt-cloud -f list_affinity_groups my-azure

show_affinity_group
Show an affinity group associated with the account.

    salt-cloud -f show_affinity_group my-azure name=my_affinity_group

create_affinity_group
Create a new affinity group. The following options are supported.
name
Required. The name of the new affinity group.
location
Required. The region in which the affinity group lives.
label
Required. A label describing the new affinity group.
description
Optional. A longer description of the affinity group.
    salt-cloud -f create_affinity_group my-azure name=my_affinity_group \
        label=my-affinity-group location='West US'

update_affinity_group
Update an affinity group's properties.

    salt-cloud -f update_affinity_group my-azure name=my_group label=my_group

delete_affinity_group
Delete a specific affinity group associated with the account.

    salt-cloud -f delete_affinity_group my-azure name=my_affinity_group

Managing Blob Storage
New in version 2015.8.0.
Azure storage containers and their contents can be managed with Salt Cloud. This is not as elegant as using one of the other available clients in Windows, but it benefits Linux and Unix users, as there are fewer options available on those platforms.
Blob Storage Configuration
Blob storage must be configured differently than the standard Azure configuration. Both a storage_account and a storage_key must be specified, either through the Azure provider configuration (in addition to the other Azure configuration) or via the command line.

    storage_account: mystorage
    storage_key: ffhj334fDSGFEGDFGFDewr34fwfsFSDFwe==

storage_account
This is one of the storage accounts that is available via the list_storage function.
storage_key
Both a primary and a secondary storage_key can be obtained by running the show_storage_keys function. Either key may be used.
Blob Functions
The following functions are made available through Salt Cloud for managing blob storage.
make_blob_url
Creates the URL to access a blob.

    salt-cloud -f make_blob_url my-azure container=mycontainer blob=myblob

container
Name of the container.
blob
Name of the blob.
account
Name of the storage account. If not specified, derives the host base from the provider configuration.
protocol
Protocol to use: 'http' or 'https'. If not specified, derives the host base from the provider configuration.
host_base
Live host base URL. If not specified, derives the host base from the provider configuration.
list_storage_containers
List containers associated with the storage account.

    salt-cloud -f list_storage_containers my-azure

create_storage_container
Create a storage container.

    salt-cloud -f create_storage_container my-azure name=mycontainer

name
Name of container to create.
meta_name_values
Optional. A dict with name_value pairs to associate with the container as metadata. Example: {'Category':'test'}
blob_public_access
Optional. Possible values include: container, blob
fail_on_exist
Specify whether to throw an exception when the container exists.
show_storage_container
Show a container associated with the storage account.

    salt-cloud -f show_storage_container my-azure name=myservice

name
Name of container to show.
show_storage_container_metadata
Show a storage container's metadata.

    salt-cloud -f show_storage_container_metadata my-azure name=myservice

name
Name of container to show.
lease_id
If specified, show_storage_container_metadata only succeeds if the container's lease is active and matches this ID.
set_storage_container_metadata
Set a storage container's metadata.

    salt-cloud -f set_storage_container_metadata my-azure name=mycontainer \
        x_ms_meta_name_values='{"my_name": "my_value"}'

name
Name of existing container.
meta_name_values
A dict containing name, value for metadata. Example: {'category':'test'}
lease_id
If specified, set_storage_container_metadata only succeeds if the container's lease is active and matches this ID.
show_storage_container_acl
Show a storage container's acl.

    salt-cloud -f show_storage_container_acl my-azure name=myservice

name
Name of existing container.
lease_id
If specified, show_storage_container_acl only succeeds if the container's lease is active and matches this ID.
set_storage_container_acl
Set a storage container's acl.

    salt-cloud -f set_storage_container_acl my-azure name=mycontainer

name
Name of existing container.
signed_identifiers
SignedIdentifiers instance.
blob_public_access
Optional. Possible values include: container, blob
lease_id
If specified, set_storage_container_acl only succeeds if the container's lease is active and matches this ID.
delete_storage_container
Delete a container associated with the storage account.

    salt-cloud -f delete_storage_container my-azure name=mycontainer

name
Name of container to delete.
fail_not_exist
Specify whether to throw an exception when the container does not exist.
lease_id
If specified, delete_storage_container only succeeds if the container's lease is active and matches this ID.
lease_storage_container
Lease a container associated with the storage account.

    salt-cloud -f lease_storage_container my-azure name=mycontainer

name
Name of container to lease.
lease_action
Required. Possible values: acquire|renew|release|break|change
lease_id
Required if the container has an active lease.
lease_duration
Specifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. For backwards compatibility, the default is 60, and the value is only used on an acquire operation.
lease_break_period
Optional. For a break operation, this is the proposed duration, in seconds, that the lease should continue before it is broken, between 0 and 60 seconds. This break period is only used if it is shorter than the time remaining on the lease. If longer, the time remaining on the lease is used. A new lease will not be available before the break period has expired, but the lease may be held for longer than the break period. If this header does not appear with a break operation, a fixed-duration lease breaks after the remaining lease period elapses, and an infinite lease breaks immediately.
proposed_lease_id
Optional for acquire, required for change. Proposed lease ID, in a GUID string format.
list_blobs
List blobs associated with the container.

    salt-cloud -f list_blobs my-azure container=mycontainer

container
The name of the storage container.
prefix
Optional. Filters the results to return only blobs whose names begin with the specified prefix.
marker
Optional. A string value that identifies the portion of the list to be returned with the next list operation. The operation returns a marker value within the response body if the list returned was not complete. The marker value may then be used in a subsequent call to request the next set of list items. The marker value is opaque to the client.
maxresults
Optional. Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxresults or specifies a value greater than 5,000, the server will return up to 5,000 items. Setting maxresults to a value less than or equal to zero results in error response code 400 (Bad Request).
include
Optional. Specifies one or more datasets to include in the response. To specify more than one of these options on the URI, you must separate each option with a comma. Valid values are:
snapshots: Specifies that snapshots should be included in the enumeration. Snapshots are listed from oldest to newest in the response.
metadata: Specifies that blob metadata be returned in the response. uncommittedblobs: Specifies that blobs for which blocks have been uploaded, but which have not been committed using Put Block List (REST API), be included in the response. copy: Version 2012-02-12 and newer. Specifies that metadata related to any current or previous Copy Blob operation should be included in the response. delimiter Optional. When the request includes this parameter, the operation returns a BlobPrefix element in the response body that acts as a placeholder for all blobs whose names begin with the same substring up to the appearance of the delimiter character. The delimiter may be a single character or a string. show_blob_service_properties Show a blob's service properties salt-cloud -f show_blob_service_properties my-azure set_blob_service_properties Sets the properties of a storage account's Blob service, including Windows Azure Storage Analytics. You can also use this operation to set the default request version for all incoming requests that do not have a version specified. salt-cloud -f set_blob_service_properties my-azure properties a StorageServiceProperties object. timeout Optional. The timeout parameter is expressed in seconds. show_blob_properties Returns all user-defined metadata, standard HTTP properties, and system properties for the blob. salt-cloud -f show_blob_properties my-azure container=mycontainer blob=myblob container Name of existing container. blob Name of existing blob. lease_id Required if the blob has an active lease. set_blob_properties Set a blob's properties salt-cloud -f set_blob_properties my-azure container Name of existing container. blob Name of existing blob. blob_cache_control Optional. Modifies the cache control string for the blob. blob_content_type Optional. Sets the blob's content type. blob_content_md5 Optional. Sets the blob's MD5 hash. blob_content_encoding Optional. Sets the blob's content encoding. blob_content_language Optional. Sets the blob's content language. lease_id Required if the blob has an active lease. blob_content_disposition Optional. Sets the blob's Content-Disposition header. The Content-Disposition response header field conveys additional information about how to process the response payload, and also can be used to attach additional metadata. For example, if set to attachment, it indicates that the user-agent should not display the response, but instead show a Save As dialog with a filename other than the blob name specified. put_blob Upload a blob salt-cloud -f put_blob my-azure container=base name=top.sls blob_path=/srv/salt/top.sls salt-cloud -f put_blob my-azure container=base name=content.txt blob_content='Some content ↲ ' container Name of existing container. name Name of existing blob. blob_path The path on the local machine of the file to upload as a blob. Either this or blob_content must be specified. blob_content The actual content to be uploaded as a blob. Either this or blob_path must me specified. cache_control Optional. The Blob service stores this value but does not use or modify it. content_language Optional. Specifies the natural languages used by this resource. content_md5 Optional. An MD5 hash of the blob content. This hash is used to verify the integrity of the blob during transport. When this header is specified, the storage service checks the hash that has arrived with the one that was sent. If the two hashes do not match, the operation will fail with error code 400 (Bad Request). blob_content_type Optional. 
Set the blob's content type. blob_content_encoding Optional. Set the blob's content encoding. blob_content_language Optional. Set the blob's content language. blob_content_md5 Optional. Set the blob's MD5 hash. blob_cache_control Optional. Sets the blob's cache control. meta_name_values A dict containing name, value for metadata. lease_id Required if the blob has an active lease. get_blob Download a blob salt-cloud -f get_blob my-azure container=base name=top.sls local_path=/srv/salt/top.sls salt-cloud -f get_blob my-azure container=base name=content.txt return_content=True container Name of existing container. name Name of existing blob. local_path The path on the local machine to download the blob to. Either this or return_content must be specified. return_content Whether or not to return the content directly from the blob. If specified, must be True or False. Either this or the local_path must be specified. snapshot Optional. The snapshot parameter is an opaque DateTime value that, when present, specifies the blob snapshot to retrieve. lease_id Required if the blob has an active lease. progress_callback callback for progress with signature function(current, total) where current is the number of bytes transferred so far, and total is the size of the blob. max_connections Maximum number of parallel connections to use when the blob size exceeds 64MB. Set to 1 to download the blob chunks sequentially. Set to 2 or more to download the blob chunks in parallel. This uses more system resources but will download faster. max_retries Number of times to retry download of blob chunk if an error occurs. retry_wait Sleep time in secs between retries. Getting Started With DigitalOcean DigitalOcean is a public cloud host that specializes in Linux instances. Configuration Using Salt for DigitalOcean requires a personal_access_token, an ssh_key_file, and at least one SSH key name in ssh_key_names. More ssh_key_names can be added by separating each key with a comma. The personal_access_token can be found in the DigitalOcean web interface in the "Apps & API" section. The SSH key name can be found under the "SSH Keys" section. # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. my-digitalocean-config: driver: digital_ocean personal_access_token: xxx ssh_key_file: /path/to/ssh/key/file ssh_key_names: my-key-name,my-key-name-2 location: New York 1 NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profiles Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.pro‐ files.d/ directory: digitalocean-ubuntu: provider: my-digitalocean-config image: 14.04 x64 size: 512MB location: New York 1 private_networking: True backups_enabled: True ipv6: True Locations can be obtained using the --list-locations option for the salt-cloud command: # salt-cloud --list-locations my-digitalocean-config my-digitalocean-config: ---------- digital_ocean: ---------- Amsterdam 1: ---------- available: False features: [u'backups'] name: Amsterdam 1 sizes: [] slug: ams1 ...SNIP... 
Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes my-digitalocean-config my-digitalocean-config: ---------- digital_ocean: ---------- 512MB: ---------- cost_per_hour: 0.00744 cost_per_month: 5.0 cpu: 1 disk: 20 id: 66 memory: 512 name: 512MB slug: None ...SNIP... Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-digitalocean-config my-digitalocean-config: ---------- digital_ocean: ---------- 10.1: ---------- created_at: 2015-01-20T20:04:34Z distribution: FreeBSD id: 10144573 min_disk_size: 20 name: 10.1 public: True ...SNIP... Profile Specifics: ssh_username If using a FreeBSD image from Digital Ocean, you'll need to set the ssh_username setting to freebsd in your profile configuration. digitalocean-freebsd: provider: my-digitalocean-config image: 10.2 size: 512MB ssh_username: freebsd Miscellaneous Information NOTE: DigitalOcean's concept of Applications is nothing more than a pre-configured instance (same as a normal Droplet). You will find examples such Docker 0.7 Ubuntu 13.04 x64 and Wordpress on Ubuntu 12.10 when using the --list-images option. These names can be used just like the rest of the standard instances when specifying an image in the cloud pro‐ file configuration. NOTE: If your domain's DNS is managed with DigitalOcean, you can automatically create A-records for newly created droplets. Use create_dns_record: True in your config to enable this. Add delete_dns_record: True to also delete records when a droplet is destroyed. NOTE: Additional documentation is available from DigitalOcean. Getting Started With AWS EC2 Amazon EC2 is a very widely used public cloud platform and one of the core platforms Salt Cloud has been built to support. Previously, the suggested driver for AWS EC2 was the aws driver. This has been deprecated in favor of the ec2 driver. Configuration using the old aws driver will still function, but that driver is no longer in active development. Dependencies This driver requires the Python requests library to be installed. Configuration The following example illustrates some of the options that can be set. These parameters are discussed in more detail below. # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. my-ec2-southeast-public-ips: # Set up the location of the salt master # minion: master: saltmaster.example.com # Set up grains information, which will be common for all nodes # using this provider grains: node_type: broker release: 1.0.1 # Specify whether to use public or private IP for deploy script. # # Valid options are: # private_ips - The salt-cloud command is run inside the EC2 # public_ips - The salt-cloud command is run outside of EC2 # ssh_interface: public_ips # Optionally configure the Windows credential validation number of # retries and delay between retries. This defaults to 10 retries # with a one second delay betwee retries win_deploy_auth_retries: 10 win_deploy_auth_retry_delay: 1 # Set the EC2 access credentials (see below) # id: 'use-instance-role-credentials' key: 'use-instance-role-credentials' # Make sure this key is owned by root with permissions 0400. # private_key: /etc/salt/my_test_key.pem keyname: my_test_key securitygroup: default # Optionally configure default region # Use salt-cloud --list-locations <provider> to obtain valid regions # location: ap-southeast-1 availability_zone: ap-southeast-1b # Configure which user to use to run the deploy script. 
This setting is # dependent upon the AMI that is used to deploy. It is usually safer to # configure this individually in a profile, than globally. Typical users # are: # # Amazon Linux -> ec2-user # RHEL -> ec2-user # CentOS -> ec2-user # Ubuntu -> ubuntu # ssh_username: ec2-user # Optionally add an IAM profile iam_profile: 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile' driver: ec2 my-ec2-southeast-private-ips: # Set up the location of the salt master # minion: master: saltmaster.example.com # Specify whether to use public or private IP for deploy script. # # Valid options are: # private_ips - The salt-master is also hosted with EC2 # public_ips - The salt-master is hosted outside of EC2 # ssh_interface: private_ips # Optionally configure the Windows credential validation number of # retries and delay between retries. This defaults to 10 retries # with a one second delay betwee retries win_deploy_auth_retries: 10 win_deploy_auth_retry_delay: 1 # Set the EC2 access credentials (see below) # id: 'use-instance-role-credentials' key: 'use-instance-role-credentials' # Make sure this key is owned by root with permissions 0400. # private_key: /etc/salt/my_test_key.pem keyname: my_test_key # This one should NOT be specified if VPC was not configured in AWS to be # the default. It might cause an error message which says that network # interfaces and an instance-level security groups may not be specified # on the same request. # securitygroup: default # Optionally configure default region # location: ap-southeast-1 availability_zone: ap-southeast-1b # Configure which user to use to run the deploy script. This setting is # dependent upon the AMI that is used to deploy. It is usually safer to # configure this individually in a profile, than globally. Typical users # are: # # Amazon Linux -> ec2-user # RHEL -> ec2-user # CentOS -> ec2-user # Ubuntu -> ubuntu # ssh_username: ec2-user # Optionally add an IAM profile iam_profile: 'my other profile name' driver: ec2 NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Access Credentials The id and key settings may be found in the Security Credentials area of the AWS Account page: https://portal.aws.amazon.com/gp/aws/securityCredentials Both are located in the Access Credentials area of the page, under the Access Keys tab. The id setting is labeled Access Key ID, and the key setting is labeled Secret Access Key. Note: if either id or key is set to 'use-instance-role-credentials' it is assumed that Salt is running on an AWS instance, and the instance role credentials will be retrieved and used. Since both the id and key are required parameters for the AWS ec2 provider, it is recommended to set both to 'use-instance-role-credentials' for this functionality. A "static" and "permanent" Access Key ID and Secret Key can be specified, but this is not recommended. Instance role keys are rotated on a regular basis, and are the recommended method of specifying AWS credentials. Windows Deploy Timeouts For Windows instances, it may take longer than normal for the instance to be ready. 
In these circumstances, the provider configuration can be configured with a win_deploy_auth_retries and/or a win_deploy_auth_retry_delay setting, which default to 10 retries and a one second delay between retries. These retries and timeouts relate to validating the Administrator password once AWS provides the credentials via the AWS API.
Key Pairs
In order to create an instance with Salt installed and configured, a key pair will need to be created. This can be done in the EC2 Management Console, in the Key Pairs area. These key pairs are unique to a specific region. Keys in the us-east-1 region can be configured at:
https://console.aws.amazon.com/ec2/home?region=us-east-1#s=KeyPairs
Keys in the us-west-1 region can be configured at:
https://console.aws.amazon.com/ec2/home?region=us-west-1#s=KeyPairs
...and so on. When creating a key pair, the browser will prompt to download a pem file. This file must be placed in a directory accessible by Salt Cloud, with permissions set to either 0400 or 0600.
Security Groups
An instance on EC2 needs to belong to a security group. Like key pairs, these are unique to a specific region. These are also configured in the EC2 Management Console. Security groups for the us-east-1 region can be configured at:
https://console.aws.amazon.com/ec2/home?region=us-east-1#s=SecurityGroups
...and so on. A security group defines firewall rules which an instance will adhere to. If the salt-master is configured outside of EC2, the security group must open the SSH port (usually port 22) in order for Salt Cloud to install Salt.
IAM Profile
Amazon EC2 instances support the concept of an instance profile, which is a logical container for the IAM role. At the time that you launch an EC2 instance, you can associate the instance with an instance profile, which in turn corresponds to the IAM role. Any software that runs on the EC2 instance is able to access AWS using the permissions associated with the IAM role.
Scaffolding the profile is a 2-step configuration process:
1. Configure an IAM Role from the IAM Management Console.
2. Attach this role to a new profile. It can be done with the AWS CLI:

    > aws iam create-instance-profile --instance-profile-name PROFILE_NAME
    > aws iam add-role-to-instance-profile --instance-profile-name PROFILE_NAME --role-name ROLE_NAME

Once the profile is created, you can use the PROFILE_NAME to configure your cloud profiles, as in the sketch below.
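For example, the instance profile can be referenced from a profile (or from the provider block, as shown earlier) by its name or its full ARN. A minimal sketch, reusing the provider, image, and ARN values that appear elsewhere in this document; the profile name base_ec2_iam is hypothetical:

    base_ec2_iam:
      provider: my-ec2-southeast-public-ips
      image: ami-e565ba8c
      size: t2.micro
      ssh_username: ec2-user
      iam_profile: 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile'  # or simply PROFILE_NAME
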
Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles: base_ec2_private: provider: my-ec2-southeast-private-ips image: ami-e565ba8c size: t2.micro ssh_username: ec2-user base_ec2_public: provider: my-ec2-southeast-public-ips image: ami-e565ba8c size: t2.micro ssh_username: ec2-user base_ec2_db: provider: my-ec2-southeast-public-ips image: ami-e565ba8c size: m1.xlarge ssh_username: ec2-user volumes: - { size: 10, device: /dev/sdf } - { size: 10, device: /dev/sdg, type: io1, iops: 1000 } - { size: 10, device: /dev/sdh, type: io1, iops: 1000 } # optionally add tags to profile: tag: {'Environment': 'production', 'Role': 'database'} # force grains to sync after install sync_after_install: grains base_ec2_vpc: provider: my-ec2-southeast-public-ips image: ami-a73264ce size: m1.xlarge ssh_username: ec2-user script: /etc/salt/cloud.deploy.d/user_data.sh network_interfaces: - DeviceIndex: 0 PrivateIpAddresses: - Primary: True #auto assign public ip (not EIP) AssociatePublicIpAddress: True SubnetId: subnet-813d4bbf SecurityGroupId: - sg-750af413 del_root_vol_on_destroy: True del_all_vol_on_destroy: True volumes: - { size: 10, device: /dev/sdf } - { size: 10, device: /dev/sdg, type: io1, iops: 1000 } - { size: 10, device: /dev/sdh, type: io1, iops: 1000 } tag: {'Environment': 'production', 'Role': 'database'} sync_after_install: grains The profile can now be realized with a salt command: # salt-cloud -p base_ec2 ami.example.com # salt-cloud -p base_ec2_public ami.example.com # salt-cloud -p base_ec2_private ami.example.com This will create an instance named ami.example.com in EC2. The minion that is installed on this instance will have an id of ami.example.com. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: # salt 'ami.example.com' test.ping Required Settings The following settings are always required for EC2: # Set the EC2 login data my-ec2-config: id: HJGRYCILJLKJYG key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn' keyname: test securitygroup: quick-start private_key: /root/test.pem driver: ec2 Optional Settings EC2 allows a userdata file to be passed to the instance to be created. This functionality was added to Salt in the 2015.5.0 release. my-ec2-config: # Pass userdata to the instance to be created userdata_file: /etc/salt/my-userdata-file EC2 allows a location to be set for servers to be deployed in. Availability zones exist inside regions, and may be added to increase specificity. my-ec2-config: # Optionally configure default region location: ap-southeast-1 availability_zone: ap-southeast-1b EC2 instances can have a public or private IP, or both. When an instance is deployed, Salt Cloud needs to log into it via SSH to run the deploy script. By default, the public IP will be used for this. If the salt-cloud command is run from another EC2 instance, the private IP should be used. my-ec2-config: # Specify whether to use public or private IP for deploy script # private_ips or public_ips ssh_interface: public_ips Many EC2 instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Some common usernames include ec2-user (for Amazon Linux), ubuntu (for Ubuntu instances), admin (official Debian) and bitnami (for images provided by Bitnami). 
my-ec2-config: # Configure which user to use to run the deploy script ssh_username: ec2-user Multiple usernames can be provided, in which case Salt Cloud will attempt to guess the correct username. This is mostly useful in the main configuration file: my-ec2-config: ssh_username: - ec2-user - ubuntu - admin - bitnami Multiple security groups can also be specified in the same fashion: my-ec2-config: securitygroup: - default - extra Your instances may optionally make use of EC2 Spot Instances. The following example will request that spot instances be used and your maximum bid will be $0.10. Keep in mind that different spot prices may be needed based on the current value of the various EC2 instance sizes. You can check current and past spot instance pricing via the EC2 API or AWS Con‐ sole. my-ec2-config: spot_config: spot_price: 0.10 By default, the spot instance type is set to 'one-time', meaning it will be launched and, if it's ever terminated for whatever reason, it will not be recreated. If you would like your spot instances to be relaunched after a termination (by your or AWS), set the type to 'persistent'. NOTE: Spot instances are a great way to save a bit of money, but you do run the risk of losing your spot instances if the current price for the instance size goes above your max‐ imum bid. The following parameters may be set in the cloud configuration file to control various aspects of the spot instance launching: · wait_for_spot_timeout: seconds to wait before giving up on spot instance launch (default=600) · wait_for_spot_interval: seconds to wait in between polling requests to determine if a spot instance is available (default=30) · wait_for_spot_interval_multiplier: a multiplier to add to the interval in between requests, which is useful if AWS is throttling your requests (default=1) · wait_for_spot_max_failures: maximum number of failures before giving up on launching your spot instance (default=10) If you find that you're being throttled by AWS while polling for spot instances, you can set the following in your core cloud configuration file that will double the polling interval after each request to AWS. wait_for_spot_interval: 1 wait_for_spot_interval_multiplier: 2 See the AWS Spot Instances documentation for more information. Block device mappings enable you to specify additional EBS volumes or instance store vol‐ umes when the instance is launched. This setting is also available on each cloud profile. Note that the number of instance stores varies by instance type. If more mappings are provided than are supported by the instance type, mappings will be created in the order provided and additional mappings will be ignored. Consult the AWS documentation for a listing of the available instance stores, and device names. my-ec2-config: block_device_mappings: - DeviceName: /dev/sdb VirtualName: ephemeral0 - DeviceName: /dev/sdc VirtualName: ephemeral1 You can also use block device mappings to change the size of the root device at the provi‐ sioning time. For example, assuming the root device is '/dev/sda', you can set its size to 100G by using the following configuration. my-ec2-config: block_device_mappings: - DeviceName: /dev/sda Ebs.VolumeSize: 100 Ebs.VolumeType: gp2 Ebs.SnapshotId: dummy0 - DeviceName: /dev/sdb # required for devices > 2TB Ebs.VolumeType: gp2 Ebs.VolumeSize: 3001 Existing EBS volumes may also be attached (not created) to your instances or you can cre‐ ate new EBS volumes based on EBS snapshots. To simply attach an existing volume use the volume_id parameter. 
    device: /dev/xvdj
    volume_id: vol-12345abcd

Or, to create a volume from an EBS snapshot, use the snapshot parameter.

    device: /dev/xvdj
    snapshot: snap-abcd12345

Note that volume_id will take precedence over the snapshot parameter.
Tags can be set once an instance has been launched.

    my-ec2-config:
      tag:
        tag0: value
        tag1: value

Modify EC2 Tags
One of the features of EC2 is the ability to tag resources. In fact, under the hood, the names given to EC2 instances by salt-cloud are actually just stored as a tag called Name. Salt Cloud has the ability to manage these tags:

    salt-cloud -a get_tags mymachine
    salt-cloud -a set_tags mymachine tag1=somestuff tag2='Other stuff'
    salt-cloud -a del_tags mymachine tag1,tag2,tag3

It is possible to manage tags on any resource in EC2 with a Resource ID, not just instances:

    salt-cloud -f get_tags my_ec2 resource_id=af5467ba
    salt-cloud -f set_tags my_ec2 resource_id=af5467ba tag1=somestuff
    salt-cloud -f del_tags my_ec2 resource_id=af5467ba tag1,tag2,tag3

Rename EC2 Instances
As mentioned above, EC2 instances are named via a tag. However, renaming an instance by renaming its tag will cause the salt keys to mismatch. A rename function exists which renames both the instance and the salt keys.

    salt-cloud -a rename mymachine newname=yourmachine

EC2 Termination Protection
EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed.

    salt-cloud -a enable_term_protect mymachine
    salt-cloud -a disable_term_protect mymachine

Rename on Destroy
When instances on EC2 are destroyed, there will be a lag between the time that the action is sent and the time that Amazon cleans up the instance. During this time, the instance still retains a Name tag, which will cause a collision if the creation of an instance with the same name is attempted before the cleanup occurs. In order to avoid such collisions, Salt Cloud can be configured to rename instances when they are destroyed. The new name will look something like:

    myinstance-DEL20f5b8ad4eb64ed88f2c428df80a1a0c

In order to enable this, add a rename_on_destroy line to the main configuration file:

    my-ec2-config:
      rename_on_destroy: True

Listing Images
Normally, images can be queried on a cloud provider by passing the --list-images argument to Salt Cloud. This still holds true for EC2:

    salt-cloud --list-images my-ec2-config

However, the full list of images on EC2 is extremely large, and querying all of the available images may cause Salt Cloud to behave as if frozen. Therefore, the default behavior of this option may be modified by adding an owner argument to the provider configuration:

    owner: aws-marketplace

The possible values for this setting are amazon, aws-marketplace, self, <AWS account ID> or all. The default setting is amazon. Take note that all and aws-marketplace may cause Salt Cloud to appear as if it is freezing, as it tries to handle the large amount of data.
It is also possible to perform this query using different settings without modifying the configuration files. To do this, call the avail_images function directly:

    salt-cloud -f avail_images my-ec2-config owner=aws-marketplace

EC2 Images
The following are lists of available AMI images, generally sorted by OS. These lists are on 3rd-party websites and are not managed by Salt Stack in any way. They are provided here as a reference for those who are interested, and contain no warranty (express or implied) from anyone affiliated with Salt Stack.
Most of them have never been used, much less tested, by the Salt Stack team. · Arch Linux · FreeBSD · Fedora · CentOS · Ubuntu · Debian · OmniOS · All Images on Amazon show_image This is a function that describes an AMI on EC2. This will give insight as to the defaults that will be applied to an instance using a particular AMI. $ salt-cloud -f show_image ec2 image=ami-fd20ad94 show_instance This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. $ salt-cloud -a show_instance myinstance ebs_optimized This argument enables switching of the EbsOptimized setting which default to 'false'. Indicates whether the instance is optimized for EBS I/O. This optimization provides dedi‐ cated throughput to Amazon EBS and an optimized configuration stack to provide optimal Amazon EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance. This setting can be added to the profile or map file for an instance. If set to True, this setting will enable an instance to be EbsOptimized ebs_optimized: True This can also be set as a cloud provider setting in the EC2 cloud configuration: my-ec2-config: ebs_optimized: True del_root_vol_on_destroy This argument overrides the default DeleteOnTermination setting in the AMI for the EBS root volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance. If set, this setting will apply to the root EBS volume del_root_vol_on_destroy: True This can also be set as a cloud provider setting in the EC2 cloud configuration: my-ec2-config: del_root_vol_on_destroy: True del_all_vols_on_destroy This argument overrides the default DeleteOnTermination setting in the AMI for the not-root EBS volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance. If set, this setting will apply to any (non-root) volumes that were created by salt-cloud using the 'volumes' setting. 
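For reference, a 'volumes' entry inside a cloud profile might look like the following sketch; the profile name, device names, and sizes here are only illustrative and are not taken from this document:

    my-ec2-profile:          # hypothetical profile name
      volumes:
        - { size: 10, device: /dev/sdf }   # illustrative device/size only
        - { size: 10, device: /dev/sdg, type: gp2 }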
The volumes will not be deleted under the following conditions * If a volume is detached before terminating the instance * If a volume is created without this setting and attached to the instance del_all_vols_on_destroy: True This can also be set as a cloud provider setting in the EC2 cloud configuration: my-ec2-config: del_all_vols_on_destroy: True The setting for this may be changed on all volumes of an existing instance using one of the following commands: salt-cloud -a delvol_on_destroy myinstance salt-cloud -a keepvol_on_destroy myinstance salt-cloud -a show_delvol_on_destroy myinstance The setting for this may be changed on a volume on an existing instance using one of the following commands: salt-cloud -a delvol_on_destroy myinstance device=/dev/sda1 salt-cloud -a delvol_on_destroy myinstance volume_id=vol-1a2b3c4d salt-cloud -a keepvol_on_destroy myinstance device=/dev/sda1 salt-cloud -a keepvol_on_destroy myinstance volume_id=vol-1a2b3c4d salt-cloud -a show_delvol_on_destroy myinstance device=/dev/sda1 salt-cloud -a show_delvol_on_destroy myinstance volume_id=vol-1a2b3c4d EC2 Termination Protection EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed. The EC2 driver adds a show_term_protect action to the regular EC2 functionality. salt-cloud -a show_term_protect mymachine salt-cloud -a enable_term_protect mymachine salt-cloud -a disable_term_protect mymachine Alternate Endpoint Normally, EC2 endpoints are build using the region and the service_url. The resulting end‐ point would follow this pattern: ec2.<region>.<service_url> This results in an endpoint that looks like: ec2.us-east-1.amazonaws.com There are other projects that support an EC2 compatibility layer, which this scheme does not account for. This can be overridden by specifying the endpoint directly in the main cloud configuration file: my-ec2-config: endpoint: myendpoint.example.com:1138/services/Cloud Volume Management The EC2 driver has several functions and actions for management of EBS volumes. Creating Volumes A volume may be created, independent of an instance. A zone must be specified. A size or a snapshot may be specified (in GiB). If neither is given, a default size of 10 GiB will be used. If a snapshot is given, the size of the snapshot will be used. The following parameters may also be set (when providing a snapshot OR size): · type: choose between standard (magnetic disk), gp2 (SSD), or io1 (provisioned IOPS). (default=standard) · iops: the number of IOPS (only applicable to io1 volumes) (default varies on volume size) · encrypted: enable encryption on the volume (default=false) salt-cloud -f create_volume ec2 zone=us-east-1b salt-cloud -f create_volume ec2 zone=us-east-1b size=10 salt-cloud -f create_volume ec2 zone=us-east-1b snapshot=snap12345678 salt-cloud -f create_volume ec2 size=10 type=standard salt-cloud -f create_volume ec2 size=10 type=gp2 salt-cloud -f create_volume ec2 size=10 type=io1 iops=1000 Attaching Volumes Unattached volumes may be attached to an instance. The following values are required; name or instance_id, volume_id, and device. salt-cloud -a attach_volume myinstance volume_id=vol-12345 device=/dev/sdb1 Show a Volume The details about an existing volume may be retrieved. salt-cloud -a show_volume myinstance volume_id=vol-12345 salt-cloud -f show_volume ec2 volume_id=vol-12345 Detaching Volumes An existing volume may be detached from an instance. 
salt-cloud -a detach_volume myinstance volume_id=vol-12345 Deleting Volumes A volume that is not attached to an instance may be deleted. salt-cloud -f delete_volume ec2 volume_id=vol-12345 Managing Key Pairs The EC2 driver has the ability to manage key pairs. Creating a Key Pair A key pair is required in order to create an instance. When creating a key pair with this function, the return data will contain a copy of the private key. This private key is not stored by Amazon, will not be obtainable past this point, and should be stored immedi‐ ately. salt-cloud -f create_keypair ec2 keyname=mykeypair Importing a Key Pair salt-cloud -f import_keypair ec2 keyname=mykeypair file=/path/to/id_rsa.pub Show a Key Pair This function will show the details related to a key pair, not including the private key itself (which is not stored by Amazon). salt-cloud -f show_keypair ec2 keyname=mykeypair Delete a Key Pair This function removes the key pair from Amazon. salt-cloud -f delete_keypair ec2 keyname=mykeypair Launching instances into a VPC Simple launching into a VPC In the amazon web interface, identify the id of the subnet into which your image should be created. Then, edit your cloud.profiles file like so:- profile-id: provider: provider-name subnetid: subnet-XXXXXXXX image: ami-XXXXXXXX size: m1.medium ssh_username: ubuntu securitygroupid: - sg-XXXXXXXX Specifying interface properties New in version 2014.7.0. Launching into a VPC allows you to specify more complex configurations for the network interfaces of your virtual machines, for example:- profile-id: provider: provider-name image: ami-XXXXXXXX size: m1.medium ssh_username: ubuntu # Do not include either 'subnetid' or 'securitygroupid' here if you are # going to manually specify interface configuration # network_interfaces: - DeviceIndex: 0 SubnetId: subnet-XXXXXXXX SecurityGroupId: - sg-XXXXXXXX # Uncomment this line if you would like to set an explicit private # IP address for the ec2 instance # # PrivateIpAddress: 192.168.1.66 # Uncomment this to associate an existing Elastic IP Address with # this network interface: # # associate_eip: eipalloc-XXXXXXXX # You can allocate more than one IP address to an interface. Use the # 'ip addr list' command to see them. # # SecondaryPrivateIpAddressCount: 2 # Uncomment this to allocate a new Elastic IP Address to this # interface (will be associated with the primary private ip address # of the interface # # allocate_new_eip: True # Uncomment this instead to allocate a new Elastic IP Address to # both the primary private ip address and each of the secondary ones # allocate_new_eips: True # Uncomment this if you're creating NAT instances. Allows an instance # to accept IP packets with destinations other than itself. # SourceDestCheck: False Note that it is an error to assign a 'subnetid' or 'securitygroupid' to a profile where the interfaces are manually configured like this. These are both really properties of each network interface, not of the machine itself. Getting Started With GoGrid GoGrid is a public cloud host that supports Linux and Windows. Configuration To use Salt Cloud with GoGrid log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab. The apikey and the sharedsecret configuration parameters need to be set in the configura‐ tion file to enable interfacing with GoGrid: # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. 
my-gogrid-config: driver: gogrid apikey: asdff7896asdh789 sharedsecret: saltybacon NOTE: A Note about using Map files with GoGrid: Due to limitations in the GoGrid API, instances cannot be provisioned in parallel with the GoGrid driver. Map files will work with GoGrid, but the -P argument should not be used on maps referencing GoGrid instances. NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profiles Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.pro‐ files.d/ directory: gogrid_512: provider: my-gogrid-config size: 512MB image: CentOS 6.2 (64-bit) w/ None Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes my-gogrid-config my-gogrid-config: ---------- gogrid: ---------- 512MB: ---------- bandwidth: None disk: 30 driver: get_uuid: id: 512MB name: 512MB price: 0.095 ram: 512 uuid: bde1e4d7c3a643536e42a35142c7caac34b060e9 ...SNIP... Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-gogrid-config my-gogrid-config: ---------- gogrid: ---------- CentOS 6.4 (64-bit) w/ None: ---------- driver: extra: ---------- get_uuid: id: 18094 name: CentOS 6.4 (64-bit) w/ None uuid: bfd4055389919e01aa6261828a96cf54c8dcc2c4 ...SNIP... Assigning IPs New in version 2015.8.0. The GoGrid API allows IP addresses to be manually assigned. Salt Cloud supports this func‐ tionality by allowing an IP address to be specified using the assign_public_ip argument. This likely makes the most sense inside a map file, but it may also be used inside a pro‐ file. gogrid_512: provider: my-gogrid-config size: 512MB image: CentOS 6.2 (64-bit) w/ None assign_public_ip: 11.38.257.42 Getting Started With Google Compute Engine Google Compute Engine (GCE) is Google-infrastructure as a service that lets you run your large-scale computing workloads on virtual machines. This document covers how to use Salt Cloud to provision and manage your virtual machines hosted within Google's infrastructure. You can find out more about GCE and other Google Cloud Platform services at https://cloud.google.com. Dependencies · LibCloud >= 0.14.1 · A Google Cloud Platform account with Compute Engine enabled · A registered Service Account for authorization · Oh, and obviously you'll need salt Google Compute Engine Setup 1. Sign up for Google Cloud Platform Go to https://cloud.google.com and use your Google account to sign up for Google Cloud Platform and complete the guided instructions. 2. Create a Project Next, go to the console at https://cloud.google.com/console and create a new Project. Make sure to select your new Project if you are not automatically directed to the Project. Projects are a way of grouping together related users, services, and billing. You may opt to create multiple Projects and the remaining instructions will need to be com‐ pleted for each Project if you wish to use GCE and Salt Cloud to manage your virtual machines. 3. 
Enable the Google Compute Engine service In your Project, either just click Compute Engine to the left, or go to the APIs & auth section and APIs link and enable the Google Compute Engine service. 4. Create a Service Account To set up authorization, navigate to APIs & auth section and then the Credentials link and click the CREATE NEW CLIENT ID button. Select Service Account and click the Create Client ID button. This will automatically download a .json file, which may or may not be used in later steps, depending on your version of libcloud. Look for a new Service Account section in the page and record the generated email address for the matching key/fingerprint. The email address will be used in the ser‐ vice_account_email_address of the /etc/salt/cloud.providers or the /etc/salt/cloud.providers.d/*.conf file. 5. Key Format NOTE: If you are using libcloud >= 0.17.0 it is recommended that you use the JSON format file you downloaded above and skip to the Provider Configuration section below, using the JSON file in place of 'NEW.pem' in the documentation. If you are using an older version of libcloud or are unsure of the version you have, please follow the instructions below to generate and format a new P12 key. In the new Service Account section, click Generate new P12 key, which will automati‐ cally download a .p12 private key file. The .p12 private key needs to be converted to a format compatible with libcloud. This new Google-generated private key was encrypted using notasecret as a passphrase. Use the following command and record the location of the converted private key and record the location for use in the service_account_pri‐ vate_key of the /etc/salt/cloud file: openssl pkcs12 -in ORIG.p12 -passin pass:notasecret \ -nodes -nocerts | openssl rsa -out NEW.pem Provider Configuration Set up the provider cloud config at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/*.conf: gce-config: # Set up the Project name and Service Account authorization project: "your-project-id" service_account_email_address: "@developer.gserviceaccount.com" service_account_private_key: "/path/to/your/NEW.pem" # Set up the location of the salt master minion: master: saltmaster.example.com # Set up grains information, which will be common for all nodes # using this provider grains: node_type: broker release: 1.0.1 driver: gce NOTE: The value provided for project must not contain underscores or spaces and is labeled as "Project ID" on the Google Developers Console. NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profile Configuration Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.pro‐ files.d/*.conf: my-gce-profile: image: centos-6 size: n1-standard-1 location: europe-west1-b network: default tags: '["one", "two", "three"]' metadata: '{"one": "1", "2": "two"}' use_persistent_disk: True delete_boot_pd: False deploy: True make_master: False provider: gce-config The profile can be realized now with a salt command: salt-cloud -p my-gce-profile gce-instance This will create an salt minion instance named gce-instance in GCE. 
If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with a salt-minion installed, connectivity to it can be verified with Salt: salt gce-instance test.ping GCE Specific Settings Consult the sample profile below for more information about GCE specific settings. Some of them are mandatory and are properly labeled below but typically also include a hard-coded default. Initial Profile Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.pro‐ files.d/gce.conf: my-gce-profile: image: centos-6 size: n1-standard-1 location: europe-west1-b network: default tags: '["one", "two", "three"]' metadata: '{"one": "1", "2": "two"}' use_persistent_disk: True delete_boot_pd: False ssh_interface: public_ips external_ip: "ephemeral" image Image is used to define what Operating System image should be used to for the instance. Examples are Debian 7 (wheezy) and CentOS 6. Required. size A 'size', in GCE terms, refers to the instance's 'machine type'. See the on-line documen‐ tation for a complete list of GCE machine types. Required. location A 'location', in GCE terms, refers to the instance's 'zone'. GCE has the notion of both Regions (e.g. us-central1, europe-west1, etc) and Zones (e.g. us-central1-a, us-cen‐ tral1-b, etc). Required. network Use this setting to define the network resource for the instance. All GCE projects con‐ tain a network named 'default' but it's possible to use this setting to create instances belonging to a different network resource. tags GCE supports instance/network tags and this setting allows you to set custom tags. It should be a list of strings and must be parse-able by the python ast.literal_eval() func‐ tion to convert it to a python list. metadata GCE supports instance metadata and this setting allows you to set custom metadata. It should be a hash of key/value strings and parse-able by the python ast.literal_eval() function to convert it to a python dictionary. use_persistent_disk Use this setting to ensure that when new instances are created, they will use a persistent disk to preserve data between instance terminations and re-creations. delete_boot_pd In the event that you wish the boot persistent disk to be permanently deleted when you destroy an instance, set delete_boot_pd to True. ssh_interface New in version 2015.5.0. Specify whether to use public or private IP for deploy script. Valid options are: · private_ips: The salt-master is also hosted with GCE · public_ips: The salt-master is hosted outside of GCE external_ip Per instance setting: Used a named fixed IP address to this host. Valid options are: · ephemeral: The host will use a GCE ephemeral IP · None: No external IP will be configured on this host. Optionally, pass the name of a GCE address to use a fixed IP address. If the address does not already exist, it will be created. ex_disk_type GCE supports two different disk types, pd-standard and pd-ssd. The default disk type set‐ ting is pd-standard. To specify using an SSD disk, set pd-ssd as the value. New in version 2014.7.0. ip_forwarding GCE instances can be enabled to use IP Forwarding. When set to True, this options allows the instance to send/receive non-matching src/dst packets. Default is False. New in version 2015.8.1. Profile with scopes Scopes can be specified by setting the optional ex_service_accounts key in your cloud pro‐ file. The following example enables the bigquery scope. 
my-gce-profile: image: centos-6 ssh_username: salt size: f1-micro location: us-central1-a network: default tags: '["one", "two", "three"]' metadata: '{"one": "1", "2": "two", "sshKeys": ""}' use_persistent_disk: True delete_boot_pd: False deploy: False make_master: False provider: gce-config ex_service_accounts: - scopes: - bigquery Email can also be specified as an (optional) parameter. my-gce-profile: ...snip ex_service_accounts: - scopes: - bigquery email: default There can be multiple entries for scopes since ex-service_accounts accepts a list of dic‐ tionaries. For more information refer to the libcloud documentation on specifying service account scopes. SSH Remote Access GCE instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Append something like this to /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/*.conf: my-gce-profile: ... # SSH to GCE instances as gceuser ssh_username: gceuser # Use the local private SSH key file located here ssh_keyfile: /etc/cloud/google_compute_engine If you have not already used this SSH key to login to instances in this GCE project you will also need to add the public key to your projects metadata at https://cloud.google.com/console. You could also add it via the metadata setting too: my-gce-profile: ... metadata: '{"one": "1", "2": "two", "sshKeys": "gceuser:ssh-rsa <Your SSH Public Key> gceuser@host"}' Single instance details This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. salt-cloud -a show_instance myinstance Destroy, persistent disks, and metadata As noted in the provider configuration, it's possible to force the boot persistent disk to be deleted when you destroy the instance. The way that this has been implemented is to use the instance metadata to record the cloud profile used when creating the instance. When destroy is called, if the instance contains a salt-cloud-profile key, it's value is used to reference the matching profile to determine if delete_boot_pd is set to True. Be aware that any GCE instances created with salt cloud will contain this custom salt-cloud-profile metadata entry. List various resources It's also possible to list several GCE resources similar to what can be done with other providers. The following commands can be used to list GCE zones (locations), machine types (sizes), and images. salt-cloud --list-locations gce salt-cloud --list-sizes gce salt-cloud --list-images gce Persistent Disk The Compute Engine provider provides functions via salt-cloud to manage your Persistent Disks. You can create and destroy disks as well as attach and detach them from running instances. Create When creating a disk, you can create an empty disk and specify its size (in GB), or spec‐ ify either an 'image' or 'snapshot'. salt-cloud -f create_disk gce disk_name=pd location=us-central1-b size=200 Delete Deleting a disk only requires the name of the disk to delete salt-cloud -f delete_disk gce disk_name=old-backup Attach Attaching a disk to an existing instance is really an 'action' and requires both an instance name and disk name. It's possible to use this ation to create bootable persistent disks if necessary. 
Compute Engine also supports attaching a persistent disk in READ_ONLY mode to multiple instances at the same time (but then cannot be attached in READ_WRITE to any instance). salt-cloud -a attach_disk myinstance disk_name=pd mode=READ_WRITE boot=yes Detach Detaching a disk is also an action against an instance and only requires the name of the disk. Note that this does not safely sync and umount the disk from the instance. To ensure no data loss, you must first make sure the disk is unmounted from the instance. salt-cloud -a detach_disk myinstance disk_name=pd Show disk It's also possible to look up the details for an existing disk with either a function or an action. salt-cloud -a show_disk myinstance disk_name=pd salt-cloud -f show_disk gce disk_name=pd Create snapshot You can take a snapshot of an existing disk's content. The snapshot can then in turn be used to create other persistent disks. Note that to prevent data corruption, it is strongly suggested that you unmount the disk prior to taking a snapshot. You must name the snapshot and provide the name of the disk. salt-cloud -f create_snapshot gce name=backup-20140226 disk_name=pd Delete snapshot You can delete a snapshot when it's no longer needed by specifying the name of the snap‐ shot. salt-cloud -f delete_snapshot gce name=backup-20140226 Show snapshot Use this function to look up information about the snapshot. salt-cloud -f show_snapshot gce name=backup-20140226 Networking Compute Engine supports multiple private networks per project. Instances within a private network can easily communicate with each other by an internal DNS service that resolves instance names. Instances within a private network can also communicate with either directly without needing special routing or firewall rules even if they span different regions/zones. Networks also support custom firewall rules. By default, traffic between instances on the same private network is open to all ports and protocols. Inbound SSH traffic (port 22) is also allowed but all other inbound traffic is blocked. Create network New networks require a name and CIDR range. New instances can be created and added to this network by setting the network name during create. It is not possible to add/remove exist‐ ing instances to a network. salt-cloud -f create_network gce name=mynet cidr=10.10.10.0/24 Destroy network Destroy a network by specifying the name. Make sure that there are no instances associated with the network prior to deleting it or you'll have a bad day. salt-cloud -f delete_network gce name=mynet Show network Specify the network name to view information about the network. salt-cloud -f show_network gce name=mynet Create address Create a new named static IP address in a region. salt-cloud -f create_address gce name=my-fixed-ip region=us-central1 Delete address Delete an existing named fixed IP address. salt-cloud -f delete_address gce name=my-fixed-ip region=us-central1 Show address View details on a named address. salt-cloud -f show_address gce name=my-fixed-ip region=us-central1 Create firewall You'll need to create custom firewall rules if you want to allow other traffic than what is described above. For instance, if you run a web service on your instances, you'll need to explicitly allow HTTP and/or SSL traffic. The firewall rule must have a name and it will use the 'default' network unless otherwise specified with a 'network' attribute. 
Firewalls also support instance tags for source/destination salt-cloud -f create_fwrule gce name=web allow=tcp:80,tcp:443,icmp Delete firewall Deleting a firewall rule will prevent any previously allowed traffic for the named fire‐ wall rule. salt-cloud -f delete_fwrule gce name=web Show firewall Use this function to review an existing firewall rule's information. salt-cloud -f show_fwrule gce name=web Load Balancer Compute Engine possess a load-balancer feature for splitting traffic across multiple instances. Please reference the documentation for a more complete discription. The load-balancer functionality is slightly different than that described in Google's doc‐ umentation. The concept of TargetPool and ForwardingRule are consolidated in salt-cloud/libcloud. HTTP Health Checks are optional. HTTP Health Check HTTP Health Checks can be used as a means to toggle load-balancing across instance mem‐ bers, or to detect if an HTTP site is functioning. A common use-case is to set up a health check URL and if you want to toggle traffic on/off to an instance, you can tempo‐ rarily have it return a non-200 response. A non-200 response to the load-balancer's health check will keep the LB from sending any new traffic to the "down" instance. Once the instance's health check URL beings returning 200-responses, the LB will again start to send traffic to it. Review Compute Engine's documentation for allowable parameters. You can use the following salt-cloud functions to manage your HTTP health checks. salt-cloud -f create_hc gce name=myhc path=/ port=80 salt-cloud -f delete_hc gce name=myhc salt-cloud -f show_hc gce name=myhc Load-balancer When creating a new load-balancer, it requires a name, region, port range, and list of members. There are other optional parameters for protocol, and list of health checks. Deleting or showing details about the LB only requires the name. salt-cloud -f create_lb gce name=lb region=... ports=80 members=w1,w2,w3 salt-cloud -f delete_lb gce name=lb salt-cloud -f show_lb gce name=lb You can also create a load balancer using a named fixed IP addressby specifying the name of the address. If the address does not exist yet it will be created. salt-cloud -f create_lb gce name=my-lb region=us-central1 ports=234 members=s1,s2,s3 addre ↲ ss=my-lb-ip Attach and Detach LB It is possible to attach or detach an instance from an existing load-balancer. Both the instance and load-balancer must exist before using these functions. salt-cloud -f attach_lb gce name=lb member=w4 salt-cloud -f detach_lb gce name=lb member=oops Getting Started With HP Cloud HP Cloud is a major public cloud platform and uses the libcloud openstack driver. The cur‐ rent version of OpenStack that HP Cloud uses is Havana. When an instance is booted, it must have a floating IP added to it in order to connect to it and further below you will see an example that adds context to this statement. 
Set up a cloud provider configuration file To use the openstack driver for HP Cloud, set up the cloud provider configuration file as in the example shown below: /etc/salt/cloud.providers.d/hpcloud.conf: hpcloud-config: # Set the location of the salt-master # minion: master: saltmaster.example.com # Configure HP Cloud using the OpenStack plugin # identity_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/tokens compute_name: Compute protocol: ipv4 # Set the compute region: # compute_region: region-b.geo-1 # Configure HP Cloud authentication credentials # user: myname tenant: myname-project1 password: xxxxxxxxx # keys to allow connection to the instance launched # ssh_key_name: yourkey ssh_key_file: /path/to/key/yourkey.priv driver: openstack The subsequent example that follows is using the openstack driver. NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Compute Region Originally, HP Cloud, in its OpenStack Essex version (1.0), had 3 availability zones in one region, US West (region-a.geo-1), which each behaved each as a region. This has since changed, and the current OpenStack Havana version of HP Cloud (1.1) now has simplified this and now has two regions to choose from: region-a.geo-1 -> US West region-b.geo-1 -> US East Authentication The user is the same user as is used to log into the HP Cloud management UI. The tenant can be found in the upper left under "Project/Region/Scope". It is often named the same as user albeit with a -project1 appended. The password is of course what you created your account with. The management UI also has other information such as being able to select US East or US West. Set up a cloud profile config file The profile shown below is a know working profile for an Ubuntu instance. The profile con‐ figuration file is stored in the following location: /etc/salt/cloud.profiles.d/hp_ae1_ubuntu.conf: hp_ae1_ubuntu: provider: hp_ae1 image: 9302692b-b787-4b52-a3a6-daebb79cb498 ignore_cidr: 10.0.0.1/24 networks: - floating: Ext-Net size: standard.small ssh_key_file: /root/keys/test.key ssh_key_name: test ssh_username: ubuntu Some important things about the example above: · The image parameter can use either the image name or image ID which you can obtain by running in the example below (this case US East): # salt-cloud --list-images hp_ae1 · The parameter ignore_cidr specifies a range of addresses to ignore when trying to con‐ nect to the instance. In this case, it's the range of IP addresses used for an private IP of the instance. · The parameter networks is very important to include. In previous versions of Salt Cloud, this is what made it possible for salt-cloud to be able to attach a floating IP to the instance in order to connect to the instance and set up the minion. The current version of salt-cloud doesn't require it, though having it is of no harm either. Newer versions of salt-cloud will use this, and without it, will attempt to find a list of floating IP addresses to use regardless. 
· The ssh_key_file and ssh_key_name are the keys that will make it possible to connect to the instance to set up the minion
· The ssh_username parameter, in this case, since the image used is Ubuntu, makes it possible not only to log in but also to install the minion
Launch an instance
To instantiate a machine based on this profile (example):
    # salt-cloud -p hp_ae1_ubuntu ubuntu_instance_1
After several minutes, this will create an instance named ubuntu_instance_1 running in HP Cloud in the US East region, set up the minion, and then return information about the instance once completed.
Manage the instance
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
    # salt ubuntu_instance_1 test.ping
SSH to the instance
Additionally, the instance can be accessed via SSH using the floating IP assigned to it:
    # ssh ubuntu@<floating ip>
Using a private IP
Alternatively, in the cloud profile, using the private IP to log into the instance to set up the minion is another option, particularly if salt-cloud is running within the cloud on an instance that is on the same network as all the other instances (minions). The example below is a modified version of the previous example. Note the use of ssh_interface:
    hp_ae1_ubuntu:
      provider: hp_ae1
      image: 9302692b-b787-4b52-a3a6-daebb79cb498
      size: standard.small
      ssh_key_file: /root/keys/test.key
      ssh_key_name: test
      ssh_username: ubuntu
      ssh_interface: private_ips
With this setup, salt-cloud will use the private IP address to ssh into the instance and set up the salt-minion.
Getting Started With Joyent
Joyent is a public cloud host that supports SmartOS, Linux, FreeBSD, and Windows.
Dependencies
This driver requires the Python requests library to be installed.
Configuration
The Joyent cloud requires three configuration parameters: the user name and password that are used to log into the Joyent system, and the location of the private ssh key associated with the Joyent account. The ssh key is needed to send the provisioning commands up to the freshly created virtual machine.
    # Note: This example is for /etc/salt/cloud.providers or any file in the
    # /etc/salt/cloud.providers.d/ directory.
    my-joyent-config:
      driver: joyent
      user: fred
      password: saltybacon
      private_key: /root/mykey.pem
      keyname: mykey
NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud module that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define.
Profiles
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:
    joyent_512:
      provider: my-joyent-config
      size: Extra Small 512 MB
      image: Arch Linux 2013.06
Sizes can be obtained using the --list-sizes option for the salt-cloud command:
    # salt-cloud --list-sizes my-joyent-config
    my-joyent-config:
        ----------
        joyent:
            ----------
            Extra Small 512 MB:
                ----------
                default: false
                disk: 15360
                id: Extra Small 512 MB
                memory: 512
                name: Extra Small 512 MB
                swap: 1024
                vcpus: 1
    ...SNIP...
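With the provider and the joyent_512 profile above in place, an instance can be created and verified following the same pattern used for the other drivers in this document; the instance name below is only an example:
    # salt-cloud -p joyent_512 myinstance
    # salt myinstance test.ping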
Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-joyent-config my-joyent-config: ---------- joyent: ---------- base: ---------- description: A 32-bit SmartOS image with just essential packages installed. Ideal for users who are comfortable with setting up their own environment and tools. disabled: False files: ---------- - compression: bzip2 - sha1: 40cdc6457c237cf6306103c74b5f45f5bf2d9bbe - size: 82492182 name: base os: smartos owner: 352971aa-31ba-496c-9ade-a379feaecd52 public: True ...SNIP... SmartDataCenter This driver can also be used with the Joyent SmartDataCenter project. More details can be found at: Using SDC requires that an api_host_suffix is set. The default value for this is .api.joyentcloud.com. All characters, including the leading ., should be included: api_host_suffix: .api.myhostname.com Miscellaneous Configuration The following configuration items can be set in either provider or profile confuration files. use_ssl When set to True (the default), attach https:// to any URL that does not already have http:// or https:// included at the beginning. The best practice is to leave the protocol out of the URL, and use this setting to manage it. verify_ssl When set to True (the default), the underlying web library will verify the SSL certifi‐ cate. This should only be set to False for debugging.` Getting Started With LXC The LXC module is designed to install Salt in an LXC container on a controlled and possi‐ bly remote minion. In other words, Salt will connect to a minion, then from that minion: · Provision and configure a container for networking access · Use those modules to deploy salt and re-attach to master. · lxc runner · lxc module · seed Limitations · You can only act on one minion and one provider at a time. · Listing images must be targeted to a particular LXC provider (nothing will be outputted with all) Operation Salt's LXC support does use lxc.init via the lxc.cloud_init_interface and seeds the minion via seed.mkconfig. You can provide to those lxc VMs a profile and a network profile like if you were directly using the minion module. Order of operation: · Create the LXC container on the desired minion (clone or template) · Change LXC config options (if any need to be changed) · Start container · Change base passwords if any · Change base DNS configuration if necessary · Wait for LXC container to be up and ready for ssh · Test SSH connection and bailout in error · Upload deploy script and seeds, then re-attach the minion. Provider configuration Here is a simple provider configuration: # Note: This example goes in /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. devhost10-lxc: target: devhost10 driver: lxc NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profile configuration Please read tutorial-lxc before anything else. And specially tutorial-lxc-profiles. 
Here are the options to configure your containers: target Host minion id to install the lxc Container into lxc_profile Name of the profile or inline options for the LXC vm creation/cloning, please see tutorial-lxc-profiles-container. network_profile Name of the profile or inline options for the LXC vm network settings, please see tutorial-lxc-profiles-network. nic_opts Totally optional. Per interface new-style configuration options mappings which will override any profile default option: eth0: {'mac': '00:16:3e:01:29:40', 'gateway': None, (default) 'link': 'br0', (default) 'gateway': None, (default) 'netmask': '', (default) 'ip': '22.1.4.25'}} password password for root and sysadmin users dnsservers List of DNS servers to use. This is optional. minion minion configuration (see Minion Configuration in Salt Cloud) bootstrap_delay specify the time to wait (in seconds) between container creation and salt boot‐ strap execution. It is useful to ensure that all essential services have started before the bootstrap script is executed. By default there's no wait time between container creation and bootstrap unless you are on systemd where we wait that the system is no more in starting state. bootstrap_shell shell for bootstraping script (default: /bin/sh) script defaults to salt-boostrap script_args arguments which are given to the bootstrap script. the {0} placeholder will be replaced by the path which contains the minion config and key files, eg: script_args="-c {0}" Using profiles: # Note: This example would go in /etc/salt/cloud.profiles or any file in the # /etc/salt/cloud.profiles.d/ directory. devhost10-lxc: provider: devhost10-lxc lxc_profile: foo network_profile: bar minion: master: 10.5.0.1 master_port: 4506 Using inline profiles (eg to override the network bridge): devhost11-lxc: provider: devhost10-lxc lxc_profile: clone_from: foo network_profile: etho: link: lxcbr0 minion: master: 10.5.0.1 master_port: 4506 Using a lxc template instead of a clone: devhost11-lxc: provider: devhost10-lxc lxc_profile: template: ubuntu # options: # release: trusty network_profile: etho: link: lxcbr0 minion: master: 10.5.0.1 master_port: 4506 Static ip: # Note: This example would go in /etc/salt/cloud.profiles or any file in the # /etc/salt/cloud.profiles.d/ directory. devhost10-lxc: provider: devhost10-lxc nic_opts: eth0: ipv4: 10.0.3.9 minion: master: 10.5.0.1 master_port: 4506 DHCP: # Note: This example would go in /etc/salt/cloud.profiles or any file in the # /etc/salt/cloud.profiles.d/ directory. devhost10-lxc: provider: devhost10-lxc minion: master: 10.5.0.1 master_port: 4506 Driver Support · Container creation · Image listing (LXC templates) · Running container information (IP addresses, etc.) Getting Started With Linode Linode is a public cloud host with a focus on Linux instances. Starting with the 2015.8.0 release of Salt, the Linode driver uses Linode's native REST API. There are no external dependencies required to use the Linode driver. Configuration Linode requires a single API key, but the default root password for new instances also needs to be set: # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. 
my-linode-config: apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf password: F00barbaz ssh_pubkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKHEOLLbeXgaqRQT9NBAopVz366SdYc0KKX33vAn ↲ q+2R user@host ssh_key_file: ~/.ssh/id_ed25519 driver: linode The password needs to be 8 characters and contain lowercase, uppercase, and numbers. NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profiles Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.pro‐ files.d/ directory: linode_1024: provider: my-linode-config size: Linode 2048 image: CentOS 7 location: London, England, UK Sizes can be obtained using the --list-sizes option for the salt-cloud command: # salt-cloud --list-sizes my-linode-config my-linode-config: ---------- linode: ---------- Linode 1024: ---------- bandwidth: 2000 disk: 49152 driver: get_uuid: id: 1 name: Linode 1024 price: 20.0 ram: 1024 uuid: 03e18728ce4629e2ac07c9cbb48afffb8cb499c4 ...SNIP... Images can be obtained using the --list-images option for the salt-cloud command: # salt-cloud --list-images my-linode-config my-linode-config: ---------- linode: ---------- Arch Linux 2013.06: ---------- driver: extra: ---------- 64bit: 1 pvops: 1 get_uuid: id: 112 name: Arch Linux 2013.06 uuid: 8457f92eaffc92b7666b6734a96ad7abe1a8a6dd ...SNIP... Locations can be obtained using the --list-locations option for the salt-cloud command: # salt-cloud --list-locations my-linode-config my-linode-config: ---------- linode: ---------- Atlanta, GA, USA: ---------- abbreviation: atlanta id: 4 Dallas, TX, USA: ---------- abbreviation: dallas id: 2 ...SNIP... Cloning When salt-cloud accesses Linode via linode-python it can clone machines. It is safest to clone a stopped machine. To stop a machine run salt-cloud -a stop machine_to_clone To create a new machine based on another machine, add an entry to your linode cloud pro‐ file that looks like this: li-clone: provider: my-linode-config clonefrom: machine_to_clone script_args: -C -F Then run salt-cloud as normal, specifying -p li-clone. The profile name can be anything; It doesn't have to be li-clone. clonefrom: is the name of an existing machine in Linode from which to clone. Script_args: -C -F is necessary to avoid re-deploying Salt via salt-bootstrap. -C will just re-deploy keys so the new minion will not have a duplicate key or minion_id on the Master, and -F will force a rewrite of the Minion config file on the new Minion. If -F isn't provided, the new Minion will have the machine_to_clone's Minion ID, instead of its own Minion ID, which can cause problems. NOTE: Pull Request #733 to the salt-bootstrap repo makes the -F argument non-necessary. Once that change is released into a stable version of the Bootstrap Script, the -C argument will be sufficient for the script_args setting. If the machine_to_clone does not have Salt installed on it, refrain from using the script_args: -C -F altogether, because the new machine will need to have Salt installed. Getting Started With OpenStack OpenStack is one the most popular cloud projects. 
It's an open source project to build public and/or private clouds. You can use Salt Cloud to launch OpenStack instances. Dependencies · Libcloud >= 0.13.2 Configuration · Using the new format, set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/openstack.conf: my-openstack-config: # Set the location of the salt-master # minion: master: saltmaster.example.com # Configure the OpenStack driver # identity_url: http://identity.youopenstack.com/v2.0/tokens compute_name: nova protocol: ipv4 compute_region: RegionOne # Configure Openstack authentication credentials # user: myname password: 123456 # tenant is the project name tenant: myproject driver: openstack # skip SSL certificate validation (default false) insecure: false NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Using nova client to get information from OpenStack One of the best ways to get information about OpenStack is using the novaclient python package (available in pypi as python-novaclient). The client configuration is a set of environment variables that you can get from the Dashboard. Log in and then go to Project -> Access & security -> API Access and download the "OpenStack RC file". Then: source /path/to/your/rcfile nova credentials nova endpoints In the nova endpoints output you can see the information about compute_region and com‐ pute_name. Compute Region It depends on the OpenStack cluster that you are using. Please, have a look at the previ‐ ous sections. Authentication The user and password is the same user as is used to log into the OpenStack Dashboard. Profiles Here is an example of a profile: openstack_512: provider: my-openstack-config size: m1.tiny image: cirros-0.3.1-x86_64-uec ssh_key_file: /tmp/test.pem ssh_key_name: test ssh_interface: private_ips The following list explains some of the important properties. size can be one of the options listed in the output of nova flavor-list. image can be one of the options listed in the output of nova image-list. ssh_key_file The SSH private key that the salt-cloud uses to SSH into the VM after its first booted in order to execute a command or script. This private key's public key must be the openstack public key inserted into the authorized_key's file of the VM's root user account. ssh_key_name The name of the openstack SSH public key that is inserted into the authorized_keys file of the VM's root user account. Prior to using this public key, you must use openstack commands or the horizon web UI to load that key into the tenant's account. Note that this openstack tenant must be the one you defined in the cloud provider. ssh_interface This option allows you to create a VM without a public IP. If this option is omit‐ ted and the VM does not have a public IP, then the salt-cloud waits for a certain period of time and then destroys the VM. With the nova drive, private cloud net‐ works can be defined here. For more information concerning cloud profiles, see here. 
change_password If no ssh_key_file is provided, and the server already exists, change_password will use the api to change the root password of the server so that it can be bootstrapped. change_password: True userdata_file Use userdata_file to specify the userdata file to upload for use with cloud-init if avail‐ able. userdata_file: /etc/salt/cloud-init/packages.yml Getting Started With Parallels Parallels Cloud Server is a product by Parallels that delivers a cloud hosting solution. The PARALLELS module for Salt Cloud enables you to manage instances hosted using PCS. Fur‐ ther information can be found at: http://www.parallels.com/products/pcs/ · Using the old format, set up the cloud configuration at /etc/salt/cloud: # Set up the location of the salt master # minion: master: saltmaster.example.com # Set the PARALLELS access credentials (see below) # PARALLELS.user: myuser PARALLELS.password: badpass # Set the access URL for your PARALLELS host # PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/ · Using the new format, set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/parallels.conf: my-parallels-config: # Set up the location of the salt master # minion: master: saltmaster.example.com # Set the PARALLELS access credentials (see below) # user: myuser password: badpass # Set the access URL for your PARALLELS provider # url: https://api.cloud.xmission.com:4465/paci/v1.0/ driver: parallels NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Access Credentials The user, password, and url will be provided to you by your cloud host. These are all required in order for the PARALLELS driver to work. Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/paral‐ lels.conf: parallels-ubuntu: provider: my-parallels-config image: ubuntu-12.04-x86_64 The profile can be realized now with a salt command: # salt-cloud -p parallels-ubuntu myubuntu This will create an instance named myubuntu on the cloud host. The minion that is installed on this instance will have an id of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: # salt myubuntu test.ping Required Settings The following settings are always required for PARALLELS: · Using the old cloud configuration format: PARALLELS.user: myuser PARALLELS.password: badpass PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/ · Using the new cloud configuration format: my-parallels-config: user: myuser password: badpass url: https://api.cloud.xmission.com:4465/paci/v1.0/ driver: parallels Optional Settings Unlike other cloud providers in Salt Cloud, Parallels does not utilize a size setting. This is because Parallels allows the end-user to specify a more detailed configuration for their instances than is allowed by many other cloud hosts. The following options are available to be used in a profile, with their default settings listed. # Description of the instance. 
Defaults to the instance name. desc: <instance_name> # How many CPU cores, and how fast they are (in MHz) cpu_number: 1 cpu_power: 1000 # How many megabytes of RAM ram: 256 # Bandwidth available, in kbps bandwidth: 100 # How many public IPs will be assigned to this instance ip_num: 1 # Size of the instance disk (in GiB) disk_size: 10 # Username and password ssh_username: root password: <value from PARALLELS.password> # The name of the image, from ``salt-cloud --list-images parallels`` image: ubuntu-12.04-x86_64 Getting Started With Proxmox Proxmox Virtual Environment is a complete server virtualization management solution, based on KVM virtualization and OpenVZ containers. Further information can be found at: http://www.proxmox.org/ Dependencies · IPy >= 0.81 · requests >= 2.2.1 Please note: This module allows you to create both OpenVZ and KVM but installing Salt on it will only be done when the VM is an OpenVZ container rather than a KVM virtual machine. · Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/proxmox.conf: my-proxmox-config: # Set up the location of the salt master # minion: master: saltmaster.example.com # Set the PROXMOX access credentials (see below) # user: myuser@pve password: badpass # Set the access URL for your PROXMOX host # url: your.proxmox.host driver: proxmox NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Access Credentials The user, password, and url will be provided to you by your cloud host. These are all required in order for the PROXMOX driver to work. Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/prox‐ mox.conf: · Configure a profile to be used: proxmox-ubuntu: provider: my-proxmox-config image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz technology: openvz # host needs to be set to the configured name of the proxmox host # and not the ip address or FQDN of the server host: myvmhost ip_address: 192.168.100.155 password: topsecret The profile can be realized now with a salt command: # salt-cloud -p proxmox-ubuntu myubuntu This will create an instance named myubuntu on the cloud host. The minion that is installed on this instance will have a hostname of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: # salt myubuntu test.ping Required Settings The following settings are always required for PROXMOX: · Using the new cloud configuration format: my-proxmox-config: driver: proxmox user: saltcloud@pve password: xyzzy url: your.proxmox.host Optional Settings Unlike other cloud providers in Salt Cloud, Proxmox does not utilize a size setting. This is because Proxmox allows the end-user to specify a more detailed configuration for their instances, than is allowed by many other cloud providers. The following options are avail‐ able to be used in a profile, with their default settings listed. # Description of the instance. 
desc: <instance_name> # How many CPU cores, and how fast they are (in MHz) cpus: 1 cpuunits: 1000 # How many megabytes of RAM memory: 256 # How much swap space in MB swap: 256 # Whether to auto boot the vm after the host reboots onboot: 1 # Size of the instance disk (in GiB) disk: 10 # Host to create this vm on host: myvmhost # Nameservers. Defaults to host nameserver: 8.8.8.8 8.8.4.4 # Username and password ssh_username: root password: <value from PROXMOX.password> # The name of the image, from ``salt-cloud --list-images proxmox`` image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz Getting Started With Rackspace Rackspace is a major public cloud platform which may be configured using either the rackspace or the openstack driver, depending on your needs. Please note that the rackspace driver is intended only for 1st gen instances, aka, "the old cloud" at Rackspace. It is required for 1st gen instances, but will not work with OpenStack-based instances. Unless you explicitly have a reason to use it, it is highly recommended that you use the openstack driver instead. Dependencies · Libcloud >= 0.13.2 Configuration To use the openstack driver (recommended), set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/rackspace.conf: my-rackspace-config: # Set the location of the salt-master # minion: master: saltmaster.example.com # Configure Rackspace using the OpenStack plugin # identity_url: 'https://identity.api.rackspacecloud.com/v2.0/tokens' compute_name: cloudServersOpenStack protocol: ipv4 # Set the compute region: # compute_region: DFW # Configure Rackspace authentication credentials # user: myname tenant: 123456 apikey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx driver: openstack To use the rackspace driver, set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/rackspace.conf: my-rackspace-config: driver: rackspace # The Rackspace login user user: fred # The Rackspace user's apikey apikey: 901d3f579h23c8v73q9 The settings that follow are for using Rackspace with the openstack driver, and will not work with the rackspace driver. NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Compute Region Rackspace currently has six compute regions which may be used: DFW -> Dallas/Forth Worth ORD -> Chicago SYD -> Sydney LON -> London IAD -> Northern Virginia HKG -> Hong Kong Note: Currently the LON region is only available with a UK account, and UK accounts cannot access other regions Authentication The user is the same user as is used to log into the Rackspace Control Panel. The tenant and apikey can be found in the API Keys area of the Control Panel. The apikey will be labeled as API Key (and may need to be generated), and tenant will be labeled as Cloud Account Number. 
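Once the provider is configured, a quick way to confirm that the credentials are working is to query the available sizes and images, as is done for the other drivers in this document:
    # salt-cloud --list-sizes my-rackspace-config
    # salt-cloud --list-images my-rackspace-config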
An initial profile can be configured in /etc/salt/cloud.profiles or /etc/salt/cloud.pro‐ files.d/rackspace.conf: openstack_512: provider: my-rackspace-config size: 512 MB Standard image: Ubuntu 12.04 LTS (Precise Pangolin) To instantiate a machine based on this profile: # salt-cloud -p openstack_512 myinstance This will create a virtual machine at Rackspace with the name myinstance. This operation may take several minutes to complete, depending on the current load at the Rackspace data center. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt: # salt myinstance test.ping RackConnect Environments Rackspace offers a hybrid hosting configuration option called RackConnect that allows you to use a physical firewall appliance with your cloud servers. When this service is in use the public_ip assigned by nova will be replaced by a NAT ip on the firewall. For salt-cloud to work properly it must use the newly assigned "access ip" instead of the Nova assigned public ip. You can enable that capability by adding this to your profiles: openstack_512: provider: my-openstack-config size: 512 MB Standard image: Ubuntu 12.04 LTS (Precise Pangolin) rackconnect: True Managed Cloud Environments Rackspace offers a managed service level of hosting. As part of the managed service level you have the ability to choose from base of lamp installations on cloud server images. The post build process for both the base and the lamp installations used Chef to install things such as the cloud monitoring agent and the cloud backup agent. It also takes care of installing the lamp stack if selected. In order to prevent the post installation process from stomping over the bootstrapping you can add the below to your profiles. openstack_512: provider: my-rackspace-config size: 512 MB Standard image: Ubuntu 12.04 LTS (Precise Pangolin) managedcloud: True First and Next Generation Images Rackspace provides two sets of virtual machine images, first, and next generation. As of 0.8.9 salt-cloud will default to using the next generation images. To force the use of first generation images, on the profile configuration please add: FreeBSD-9.0-512: provider: my-rackspace-config size: 512 MB Standard image: FreeBSD 9.0 force_first_gen: True Private Subnets By default salt-cloud will not add Rackspace private networks to new servers. To enable a private network to a server instantiated by salt cloud, add the following section to the provider file (typically /etc/salt/cloud.providers.d/rackspace.conf) networks: - fixed: # This is the private network - private-network-id # This is Rackspace's "PublicNet" - 00000000-0000-0000-0000-000000000000 # This is Rackspace's "ServiceNet" - 11111111-1111-1111-1111-111111111111 To get the Rackspace private network ID, go to Networking, Networks and hover over the private network name. The order of the networks in the above code block does not map to the order of the ether‐ net devices on newly created servers. Public IP will always be first ( eth0 ) followed by servicenet ( eth1 ) and then private networks. Enabling the private network per above gives the option of using the private subnet for all master-minion communication, including the bootstrap install of salt-minion. To enable the minion to use the private subnet, update the master: line in the minion: sec‐ tion of the providers file. 
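For example, in the provider configuration (a minimal sketch; the address shown is only a placeholder for the salt master's private subnet IP):

    my-rackspace-config:
      minion:
        # Private (ServiceNet) address of the salt master
        master: 192.168.100.4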
To configure the master to only listen on the private subnet IP, update the interface: line in the /etc/salt/master file to be the private subnet IP of the salt master. Getting Started With Saltify The Saltify driver is a new, experimental driver for installing Salt on existing machines (virtual or bare metal). Dependencies The Saltify driver has no external dependencies. Configuration Because the Saltify driver does not use an actual cloud provider host, it has a simple provider configuration. The only thing that is required to be set is the driver name, and any other potentially useful information, like the location of the salt-master: # Note: This example is for /etc/salt/cloud.providers file or any file in # the /etc/salt/cloud.providers.d/ directory. my-saltify-config: minion: master: 111.222.333.444 provider: saltify Profiles Saltify requires a profile to be configured for each machine that needs Salt installed. The initial profile can be set up at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory. Each profile requires both an ssh_host and an ssh_username key parameter as well as either an key_filename or a password. Profile configuration example: # /etc/salt/cloud.profiles.d/saltify.conf salt-this-machine: ssh_host: 12.34.56.78 ssh_username: root key_filename: '/etc/salt/mysshkey.pem' provider: my-saltify-config The machine can now be "Salted" with the following command: salt-cloud -p salt-this-machine my-machine This will install salt on the machine specified by the cloud profile, salt-this-machine, and will give the machine the minion id of my-machine. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Once a salt-minion has been successfully installed on the instance, connectivity to it can be verified with Salt: salt my-machine test.ping Using Map Files The settings explained in the section above may also be set in a map file. An example of how to use the Saltify driver with a map file follows: # /etc/salt/saltify-map make_salty: - my-instance-0: ssh_host: 12.34.56.78 ssh_username: root password: very-bad-password - my-instance-1: ssh_host: 44.33.22.11 ssh_username: root password: another-bad-pass Note: When using a cloud map with the Saltify driver, the name of the profile to use, in this case make_salty, must be defined in a profile config. For example: # /etc/salt/cloud.profiles.d/saltify.conf make_salty: provider: my-saltify-config The machines listed in the map file can now be "Salted" by applying the following salt map command: salt-cloud -m /etc/salt/saltify-map This command will install salt on the machines specified in the map and will give each machine their minion id of my-instance-0 and my-instance-1, respectively. If the command was executed on the salt-master, its Salt key will automatically be signed on the master. Connectivity to the new "Salted" instances can now be verified with Salt: salt 'my-instance-*' test.ping Getting Started With Scaleway Scaleway is the first IaaS host worldwide to offer an ARM based cloud. It’s the ideal platform for horizontal scaling with BareMetal SSD servers. The solution provides on demand resources: it comes with on-demand SSD storage, movable IPs , images, security group and an Object Storage solution. https://scaleway.com Configuration Using Salt for Scaleway, requires an access key and an API token. API tokens are unique identifiers associated with your Scaleway account. 
To retrieve your access key and API token, log-in to the Scaleway control panel, open the pull-down menu on your account name and click on "My Credentials" link. If you do not have API token you can create one by clicking the "Create New Token" button on the right corner. # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. my-scaleway-config: access_key: 15cf404d-4560-41b1-9a0c-21c3d5c4ff1f token: a7347ec8-5de1-4024-a5e3-24b77d1ba91d driver: scaleway NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profiles Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.pro‐ files.d/ directory: scalewa-ubuntu: provider: my-scaleway-config image: Ubuntu Trusty (14.04 LTS) Images can be obtained using the --list-images option for the salt-cloud command: #salt-cloud --list-images my-scaleway-config my-scaleway-config: ---------- scaleway: ---------- 069fd876-eb04-44ab-a9cd-47e2fa3e5309: ---------- arch: arm creation_date: 2015-03-12T09:35:45.764477+00:00 default_bootscript: {u'kernel': {u'dtb': u'', u'title': u'Pimouss 3.2.34-30-std', u'id': u'cfda4 ↲ 308-cd6f-4e51-9744-905fc0da370f', u'path': u'kernel/pimouss-uImage-3.2.34-30-std'}, u'title': u'3.2.34-std #30 (stable)', u'id': u'c5af0215-2516-4316-befc-5da1cfad609c', u'initrd': {u'path': u'initrd/c1-uInitrd', u'id': u'1be14b1b-e24c-48e5-b0b6-7ba452e42b92', u'title': u'C1 initrd'}, u'bootcmdargs': {u'id': u'd22c4dde-e5a4-47ad-abb9-d23b54d542ff', u'value': u'ip=dhcp boot=local root=/dev/nbd0 USE_XNBD=1 nbd.max_parts=8'}, u'organization': u'11111111-1111-4111-8111-111111111111', u'public': True} extra_volumes: [] id: 069fd876-eb04-44ab-a9cd-47e2fa3e5309 modification_date: 2015-04-24T12:02:16.820256+00:00 name: Ubuntu Vivid (15.04) organization: a283af0b-d13e-42e1-a43f-855ffbf281ab public: True root_volume: {u'name': u'distrib-ubuntu-vivid-2015-03-12_10:32-snapshot', u'id': u'a6d02e ↲ 63-8dee-4bce-b627-b21730f35a05', u'volume_type': u'l_ssd', u'size': 50000000000L} ... Execute a query and return all information about the nodes running on configured cloud providers using the -Q option for the salt-cloud command: # salt-cloud -F [INFO ] salt-cloud starting [INFO ] Starting new HTTPS connection (1): api.scaleway.com my-scaleway-config: ---------- scaleway: ---------- salt-manager: ---------- creation_date: 2015-06-03T08:17:38.818068+00:00 hostname: salt-manager ... NOTE: Additional documentation about Scaleway can be found at https://www.scaleway.com/docs. Getting Started With SoftLayer SoftLayer is a public cloud host, and baremetal hardware hosting service. 
Dependencies The SoftLayer driver for Salt Cloud requires the softlayer package, which is available at PyPI: https://pypi.python.org/pypi/SoftLayer This package can be installed using pip or easy_install: # pip install softlayer # easy_install softlayer Configuration Set up the cloud config at /etc/salt/cloud.providers: # Note: These examples are for /etc/salt/cloud.providers my-softlayer: # Set up the location of the salt master minion: master: saltmaster.example.com # Set the SoftLayer access credentials (see below) user: MYUSER1138 apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9' driver: softlayer my-softlayer-hw: # Set up the location of the salt master minion: master: saltmaster.example.com # Set the SoftLayer access credentials (see below) user: MYUSER1138 apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9' driver: softlayer_hw NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Access Credentials The user setting is the same user as is used to log into the SoftLayer Administration area. The apikey setting is found inside the Admin area after logging in: · Hover over the Account menu item. · Click the Users link. · Find the API Key column and click View. Profiles Cloud Profiles Set up an initial profile at /etc/salt/cloud.profiles: base_softlayer_ubuntu: provider: my-softlayer image: UBUNTU_LATEST cpu_number: 1 ram: 1024 disk_size: 100 local_disk: True hourly_billing: True domain: example.com location: sjc01 # Optional max_net_speed: 1000 private_vlan: 396 private_network: True private_ssh: True # May be used _instead_of_ image global_identifier: 320d8be5-46c0-dead-cafe-13e3c51 Most of the above items are required; optional items are specified below. image Images to build an instance can be found using the --list-images option: # salt-cloud --list-images my-softlayer The setting used will be labeled as template. cpu_number This is the number of CPU cores that will be used for this instance. This number may be dependent upon the image that is used. For instance: Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (1 - 4 Core): ---------- name: Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (1 - 4 Core) template: REDHAT_6_64 Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (5 - 100 Core): ---------- name: Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (5 - 100 Core) template: REDHAT_6_64 Note that the template (meaning, the image option) for both of these is the same, but the names suggests how many CPU cores are supported. ram This is the amount of memory, in megabytes, that will be allocated to this instance. disk_size The amount of disk space that will be allocated to this image, in gigabytes. base_softlayer_ubuntu: disk_size: 100 Using Multiple Disks New in version 2015.8.1. SoftLayer allows up to 5 disks to be specified for a virtual machine upon creation. Multi‐ ple disks can be specified either as a list or a comma-delimited string. The first disk_size specified in the string or list will be the first disk size assigned to the VM. 
List Example:

    base_softlayer_ubuntu:
      disk_size: ['100', '20', '20']

String Example:

    base_softlayer_ubuntu:
      disk_size: '100, 20, 20'

local_disk
    When true, the disks for the computing instance will be provisioned on the host on which it runs; otherwise SAN disks will be provisioned.

hourly_billing
    When true, the computing instance will be billed on hourly usage; otherwise it will be billed on a monthly basis.

domain
    The domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The domain setting will be used in conjunction with the instance name to form the FQDN.

location
    Locations available for an instance can be found using the --list-locations option:

    # salt-cloud --list-locations my-softlayer

max_net_speed
    Specifies the connection speed for the instance's network components. This setting is optional. By default, this is set to 10.

post_uri
    Specifies the URI of the script to be downloaded and run after the instance is provisioned.

    New in version 2015.8.1.

    Example:

    base_softlayer_ubuntu:
      post_uri: 'https://SOMESERVERIP:8000/myscript.sh'

public_vlan
    If it is necessary for an instance to be created within a specific frontend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration. This ID can be queried using the list_vlans function, as described below. This setting is optional.

private_vlan
    If it is necessary for an instance to be created within a specific backend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration. This ID can be queried using the list_vlans function, as described below. This setting is optional.

private_network
    If a server is to be used only internally, meaning it does not have a public VLAN associated with it, this value would be set to True. This setting is optional. The default is False.

private_ssh
    Whether to run the deploy script on the server using the public IP address or the private IP address. If set to True, Salt Cloud will attempt to SSH into the new server using the private IP address. The default is False. This setting is optional.

global_identifier
    When creating an instance using a custom template, this option is set to the corresponding value obtained using the list_custom_images function. This option will not be used if an image is set, and if an image is not set, it is required.

The profile can be realized now with a salt command:

    # salt-cloud -p base_softlayer_ubuntu myserver

Using the above configuration, this will create myserver.example.com. Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:

    # salt 'myserver.example.com' test.ping

Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles:

    base_softlayer_hw_centos:
      provider: my-softlayer-hw
      # CentOS 6.0 - Minimal Install (64 bit)
      image: 13963
      # 2 x 2.0 GHz Core Bare Metal Instance - 2 GB Ram
      size: 1921
      # 500GB SATA II
      hdd: 1267
      # San Jose 01
      location: 168642
      domain: example.com
      # Optional
      vlan: 396
      port_speed: 273
      bandwidth: 248

Most of the above items are required; optional items are specified below.

image
    Images to build an instance can be found using the --list-images option:

    # salt-cloud --list-images my-softlayer-hw

    A list of ids and names will be provided. The name will describe the operating system and architecture. The id will be the setting to be used in the profile.
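The exact listing layout varies, but the output is roughly of the following shape (an illustrative sketch only, reusing the CentOS image id from the profile above; real ids and names come from your account):

    my-softlayer-hw:
        ----------
        softlayer_hw:
            ----------
            CentOS 6.0 - Minimal Install (64 bit):
                ----------
                id:
                    13963
                name:
                    CentOS 6.0 - Minimal Install (64 bit)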
size
    Sizes to build an instance can be found using the --list-sizes option:

    # salt-cloud --list-sizes my-softlayer-hw

    A list of ids and names will be provided. The name will describe the speed and quantity of CPU cores, and the amount of memory that the hardware will contain. The id will be the setting to be used in the profile.

hdd
    There is currently only one size of hard disk drive (HDD) that is available for hardware instances on SoftLayer:

    1267: 500GB SATA II

    The hdd setting in the profile should be 1267. Other sizes may be added in the future.

location
    Locations to build an instance can be found using the --list-locations option:

    # salt-cloud --list-locations my-softlayer-hw

    A list of ids and names will be provided. The name will describe the location in human terms. The id will be the setting to be used in the profile.

domain
    The domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The domain setting will be used in conjunction with the instance name to form the FQDN.

vlan
    If it is necessary for an instance to be created within a specific VLAN, the ID for that VLAN can be specified in either the provider or profile configuration. This ID can be queried using the list_vlans function, as described below.

port_speed
    Specifies the speed of the instance's network port. This setting refers to an ID within the SoftLayer API, which sets the port speed. This setting is optional. The default is 273 (100 Mbps Public & Private Networks). The following settings are available:

    · 273: 100 Mbps Public & Private Networks
    · 274: 1 Gbps Public & Private Networks
    · 21509: 10 Mbps Dual Public & Private Networks (up to 20 Mbps)
    · 21513: 100 Mbps Dual Public & Private Networks (up to 200 Mbps)
    · 2314: 1 Gbps Dual Public & Private Networks (up to 2 Gbps)
    · 272: 10 Mbps Public & Private Networks

bandwidth
    Specifies the network bandwidth available for the instance. This setting refers to an ID within the SoftLayer API, which sets the bandwidth. This setting is optional. The default is 248 (5000 GB Bandwidth). The following settings are available:

    · 248: 5000 GB Bandwidth
    · 129: 6000 GB Bandwidth
    · 130: 8000 GB Bandwidth
    · 131: 10000 GB Bandwidth
    · 36: Unlimited Bandwidth (10 Mbps Uplink)
    · 125: Unlimited Bandwidth (100 Mbps Uplink)

Actions
The following actions are currently supported by the SoftLayer Salt Cloud driver.

show_instance
    This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.

    $ salt-cloud -a show_instance myinstance

Functions
The following functions are currently supported by the SoftLayer Salt Cloud driver.

list_vlans
    This function lists all VLANs associated with the account, and all known data from the SoftLayer API concerning those VLANs.

    $ salt-cloud -f list_vlans my-softlayer
    $ salt-cloud -f list_vlans my-softlayer-hw

    The id returned in this list is necessary for the vlan option when creating an instance.

list_custom_images
    This function lists any custom templates associated with the account that can be used to create a new instance.

    $ salt-cloud -f list_custom_images my-softlayer

    The globalIdentifier returned in this list is necessary for the global_identifier option when creating an image using a custom template.
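Putting the optional hardware settings together, a profile that asks for 1 Gbps networking and 6000 GB of bandwidth could look like the following sketch (the id values are taken from the tables above; everything else mirrors the earlier example profile):

    base_softlayer_hw_centos:
      provider: my-softlayer-hw
      # CentOS 6.0 - Minimal Install (64 bit)
      image: 13963
      # 2 x 2.0 GHz Core Bare Metal Instance - 2 GB Ram
      size: 1921
      # 500GB SATA II
      hdd: 1267
      # San Jose 01
      location: 168642
      domain: example.com
      # 1 Gbps Public & Private Networks
      port_speed: 274
      # 6000 GB Bandwidth
      bandwidth: 129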
Optional Products for SoftLayer HW The softlayer_hw driver supports the ability to add optional products, which are supported by SoftLayer's API. These products each have an ID associated with them, that can be passed into Salt Cloud with the optional_products option: softlayer_hw_test: provider: my-softlayer-hw # CentOS 6.0 - Minimal Install (64 bit) image: 13963 # 2 x 2.0 GHz Core Bare Metal Instance - 2 GB Ram size: 1921 # 500GB SATA II hdd: 1267 # San Jose 01 location: 168642 domain: example.com optional_products: # MySQL for Linux - id: 28 # Business Continuance Insurance - id: 104 These values can be manually obtained by looking at the source of an order page on the SoftLayer web interface. For convenience, many of these values are listed here: Public Secondary IP Addresses · 22: 4 Public IP Addresses · 23: 8 Public IP Addresses Primary IPv6 Addresses · 17129: 1 IPv6 Address Public Static IPv6 Addresses · 1481: /64 Block Static Public IPv6 Addresses OS-Specific Addon · 17139: XenServer Advanced for XenServer 6.x · 17141: XenServer Enterprise for XenServer 6.x · 2334: XenServer Advanced for XenServer 5.6 · 2335: XenServer Enterprise for XenServer 5.6 · 13915: Microsoft WebMatrix · 21276: VMware vCenter 5.1 Standard Control Panel Software · 121: cPanel/WHM with Fantastico and RVskin · 20778: Parallels Plesk Panel 11 (Linux) 100 Domain w/ Power Pack · 20786: Parallels Plesk Panel 11 (Windows) 100 Domain w/ Power Pack · 20787: Parallels Plesk Panel 11 (Linux) Unlimited Domain w/ Power Pack · 20792: Parallels Plesk Panel 11 (Windows) Unlimited Domain w/ Power Pack · 2340: Parallels Plesk Panel 10 (Linux) 100 Domain w/ Power Pack · 2339: Parallels Plesk Panel 10 (Linux) Unlimited Domain w/ Power Pack · 13704: Parallels Plesk Panel 10 (Windows) Unlimited Domain w/ Power Pack Database Software · 29: MySQL 5.0 for Windows · 28: MySQL for Linux · 21501: Riak 1.x · 20893: MongoDB · 30: Microsoft SQL Server 2005 Express · 92: Microsoft SQL Server 2005 Workgroup · 90: Microsoft SQL Server 2005 Standard · 94: Microsoft SQL Server 2005 Enterprise · 1330: Microsoft SQL Server 2008 Express · 1340: Microsoft SQL Server 2008 Web · 1337: Microsoft SQL Server 2008 Workgroup · 1334: Microsoft SQL Server 2008 Standard · 1331: Microsoft SQL Server 2008 Enterprise · 2179: Microsoft SQL Server 2008 Express R2 · 2173: Microsoft SQL Server 2008 Web R2 · 2183: Microsoft SQL Server 2008 Workgroup R2 · 2180: Microsoft SQL Server 2008 Standard R2 · 2176: Microsoft SQL Server 2008 Enterprise R2 Anti-Virus & Spyware Protection · 594: McAfee VirusScan Anti-Virus - Windows · 414: McAfee Total Protection - Windows Insurance · 104: Business Continuance Insurance Monitoring · 55: Host Ping · 56: Host Ping and TCP Service Monitoring Notification · 57: Email and Ticket Advanced Monitoring · 2302: Monitoring Package - Basic · 2303: Monitoring Package - Advanced · 2304: Monitoring Package - Premium Application Response · 58: Automated Notification · 59: Automated Reboot from Monitoring · 60: 24x7x365 NOC Monitoring, Notification, and Response Intrusion Detection & Protection · 413: McAfee Host Intrusion Protection w/Reporting Hardware & Software Firewalls · 411: APF Software Firewall for Linux · 894: Microsoft Windows Firewall · 410: 10Mbps Hardware Firewall · 409: 100Mbps Hardware Firewall · 408: 1000Mbps Hardware Firewall Getting Started with VEXXHOST VEXXHOST is a cloud computing host which provides Canadian cloud computing services which are based in Monteral and use the libcloud OpenStack driver. 
VEXXHOST currently runs the Havana release of OpenStack. When provisioning new instances, they automatically get a public IP and private IP address. Therefore, you do not need to assign a floating IP to access your instance after it's booted. Cloud Provider Configuration To use the openstack driver for the VEXXHOST public cloud, you will need to set up the cloud provider configuration file as in the example below: /etc/salt/cloud.providers.d/vexxhost.conf: In order to use the VEXXHOST public cloud, you will need to setup a cloud provider configuration file as in the example below which uses the OpenStack driver. my-vexxhost-config: # Set the location of the salt-master # minion: master: saltmaster.example.com # Configure VEXXHOST using the OpenStack plugin # identity_url: http://auth.api.thenebulacloud.com:5000/v2.0/tokens compute_name: nova # Set the compute region: # compute_region: na-yul-nhs1 # Configure VEXXHOST authentication credentials # user: your-tenant-id password: your-api-key tenant: your-tenant-name # keys to allow connection to the instance launched # ssh_key_name: yourkey ssh_key_file: /path/to/key/yourkey.priv driver: openstack NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Authentication All of the authentication fields that you need can be found by logging into your VEXXHOST customer center. Once you've logged in, you will need to click on "CloudConsole" and then click on "API Credentials". Cloud Profile Configuration In order to get the correct image UUID and the instance type to use in the cloud profile, you can run the following command respectively: # salt-cloud --list-images=vexxhost-config # salt-cloud --list-sizes=vexxhost-config Once you have that, you can go ahead and create a new cloud profile. This profile will build an Ubuntu 12.04 LTS nb.2G instance. /etc/salt/cloud.profiles.d/vh_ubuntu1204_2G.conf: vh_ubuntu1204_2G: provider: my-vexxhost-config image: 4051139f-750d-4d72-8ef0-074f2ccc7e5a size: nb.2G Provision an instance To create an instance based on the sample profile that we created above, you can run the following salt-cloud command. # salt-cloud -p vh_ubuntu1204_2G vh_instance1 Typically, instances are provisioned in under 30 seconds on the VEXXHOST public cloud. After the instance provisions, it will be set up a minion and then return all the instance information once it's complete. Once the instance has been setup, you can test connectivity to it by running the following command: # salt vh_instance1 test.ping You can now continue to provision new instances and they will all automatically be set up as minions of the master you've defined in the configuration file. Getting Started With VMware New in version 2015.5.4. Author: Nitin Madhok <@clemson.edu> The VMware cloud module allows you to manage VMware ESX, ESXi, and vCenter. 
Dependencies The vmware module for Salt Cloud requires the pyVmomi package, which is available at PyPI: https://pypi.python.org/pypi/pyvmomi This package can be installed using pip or easy_install: pip install pyvmomi easy_install pyvmomi Configuration The VMware cloud module needs the vCenter URL, username and password to be set up in the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/vmware.conf: my-vmware-config: driver: vmware user: 'DOMAIN\user' password: 'verybadpass' url: '10.20.30.40' vcenter01: driver: vmware user: 'DOMAIN\user' password: 'verybadpass' url: 'vcenter01.domain.com' protocol: 'https' port: 443 vcenter02: driver: vmware user: 'DOMAIN\user' password: 'verybadpass' url: 'vcenter02.domain.com' protocol: 'http' port: 80 NOTE: Optionally, protocol and port can be specified if the vCenter server is not using the defaults. Default is protocol: https and port: 443. NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profiles Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.pro‐ files.d/vmware.conf: vmware-centos6.5: provider: vcenter01 clonefrom: test-vm ## Optional arguments num_cpus: 4 memory: 8GB devices: cd: CD/DVD drive 1: device_type: datastore_iso_file iso_path: "[nap004-1] vmimages/tools-isoimages/linux.iso" CD/DVD drive 2: device_type: client_device mode: atapi CD/DVD drive 3: device_type: client_device mode: passthrough disk: Hard disk 1: size: 30 Hard disk 2: size: 20 Hard disk 3: size: 5 network: Network adapter 1: name: 10.20.30-400-Test switch_type: standard ip: 10.20.30.123 gateway: [10.20.30.110] subnet_mask: 255.255.255.128 domain: example.com Network adapter 2: name: 10.30.40-500-Dev-DHCP adapter_type: e1000 switch_type: distributed Network adapter 3: name: 10.40.50-600-Prod adapter_type: vmxnet3 switch_type: distributed ip: 10.40.50.123 gateway: [10.40.50.110] subnet_mask: 255.255.255.128 domain: example.com scsi: SCSI controller 1: type: lsilogic SCSI controller 2: type: lsilogic_sas bus_sharing: virtual SCSI controller 3: type: paravirtual bus_sharing: physical domain: example.com dns_servers: - 123.127.255.240 - 123.127.255.241 - 123.127.255.242 # If cloning from template, either resourcepool or cluster MUST be specified! resourcepool: Resources cluster: Prod datastore: HUGE-DATASTORE-Cluster folder: Development datacenter: DC1 host: c4212n-002.domain.com template: False power_on: True extra_config: mem.hotadd: 'yes' guestinfo.foo: bar guestinfo.domain: foobar.com guestinfo.customVariable: customValue deploy: True private_key: /root/.ssh/mykey.pem ssh_username: cloud-user password: veryVeryBadPassword minion: master: 123.127.193.105 file_map: /path/to/local/custom/script: /path/to/remote/script /path/to/local/file: /path/to/remote/file /srv/salt/yum/epel.repo: /etc/yum.repos.d/epel.repo hardware_version: 10 provider Enter the name that was specified when the cloud provider config was created. clonefrom Enter the name of the VM/template to clone from. num_cpus Enter the number of vCPUS that you want the VM/template to have. If not specified, the current VM/template's vCPU count is used. 
memory Enter the memory size (in MB or GB) that you want the VM/template to have. If not specified, the current VM/template's memory size is used. Example memory: 8GB or memory: 8192MB. devices Enter the device specifications here. Currently, the following devices can be cre‐ ated or reconfigured: cd Enter the CD/DVD drive specification here. If the CD/DVD drive doesn't exist, it will be created with the specified configuration. If the CD/DVD drive already exists, it will be reconfigured with the specifications. The following options can be specified per CD/DVD drive: device_type Specify how the CD/DVD drive should be used. Currently supported types are client_device and datastore_iso_file. Default is device_type: client_device iso_path Enter the path to the iso file present on the datastore only if device_type: datastore_iso_file. The syntax to specify this is iso_path: "[datastoreName] vmimages/tools-isoimages/linux.iso". This field is ignored if device_type: client_device mode Enter the mode of connection only if device_type: client_device. Cur‐ rently supported modes are passthrough and atapi. This field is ignored if device_type: datastore_iso_file. Default is mode: passthrough disk Enter the disk specification here. If the hard disk doesn't exist, it will be created with the provided size. If the hard disk already exists, it will be expanded if the provided size is greater than the current size of the disk. network Enter the network adapter specification here. If the network adapter doesn't exist, a new network adapter will be created with the specified network name, type and other configuration. If the network adapter already exists, it will be reconfigured with the specifications. The following additional options can be specified per network adapter (See example above): name Enter the network name you want the network adapter to be mapped to. adapter_type Enter the network adapter type you want to create. Currently sup‐ ported types are vmxnet, vmxnet2, vmxnet3, e1000 and e1000e. If no type is specified, by default vmxnet3 will be used. switch_type Enter the type of switch to use. This decides whether to use a stan‐ dard switch network or a distributed virtual portgroup. Currently supported types are standard for standard portgroups and distributed for distributed virtual portgroups. ip Enter the static IP you want the network adapter to be mapped to. If the network specified is DHCP enabled, you do not have to specify this. gateway Enter the gateway for the network as a list. If the network specified is DHCP enabled, you do not have to specify this. subnet_mask Enter the subnet mask for the network. If the network specified is DHCP enabled, you do not have to specify this. domain Enter the domain to be used with the network adapter. If the network specified is DHCP enabled, you do not have to specify this. scsi Enter the SCSI adapter specification here. If the SCSI adapter doesn't exist, a new SCSI adapter will be created of the specified type. If the SCSI adapter already exists, it will be reconfigured with the specifications. The following additional options can be specified per SCSI adapter: type Enter the SCSI adapter type you want to create. Currently supported types are lsilogic, lsilogic_sas and paravirtual. Type must be speci‐ fied when creating a new SCSI adapter. bus_sharing Specify this if sharing of virtual disks between virtual machines is desired. The following can be specified: virtual Virtual disks can be shared between virtual machines on the same server. 
physical Virtual disks can be shared between virtual machines on any server. no Virtual disks cannot be shared between virtual machines. domain Enter the global domain name to be used for DNS. If not specified and if the VM name is a FQDN, domain is set to the domain from the VM name. Default is local. dns_servers Enter the list of DNS servers to use in order of priority. resourcepool Enter the name of the resourcepool to which the new virtual machine should be attached. This determines what compute resources will be available to the clone. NOTE: · For a clone operation from a virtual machine, it will use the same resource‐ pool as the original virtual machine unless specified. · For a clone operation from a template to a virtual machine, specifying either this or cluster is required. If both are specified, the resourcepool value will be used. · For a clone operation to a template, this argument is ignored. cluster Enter the name of the cluster whose resource pool the new virtual machine should be attached to. NOTE: · For a clone operation from a virtual machine, it will use the same cluster's resourcepool as the original virtual machine unless specified. · For a clone operation from a template to a virtual machine, specifying either this or resourcepool is required. If both are specified, the resourcepool value will be used. · For a clone operation to a template, this argument is ignored. datastore Enter the name of the datastore or the datastore cluster where the virtual machine should be located on physical storage. If not specified, the current datastore is used. NOTE: · If you specify a datastore cluster name, DRS Storage recommendation is auto‐ matically applied. · If you specify a datastore name, DRS Storage recommendation is disabled. folder Enter the name of the folder that will contain the new virtual machine. NOTE: · For a clone operation from a VM/template, the new VM/template will be added to the same folder that the original VM/template belongs to unless specified. · If both folder and datacenter are specified, the folder value will be used. datacenter Enter the name of the datacenter that will contain the new virtual machine. NOTE: · For a clone operation from a VM/template, the new VM/template will be added to the same folder that the original VM/template belongs to unless specified. · If both folder and datacenter are specified, the folder value will be used. host Enter the name of the target host where the virtual machine should be registered. If not specified: NOTE: · If resource pool is not specified, current host is used. · If resource pool is specified, and the target pool represents a stand-alone host, the host is used. · If resource pool is specified, and the target pool represents a DRS-enabled cluster, a host selected by DRS is used. · If resource pool is specified and the target pool represents a cluster without DRS enabled, an InvalidArgument exception be thrown. template Specifies whether the new virtual machine should be marked as a template or not. Default is template: False. power_on Specifies whether the new virtual machine should be powered on or not. If template: True is set, this field is ignored. Default is power_on: True. extra_config Specifies the additional configuration information for the virtual machine. This describes a set of modifications to the additional options. If the key is already present, it will be reset with the new value provided. Otherwise, a new option is added. Keys with empty values will be removed. 
deploy Specifies if salt should be installed on the newly created VM. Default is True so salt will be installed using the bootstrap script. If template: True or power_on: False is set, this field is ignored and salt will not be installed. private_key Specify the path to the private key to use to be able to ssh to the VM. ssh_username Specify the username to use in order to ssh to the VM. Default is root password Specify a password to use in order to ssh to the VM. If private_key is specified, you do not need to specify this. minion Specify custom minion configuration you want the salt minion to have. A good exam‐ ple would be to specify the master as the IP/DNS name of the master. file_map Specify file/files you want to copy to the VM before the bootstrap script is run and salt is installed. A good example of using this would be if you need to put custom repo files on the server in case your server will be in a private network and cannot reach external networks. hardware_version Specify the virtual hardware version for the vm/template that is supported by the host. customization Specify whether the new virtual machine should be customized or not. If customiza‐ tion: False is set, the new virtual machine will not be customized. Default is customization: True. Getting Started With vSphere NOTE: Deprecated since version Carbon: The vsphere cloud driver has been deprecated in favor of the vmware cloud driver and will be removed in Salt Carbon. Please refer to Getting started with VMware instead to get started with the configuration. VMware vSphere is a management platform for virtual infrastructure and cloud computing. Dependencies The vSphere module for Salt Cloud requires the PySphere package, which is available at PyPI: https://pypi.python.org/pypi/pysphere This package can be installed using pip or easy_install: # pip install pysphere # easy_install pysphere Configuration Set up the cloud config at /etc/salt/cloud.providers or in the /etc/salt/cloud.providers.d/ directory: my-vsphere-config: driver: vsphere # Set the vSphere access credentials user: marco password: polo # Set the URL of your vSphere server url: 'vsphere.example.com' NOTE: Changed in version 2015.8.0. The provider parameter in cloud provider definitions was renamed to driver. This change was made to avoid confusion with the provider parameter that is used in cloud profile definitions. Cloud provider definitions now use driver to refer to the Salt cloud mod‐ ule that provides the underlying functionality to connect to a cloud host, while cloud profiles continue to use provider to refer to provider configurations that you define. Profiles Cloud Profiles vSphere uses a Managed Object Reference to identify objects located in vCenter. The MOR ID's are used when configuring a vSphere cloud profile. Use the following reference when locating the MOR's for the cloud profile. http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=101 ↲ 7126&sliceId=1&docTypeID=DT_KB_1_1&dialogID=520386078&stateId=1%200%20520388386 Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d directory: vsphere-centos: provider: my-vsphere-config image: centos # Optional datastore: datastore-15 resourcepool: resgroup-8 folder: salt-cloud host: host-9 template: False provider Enter the name that was specified when the cloud provider profile was created. 
image Images available to build an instance can be found using the --list-images option: # salt-cloud --list-images my-vsphere-config datastore The MOR of the datastore where the virtual machine should be located. If not specified, the current datastore is used. resourcepool The MOR of the resourcepool to be used for the new vm. If not set, it will use the same resourcepool as the original vm. folder Name of the folder that will contain the new VM. If not set, the VM will be added to the folder the original VM belongs to. host The MOR of the host where the vm should be registered. If not specified: · if resourcepool is not specified, the current host is used. · if resourcepool is specified, and the target pool represents a stand-alone host, the host is used. · if resourcepool is specified, and the target pool represents a DRS-enabled cluster, a host selected by DRS is used. · if resourcepool is specified, and the target pool represents a cluster without DRS enabled, an InvalidArgument exception will be thrown. template Specifies whether or not the new virtual machine should be marked as a template. Default is False. Miscellaneous Options Miscellaneous Salt Cloud Options This page describes various miscellaneous options available in Salt Cloud Deploy Script Arguments Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script: ec2-amazon: provider: my-ec2-config image: ami-1624987f size: t1.micro ssh_username: ec2-user script: bootstrap-salt script_args: -c /tmp/ This has also been tested to work with pipes, if needed: script_args: | head Selecting the File Transport By default, Salt Cloud uses SFTP to transfer files to Linux hosts. However, if SFTP is not available, or specific SCP functionality is needed, Salt Cloud can be configured to use SCP instead. file_transport: sftp file_transport: scp Sync After Install Salt allows users to create custom modules, grains, and states which can be synchronised to minions to extend Salt with further functionality. This option will inform Salt Cloud to synchronise your custom modules, grains, states or all these to the minion just after it has been created. For this to happen, the following line needs to be added to the main cloud configuration file: sync_after_install: all The available options for this setting are: modules grains states all Setting Up New Salt Masters It has become increasingly common for users to set up multi-hierarchal infrastructures using Salt Cloud. This sometimes involves setting up an instance to be a master in addi‐ tion to a minion. With that in mind, you can now lay down master configuration on a machine by specifying master options in the profile or map file. make_master: True This will cause Salt Cloud to generate master keys for the instance, and tell salt-boot‐ strap to install the salt-master package, in addition to the salt-minion package. The default master configuration is usually appropriate for most users, and will not be changed unless specific master configuration has been added to the profile or map: master: user: root interface: 0.0.0.0 Setting Up a Salt Syndic with Salt Cloud In addition to setting up new Salt Masters, syndic`s can also be provisioned using Salt Cloud. 
In order to set up a Salt Syndic via Salt Cloud, a Salt Master needs to be installed on the new machine and a master configuration file needs to be set up using the make_master setting. This setting can be defined either in a profile config file or in a map file:

    make_master: True

To install the Salt Syndic, the only other specification that needs to be configured is the master_syndic key to specify the location of the master that the syndic will be reporting to. This modification needs to be placed in the master setting, which can be configured either in the profile, provider, or /etc/salt/cloud config file:

    master:
      master_syndic: 123.456.789 # may be either an IP address or a hostname

Many other Salt Syndic configuration settings and specifications can be passed through to the new syndic machine via the master configuration setting. See the syndic documentation for more information.

SSH Port
By default the SSH port is set to port 22. If you want to use a custom port in provider, profile, or map blocks, use the ssh_port option.

New in version 2015.5.0.

    ssh_port: 2222

Delete SSH Keys
When Salt Cloud deploys an instance, the SSH public key for the instance is added to the known_hosts file for the user that ran the salt-cloud command. When an instance is deployed, a cloud host generally recycles the IP address for the instance. When Salt Cloud attempts to deploy an instance using a recycled IP address that has previously been accessed from the same machine, the old key in the known_hosts file will cause a conflict.

In order to mitigate this issue, Salt Cloud can be configured to remove old keys from the known_hosts file when destroying the node. In order to do this, the following line needs to be added to the main cloud configuration file:

    delete_sshkeys: True

Keeping /tmp/ Files
When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:

    salt-cloud -p myprofile mymachine --keep-tmp

For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).

Hide Output From Minion Install
By default Salt Cloud will stream the output from the minion deploy script directly to STDOUT. Although this can be very useful, in certain cases you may wish to switch this off. The following config option is there to enable or disable this output:

    display_ssh_output: False

Connection Timeout
There are several stages when deploying Salt where Salt Cloud needs to wait for something to happen: the VM getting its IP address, the VM's SSH port becoming available, and so on. If you find that the Salt Cloud defaults are not enough and your deployment fails because Salt Cloud did not wait long enough, there are some settings you can tweak.

Note: all settings should be provided in lowercase, and all values should be provided in seconds.

You can tweak these settings globally, per cloud provider, or even per profile definition.

wait_for_ip_timeout
    The amount of time Salt Cloud should wait for a VM to start and get an IP back from the cloud host.
    Default: varies by cloud provider (between 5 and 25 minutes)

wait_for_ip_interval
    The amount of time Salt Cloud should sleep while querying for the VM's IP.
Default: varies by cloud provider ( between .5 and 10 seconds) ssh_connect_timeout The amount of time Salt Cloud should wait for a successful SSH connection to the VM. Default: varies by cloud provider (between 5 and 15 minutes) wait_for_passwd_timeout The amount of time until an ssh connection can be established via password or ssh key. Default: varies by cloud provider (mostly 15 seconds) wait_for_passwd_maxtries The number of attempts to connect to the VM until we abandon. Default: 15 attempts wait_for_fun_timeout Some cloud drivers check for an available IP or a successful SSH connection using a func‐ tion, namely, SoftLayer, and SoftLayer-HW. So, the amount of time Salt Cloud should retry such functions before failing. Default: 15 minutes. wait_for_spot_timeout The amount of time Salt Cloud should wait before an EC2 Spot instance is available. This setting is only available for the EC2 cloud driver. Default: 10 minutes Salt Cloud Cache Salt Cloud can maintain a cache of node data, for supported providers. The following options manage this functionality. update_cachedir On supported cloud providers, whether or not to maintain a cache of nodes returned from a --full-query. The data will be stored in msgpack format under <SALT_CACHEDIR>/cloud/active/<DRIVER>/<PROVIDER>/<NODE_NAME>.p. This setting can be True or False. diff_cache_events When the cloud cachedir is being managed, if differences are encountered between the data that is returned live from the cloud host and the data in the cache, fire events which describe the changes. This setting can be True or False. Some of these events will contain data which describe a node. Because some of the fields returned may contain sensitive data, the cache_event_strip_fields configuration option exists to strip those fields from the event return. cache_event_strip_fields: - password - priv_key The following are events that can be fired based on this data. salt/cloud/minionid/cache_node_new A new node was found on the cloud host which was not listed in the cloud cachedir. A dict describing the new node will be contained in the event. salt/cloud/minionid/cache_node_missing A node that was previously listed in the cloud cachedir is no longer available on the cloud host. salt/cloud/minionid/cache_node_diff One or more pieces of data in the cloud cachedir has changed on the cloud host. A dict containing both the old and the new data will be contained in the event. SSH Known Hosts Normally when bootstrapping a VM, salt-cloud will ignore the SSH host key. This is because it does not know what the host key is before starting (because it doesn't exist yet). If strict host key checking is turned on without the key in the known_hosts file, then the host will never be available, and cannot be bootstrapped. If a provider is able to determine the host key before trying to bootstrap it, that provider's driver can add it to the known_hosts file, and then turn on strict host key checking. This can be set up in the main cloud configuration file (normally /etc/salt/cloud) or in the provider-specific configuration file: known_hosts_file: /path/to/.ssh/known_hosts If this is not set, it will default to /dev/null, and strict host key checking will be turned off. It is highly recommended that this option is not set, unless the user has verified that the provider supports this functionality, and that the image being used is capable of pro‐ viding the necessary information. At this time, only the EC2 driver supports this func‐ tionality. 
SSH Agent New in version 2015.5.0. If the ssh key is not stored on the server salt-cloud is being run on, set ssh_agent, and salt-cloud will use the forwarded ssh-agent to authenticate. ssh_agent: True File Map Upload New in version 2014.7.0. The file_map option allows an arbitrary group of files to be uploaded to the target system before running the deploy script. This functionality requires a provider uses salt.utils.cloud.bootstrap(), which is currently limited to the ec2, gce, openstack and nova drivers. The file_map can be configured globally in /etc/salt/cloud, or in any cloud provider or profile file. For example, to upload an extra package or a custom deploy script, a cloud profile using file_map might look like: ubuntu14: provider: ec2-config image: ami-98aa1cf0 size: t1.micro ssh_username: root securitygroup: default file_map: /local/path/to/custom/script: /remote/path/to/use/custom/script /local/path/to/package: /remote/path/to/store/package Troubleshooting Steps Troubleshooting Salt Cloud This page describes various steps for troubleshooting problems that may arise while using Salt Cloud. Virtual Machines Are Created, But Do Not Respond Are TCP ports 4505 and 4506 open on the master? This is easy to overlook on new masters. Information on how to open firewall ports on various platforms can be found here. Generic Troubleshooting Steps This section describes a set of instructions that are useful to a large number of situa‐ tions, and are likely to solve most issues that arise. Version Compatibility One of the most common issues that Salt Cloud users run into is import errors. These are often caused by version compatibility issues with Salt. Salt 0.16.x works with Salt Cloud 0.8.9 or greater. Salt 0.17.x requires Salt Cloud 0.8.11. Releases after 0.17.x (0.18 or greater) should not encounter issues as Salt Cloud has been merged into Salt itself. Debug Mode Frequently, running Salt Cloud in debug mode will reveal information about a deployment which would otherwise not be obvious: salt-cloud -p myprofile myinstance -l debug Keep in mind that a number of messages will appear that look at first like errors, but are in fact intended to give developers factual information to assist in debugging. A number of messages that appear will be for cloud providers that you do not have configured; in these cases, the message usually is intended to confirm that they are not configured. Salt Bootstrap By default, Salt Cloud uses the Salt Bootstrap script to provision instances: This script is packaged with Salt Cloud, but may be updated without updating the Salt package: salt-cloud -u The Bootstrap Log If the default deploy script was used, there should be a file in the /tmp/ directory called bootstrap-salt.log. This file contains the full output from the deployment, includ‐ ing any errors that may have occurred. Keeping Temp Files Salt Cloud uploads minion-specific files to instances once they are available via SSH, and then executes a deploy script to put them into the correct place and install Salt. The --keep-tmp option will instruct Salt Cloud not to remove those files when finished with them, so that the user may inspect them for problems: salt-cloud -p myprofile myinstance --keep-tmp By default, Salt Cloud will create a directory on the target instance called /tmp/.salt‐ cloud/. This directory should be owned by the user that is to execute the deploy script, and should have permissions of 0700. 
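A quick way to confirm that the upload area was created with the expected ownership and permissions (mode 0700, owned by the user that will run the deploy script) is to inspect it directly on the target instance. A hypothetical session using the default path:

    ls -ld /tmp/.saltcloud/
    ls -l /tmp/.saltcloud/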
Most cloud hosts are configured to use root as the default initial user for deployment, and as such, this directory and all files in it should be owned by the root user.

The /tmp/.saltcloud/ directory should contain the following files:

· A deploy.sh script. This script should have permissions of 0755.

· A .pem and .pub key named after the minion. The .pem file should have permissions of 0600. Ensure that the .pem and .pub files have been properly copied to the /etc/salt/pki/minion/ directory.

· A file called minion. This file should have been copied to the /etc/salt/ directory.

· Optionally, a file called grains. This file, if present, should have been copied to the /etc/salt/ directory.

Unprivileged Primary Users
Some cloud hosts, most notably EC2, are configured with a different primary user. Some common examples are ec2-user, ubuntu, fedora, and bitnami. In these cases, the /tmp/.saltcloud/ directory and all files in it should be owned by this user.

Some cloud hosts, such as EC2, are configured to not require these users to provide a password when using the sudo command. Because it is more secure to require sudo users to provide a password, other hosts are configured that way. If sudo on the instance requires a password, it needs to be configured in Salt Cloud. A password for sudo to use may be added to either the provider configuration or the profile configuration:

    sudo_password: mypassword

/tmp/ is Mounted as noexec
It is more secure to mount the /tmp/ directory with a noexec option. This is uncommon on most cloud hosts, but very common in private environments. To see if the /tmp/ directory is mounted this way, run the following command:

    mount | grep tmp

If the output of this command includes a line that looks like this, then the /tmp/ directory is mounted as noexec:

    tmpfs on /tmp type tmpfs (rw,noexec)

If this is the case, then the deploy_command will need to be changed in order to run the deploy script through the sh command, rather than trying to execute it directly. This may be specified in either the provider or the profile config:

    deploy_command: sh /tmp/.saltcloud/deploy.sh

Please note that by default, Salt Cloud will place its files in a directory called /tmp/.saltcloud/. This may also be changed in the provider or profile configuration:

    tmp_dir: /tmp/.saltcloud/

If this directory is changed, then the deploy_command needs to be changed in order to reflect the tmp_dir configuration.

Executing the Deploy Script Manually
If all of the files needed for deployment were successfully uploaded to the correct locations, and contain the correct permissions and ownerships, the deploy script may be executed manually in order to check for other issues:

    cd /tmp/.saltcloud/
    ./deploy.sh

Extending Salt Cloud
Writing Cloud Driver Modules
Salt Cloud runs on a module system similar to the main Salt project. The modules inside saltcloud exist in the salt/cloud/clouds directory of the salt source.

There are two basic types of cloud modules. If a cloud host is supported by libcloud, then using it is the fastest route to getting a module written. The Apache Libcloud project is located at:

    http://libcloud.apache.org/

Not every cloud host is supported by libcloud. Additionally, not every feature in a supported cloud host is necessarily supported by libcloud. In either of these cases, a module can be created which does not rely on libcloud.

All Driver Modules
The following functions are required by all driver modules, whether or not they are based on libcloud.
The __virtual__() Function
This function determines whether or not to make this cloud module available upon execution. Most often, it uses get_configured_provider() to determine if the necessary configuration has been set up. It may also check for necessary imports, to decide whether to load the module. In most cases, it will return a True or False value. If the name of the driver used does not match the filename, then that name should be returned instead of True. An example of this may be seen in the Azure module:

  https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/msazure.py

The get_configured_provider() Function
This function uses config.is_provider_configured() to determine whether all required information for this driver has been configured. The last value in the list of required settings should be followed by a comma.

Libcloud Based Modules
Writing a cloud module based on libcloud has two major advantages. First of all, much of the work has already been done by the libcloud project. Second, most of the functions necessary to Salt have already been added to the Salt Cloud project.

The create() Function
The most important function that does need to be manually written is the create() function. This is what is used to request a virtual machine to be created by the cloud host, wait for it to become available, and then (optionally) log in and install Salt on it.

A good example to follow for writing a cloud driver module based on libcloud is the module provided for Linode:

  https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/linode.py

The basic flow of a create() function is as follows:

· Send a request to the cloud host to create a virtual machine.
· Wait for the virtual machine to become available.
· Generate kwargs to be used to deploy Salt.
· Log into the virtual machine and deploy Salt.
· Return a data structure that describes the newly-created virtual machine.

At various points throughout this function, events may be fired on the Salt event bus. Four of these events, which are described below, are required. Other events may be added by the user, where appropriate.

When the create() function is called, it is passed a data structure called vm_. This dict contains a composite of information describing the virtual machine to be created. A dict called __opts__ is also provided by Salt, which contains the options used to run Salt Cloud, as well as a set of configuration and environment variables.

The first thing the create() function must do is fire an event stating that it has started the create process. This event is tagged salt/cloud/<vm name>/creating. The payload contains the names of the VM, profile, and provider.

A set of kwargs is then usually created, to describe the parameters required by the cloud host to request the virtual machine.

An event is then fired to state that a virtual machine is about to be requested. It is tagged as salt/cloud/<vm name>/requesting. The payload contains most or all of the parameters that will be sent to the cloud host. Any private information (such as passwords) should not be sent in the event.

After a request is made, a set of deploy kwargs will be generated. These will be used to install Salt on the target machine. Windows options are supported at this point, and should be generated, even if the cloud host does not currently support Windows. This will save time in the future if the host does eventually decide to support Windows.

An event is then fired to state that the deploy process is about to begin.
This event is tagged salt/cloud/<vm name>/deploying. The payload for the event will contain a set of deploy kwargs, useful for debugging purposes. Any private data, including passwords and keys (including public keys), should be stripped from the deploy kwargs before the event is fired.

If any Windows options have been passed in, the salt.utils.cloud.deploy_windows() function will be called. Otherwise, it will be assumed that the target is a Linux or Unix machine, and salt.utils.cloud.deploy_script() will be called.

Both of these functions will wait for the target machine to become available, then for the necessary port to become available, and then for a successful login that can be used to install Salt. Minion configuration and keys will then be uploaded to a temporary directory on the target by the appropriate function.

On a Windows target, the Windows Minion Installer will be run in silent mode. On a Linux/Unix target, a deploy script (bootstrap-salt.sh, by default) will be run, which will auto-detect the operating system, and install Salt using its native package manager. These do not need to be handled by the developer in the cloud module.

The salt.utils.cloud.validate_windows_cred() function has been extended to take the number of retries and retry_delay parameters in case a specific cloud host has a delay between providing the Windows credentials and the credentials being available for use. In their create() function, or in a sub-function called during the creation process, developers should use the win_deploy_auth_retries and win_deploy_auth_retry_delay parameters from the provider configuration to allow the end user to customize the number of tries and delay between tries for their particular host.

After the appropriate deploy function completes, a final event is fired which describes the virtual machine that has just been created. This event is tagged salt/cloud/<vm name>/created. The payload contains the names of the VM, profile, and provider.

Finally, a dict (queried from the provider) which describes the new virtual machine is returned to the user. Because this data is not fired on the event bus it can, and should, return any passwords that were returned by the cloud host. In some cases (for example, Rackspace), this is the only time that the password can be queried by the user; post-creation queries may not contain password information (depending upon the host).

The libcloudfuncs Functions
A number of other functions are required for all cloud hosts. However, with libcloud-based modules, these are all provided for free by the libcloudfuncs library. The following two lines set up the imports:

  from salt.cloud.libcloudfuncs import *   # pylint: disable=W0614,W0401
  from salt.utils import namespaced_function

And then a series of declarations will make the necessary functions available within the cloud module.
  get_size = namespaced_function(get_size, globals())
  get_image = namespaced_function(get_image, globals())
  avail_locations = namespaced_function(avail_locations, globals())
  avail_images = namespaced_function(avail_images, globals())
  avail_sizes = namespaced_function(avail_sizes, globals())
  script = namespaced_function(script, globals())
  destroy = namespaced_function(destroy, globals())
  list_nodes = namespaced_function(list_nodes, globals())
  list_nodes_full = namespaced_function(list_nodes_full, globals())
  list_nodes_select = namespaced_function(list_nodes_select, globals())
  show_instance = namespaced_function(show_instance, globals())

If necessary, these functions may be replaced by removing the appropriate declaration line, and then adding the function as normal. These functions are required for all cloud modules, and are described in detail in the next section.

Non-Libcloud Based Modules
In some cases, using libcloud is not an option. This may be because libcloud has not yet included the necessary driver itself, or it may be that the driver that is included with libcloud does not contain all of the necessary features required by the developer. When this is the case, some or all of the functions in libcloudfuncs may be replaced. If they are all replaced, the libcloud imports should be absent from the Salt Cloud module.

A good example of a non-libcloud driver is the DigitalOcean driver:

  https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/digital_ocean.py

The create() Function
The create() function must be created as described in the libcloud-based module documentation.

The get_size() Function
This function is only necessary for libcloud-based modules, and does not need to exist otherwise.

The get_image() Function
This function is only necessary for libcloud-based modules, and does not need to exist otherwise.

The avail_locations() Function
This function returns a list of locations available, if the cloud host uses multiple data centers. It is not necessary if the cloud host uses only one data center. It is normally called using the --list-locations option.

  salt-cloud --list-locations my-cloud-provider

The avail_images() Function
This function returns a list of images available for this cloud provider. There are not currently any known cloud providers that do not provide this functionality, though they may refer to images by a different name (for example, "templates"). It is normally called using the --list-images option.

  salt-cloud --list-images my-cloud-provider

The avail_sizes() Function
This function returns a list of sizes available for this cloud provider. Generally, this refers to a combination of RAM, CPU, and/or disk space. This functionality may not be present on some cloud providers. For example, the Parallels module breaks down RAM, CPU, and disk space into separate options, whereas in other providers, these options are baked into the image. It is normally called using the --list-sizes option.

  salt-cloud --list-sizes my-cloud-provider

The script() Function
This function builds the deploy script to be used on the remote machine. It is likely to be moved into the salt.utils.cloud library in the near future, as it is very generic and can usually be copied wholesale from another module. An excellent example is in the Azure driver.

The destroy() Function
This function irreversibly destroys a virtual machine on the cloud provider. Before doing so, it should fire an event on the Salt event bus. The tag for this event is salt/cloud/<vm name>/destroying.
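A minimal sketch of what a destroy() function can look like in a non-libcloud driver is shown here. The query() helper and its parameters are hypothetical placeholders for the driver's own API call, __opts__ is injected by Salt's loader, and the event calls follow the pattern used by the existing drivers; the sketch also fires the post-destroy event described next.

  import salt.utils.cloud
  from salt.exceptions import SaltCloudSystemExit


  def destroy(name, call=None):
      '''
      Destroy a node. Normally invoked as: salt-cloud -d myinstance
      '''
      if call == 'function':
          raise SaltCloudSystemExit(
              'The destroy action must be called with -d, --destroy, '
              '-a or --action.'
          )

      # Fire an event before any destructive work is done.
      salt.utils.cloud.fire_event(
          'event',
          'destroying instance',
          'salt/cloud/{0}/destroying'.format(name),
          {'name': name},
          transport=__opts__['transport']
      )

      # Hypothetical helper: ask the cloud host to delete the machine.
      data = query(action='terminate', instance=name)

      # Fire an event once the machine has been destroyed.
      salt.utils.cloud.fire_event(
          'event',
          'destroyed instance',
          'salt/cloud/{0}/destroyed'.format(name),
          {'name': name},
          transport=__opts__['transport']
      )

      return data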
Once the virtual machine has been destroyed, another event is fired. The tag for that event is salt/cloud/<vm name>/destroyed.

This function is normally called with the -d option:

  salt-cloud -d myinstance

The list_nodes() Function
This function returns a list of nodes available on this cloud provider, using the following fields:

· id (str)
· image (str)
· size (str)
· state (str)
· private_ips (list)
· public_ips (list)

No other fields should be returned in this function, and all of these fields should be returned, even if empty. The private_ips and public_ips fields should always be of a list type, even if empty, and the other fields should always be of a str type. This function is normally called with the -Q option:

  salt-cloud -Q

The list_nodes_full() Function
All information available about all nodes should be returned in this function. The fields in the list_nodes() function should also be returned, even if they would not normally be provided by the cloud provider. This is because some functions, both within Salt and in third-party tools, will break if an expected field is not present. This function is normally called with the -F option:

  salt-cloud -F

The list_nodes_select() Function
This function returns only the fields specified in the query.selection option in /etc/salt/cloud. Because this function is so generic, all of the heavy lifting has been moved into the salt.utils.cloud library.

A function to call list_nodes_select() still needs to be present. In general, the following code can be used as-is:

  def list_nodes_select(call=None):
      '''
      Return a list of the VMs that are on the provider, with select fields
      '''
      return salt.utils.cloud.list_nodes_select(
          list_nodes_full('function'), __opts__['query.selection'], call,
      )

However, depending on the cloud provider, additional variables may be required. For instance, some modules use a conn object, or may need to pass other options into list_nodes_full(). In this case, be sure to update the function appropriately:

  def list_nodes_select(conn=None, call=None):
      '''
      Return a list of the VMs that are on the provider, with select fields
      '''
      if not conn:
          conn = get_conn()   # pylint: disable=E0602

      return salt.utils.cloud.list_nodes_select(
          list_nodes_full(conn, 'function'), __opts__['query.selection'], call,
      )

This function is normally called with the -S option:

  salt-cloud -S

The show_instance() Function
This function is used to display all of the information about a single node that is available from the cloud provider. The simplest way to provide this is usually to call list_nodes_full(), and return just the data for the requested node. It is normally called as an action:

  salt-cloud -a show_instance myinstance

Actions and Functions
Extra functionality may be added to a cloud provider in the form of an --action or a --function. Actions are performed against a cloud instance/virtual machine, and functions are performed against a cloud provider.

Actions
Actions are calls that are performed against a specific instance or virtual machine. The show_instance action should be available in all cloud modules. Actions are normally called with the -a option:

  salt-cloud -a show_instance myinstance

Actions must accept a name as a first argument, may optionally support any number of kwargs as appropriate, and must accept an argument of call, with a default of None.

Before performing any other work, an action should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user.
A basic action looks like:

  def show_instance(name, call=None):
      '''
      Show the details from EC2 concerning an AMI
      '''
      if call != 'action':
          raise SaltCloudSystemExit(
              'The show_instance action must be called with -a or --action.'
          )

      return _get_node(name)

Please note that generic kwargs, if used, are passed through to actions as kwargs and not **kwargs. An example of this is seen in the Functions section.

Functions
Functions are calls that are performed against a specific cloud provider. An optional function that is often useful is show_image, which describes an image in detail. Functions are normally called with the -f option:

  salt-cloud -f show_image my-cloud-provider image='Ubuntu 13.10 64-bit'

A function may accept any number of kwargs as appropriate, and must accept an argument of call with a default of None. Before performing any other work, a function should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user.

A basic function looks like:

  def show_image(kwargs, call=None):
      '''
      Show the details from EC2 concerning an AMI
      '''
      if call != 'function':
          raise SaltCloudSystemExit(
              'The show_image action must be called with -f or --function.'
          )

      params = {'ImageId.1': kwargs['image'],
                'Action': 'DescribeImages'}

      result = query(params)
      log.info(result)

      return result

Take note that generic kwargs are passed through to functions as kwargs and not **kwargs.

OS Support for Cloud VMs
Salt Cloud works primarily by executing a script on the virtual machines as soon as they become available. The script that is executed is referenced in the cloud profile as the script. In older versions, this was the os argument. This was changed in 0.8.2.

A number of legacy scripts exist in the deploy directory in the saltcloud source tree. The preferred method is currently to use the salt-bootstrap script. A stable version is included with each release tarball starting with 0.8.4. The most updated version can be found at:

  https://github.com/saltstack/salt-bootstrap

If you do not specify a script argument, this script will be used as the default.

If the Salt Bootstrap script does not meet your needs, you may write your own. The script should be written in bash and is a Jinja template. Deploy scripts need to execute a number of functions to do a complete salt setup. These functions include:

1. Install the salt minion. If this can be done via system packages this method is HIGHLY preferred.
2. Add the salt minion keys before the minion is started for the first time. The minion keys are available as strings that can be copied into place in the Jinja template under the dict named "vm".
3. Start the salt-minion daemon and enable it at startup time.
4. Set up the minion configuration file from the "minion" data available in the Jinja template.

A good, well-commented example of this process is the Fedora deployment script:

  https://github.com/saltstack/salt-cloud/blob/master/saltcloud/deploy/Fedora.sh

A number of legacy deploy scripts are included with the release tarball. None of them are as functional or complete as Salt Bootstrap, and are still included for academic purposes.
Other Generic Deploy Scripts
If you want to be assured of always using the latest Salt Bootstrap script, there are a few generic templates available in the deploy directory of your saltcloud source tree:

  curl-bootstrap
  curl-bootstrap-git
  python-bootstrap
  wget-bootstrap
  wget-bootstrap-git

These are example scripts which were designed to be customized, adapted, and refit to meet your needs. One important use of them is to pass options to the salt-bootstrap script, such as updating to specific git tags.

Post-Deploy Commands
Once a minion has been deployed, it has the option to run a salt command. Normally, this would be the state.highstate command, which would finish provisioning the VM. Another common option is state.sls, or for just testing, test.ping. This is configured in the main cloud config file:

  start_action: state.highstate

This is currently considered to be experimental functionality, and may not work well with all cloud hosts. If you experience problems with Salt Cloud hanging after Salt is deployed, consider using Startup States instead: http://docs.saltstack.com/ref/states/startup.html

Skipping the Deploy Script
For whatever reason, you may want to skip the deploy script altogether. This results in a VM being spun up much faster, with absolutely no configuration. This can be set from the command line:

  salt-cloud --no-deploy -p micro_aws my_instance

Or it can be set from the main cloud config file:

  deploy: False

Or it can be set from the provider's configuration:

  RACKSPACE.user: example_user
  RACKSPACE.apikey: 123984bjjas87034
  RACKSPACE.deploy: False

Or even on the VM's profile settings:

  ubuntu_aws:
    provider: my-ec2-config
    image: ami-7e2da54e
    size: t1.micro
    deploy: False

The default for deploy is True.

In the profile, you may also set the script option to None:

  script: None

This is the slowest option, since it still uploads the None deploy script and executes it.

Updating Salt Bootstrap
Salt Bootstrap can be updated automatically with salt-cloud:

  salt-cloud -u
  salt-cloud --update-bootstrap

Bear in mind that this updates to the latest stable version from:

  https://bootstrap.saltstack.com/stable/bootstrap-salt.sh

To update the Salt Bootstrap script to the develop version, run the following command on the Salt minion host with salt-cloud installed:

  salt-call config.gather_bootstrap_script 'https://bootstrap.saltstack.com/develop/bootstrap-salt.sh'

Or just download the file manually:

  curl -L 'https://bootstrap.saltstack.com/develop' > /etc/salt/cloud.deploy.d/bootstrap-salt.sh

Keeping /tmp/ Files
When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:

  salt-cloud -p myprofile mymachine --keep-tmp

For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).

Deploy Script Arguments
Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary.
script_args can be specified in either the profile or the map file, to pass arguments to the deploy script:

  aws-amazon:
    provider: my-ec2-config
    image: ami-1624987f
    size: t1.micro
    ssh_username: ec2-user
    script: bootstrap-salt
    script_args: -c /tmp/

This has also been tested to work with pipes, if needed:

  script_args: | head

Using Salt Cloud from Salt
Using the Salt Modules for Cloud
In addition to the salt-cloud command, Salt Cloud can be called from Salt, in a variety of different ways. Most users will be interested in either the execution module or the state module, but it is also possible to call Salt Cloud as a runner.

Because the actual work will be performed on a remote minion, the normal Salt Cloud configuration must exist on any target minion that needs to execute a Salt Cloud command. Because Salt Cloud now supports breaking out configuration into individual files, the configuration is easily managed using Salt's own file.managed state function. For example, the following directories allow this configuration to be managed easily:

  /etc/salt/cloud.providers.d/
  /etc/salt/cloud.profiles.d/

Minion Keys
Keep in mind that when creating minions, Salt Cloud will create public and private minion keys, upload them to the minion, and place the public key on the machine that created the minion. It will not attempt to place any public minion keys on the master, unless the minion which was used to create the instance is also the Salt Master. This is because granting arbitrary minions access to modify keys on the master is a serious security risk, and must be avoided.

Execution Module
The cloud module is available to use from the command line. At the moment, almost every standard Salt Cloud feature is available to use. The following commands are available:

list_images
This command is designed to show images that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). Listing images requires a provider to be configured, and specified:

  salt myminion cloud.list_images my-cloud-provider

list_sizes
This command is designed to show sizes that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing sizes requires a provider to be configured, and specified:

  salt myminion cloud.list_sizes my-cloud-provider

list_locations
This command is designed to show locations that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing locations requires a provider to be configured, and specified:

  salt myminion cloud.list_locations my-cloud-provider

query
This command is used to query all configured cloud providers, and display all instances associated with those accounts. By default, it will run a standard query, returning the following fields:

id
  The name or ID of the instance, as used by the cloud provider.

image
  The disk image that was used to create this instance.

private_ips
  Any private IP addresses currently assigned to this instance.
public_ips
  Any public IP addresses currently assigned to this instance.

size
  The size of the instance; can refer to RAM, CPU(s), disk space, etc., depending on the cloud provider.

state
  The running state of the instance; for example, running, stopped, pending, etc. This state is dependent upon the provider.

This command may also be used to perform a full query or a select query, as described below. The following usages are available:

  salt myminion cloud.query
  salt myminion cloud.query list_nodes
  salt myminion cloud.query list_nodes_full

full_query
This command behaves like the query command, but lists all information concerning each instance as provided by the cloud provider, in addition to the fields returned by the query command.

  salt myminion cloud.full_query

select_query
This command behaves like the query command, but returns only select fields as defined in the /etc/salt/cloud configuration file. A sample configuration for this section of the file might look like:

  query.selection:
    - id
    - key_name

This configuration would only return the id and key_name fields, for those cloud providers that support those two fields. This would be called using the following command:

  salt myminion cloud.select_query

profile
This command is used to create an instance using a profile that is configured on the target minion. Please note that the profile must be configured before this command can be used with it.

  salt myminion cloud.profile ec2-centos64-x64 my-new-instance

Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation.

create
This command is similar to the profile command, in that it is used to create a new instance. However, it does not require a profile to be pre-configured. Instead, all of the options that are normally configured in a profile are passed directly to Salt Cloud to create the instance:

  salt myminion cloud.create my-ec2-config my-new-instance \
    image=ami-1624987f size='t1.micro' ssh_username=ec2-user \
    securitygroup=default delvol_on_destroy=True

Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation.

destroy
This command is used to destroy an instance or instances. This command will search all configured providers and remove any instance(s) which matches the name(s) passed in here. The results of this command are non-reversible and should be used with caution.

  salt myminion cloud.destroy myinstance
  salt myminion cloud.destroy myinstance1,myinstance2

action
This command implements both the action and the function commands used in the standard salt-cloud command. If one of the standard action commands is used, an instance name must be provided. If one of the standard function commands is used, a provider configuration must be named.

  salt myminion cloud.action start instance=myinstance
  salt myminion cloud.action show_image provider=my-ec2-config \
    image=ami-1624987f

The actions available are largely dependent upon the module for the specific cloud provider. The following actions are available for all cloud providers:

list_nodes
  This is a direct call to the query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.

list_nodes_full
  This is a direct call to the full_query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
list_nodes_select
  This is a direct call to the select_query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.

show_instance
  This is a thin wrapper around list_nodes, which returns the full information about a single instance. An instance name must be provided.

State Module
A subset of the execution module is available through the cloud state module. Not all functions are currently included, because there is insufficient code for them to perform statefully. For example, a command to create an instance may be issued with a series of options, but those options cannot currently be statefully managed. Additional states to manage these options will be released at a later time.

cloud.present
This state will ensure that an instance is present inside a particular cloud provider. Any option that is normally specified in the cloud.create execution module and function may be declared here, but only the actual presence of the instance will be managed statefully.

  my-instance-name:
    cloud.present:
      - provider: my-ec2-config
      - image: ami-1624987f
      - size: 't1.micro'
      - ssh_username: ec2-user
      - securitygroup: default
      - delvol_on_destroy: True

cloud.profile
This state will ensure that an instance is present inside a particular cloud provider. This function calls the cloud.profile execution module and function, but as with cloud.present, only the actual presence of the instance will be managed statefully.

  my-instance-name:
    cloud.profile:
      - profile: ec2-centos64-x64

cloud.absent
This state will ensure that an instance (identified by name) does not exist in any of the cloud providers configured on the target minion. Please note that this state is non-reversible and may be considered especially destructive when issued as a cloud state.

  my-instance-name:
    cloud.absent

Runner Module
The cloud runner module is executed on the master, and performs actions using the configuration and Salt modules on the master itself. This means that any public minion keys will also be properly accepted by the master.

Using the functions in the runner module is no different from using those in the execution module, outside of the behavior described in the above paragraph. The following functions are available inside the runner:

· list_images
· list_sizes
· list_locations
· query
· full_query
· select_query
· profile
· destroy
· action

Outside of the standard usage of salt-run itself, commands are executed as usual:

  salt-run cloud.profile ec2-centos64-x86_64 my-instance-name

CloudClient
The execution, state, and runner modules ultimately all use the CloudClient library that ships with Salt. To use the CloudClient library locally (either on the master or a minion), create a client object and issue a command against it:

  import salt.cloud
  import pprint

  client = salt.cloud.CloudClient('/etc/salt/cloud')
  nodes = client.query()
  pprint.pprint(nodes)

Reactor
Examples of using the reactor with Salt Cloud are available in the ec2-autoscale-reactor and salt-cloud-reactor formulas.

Feature Comparison
Feature Matrix
A number of features are available in most cloud hosts, but not all are available everywhere. This may be because the feature isn't supported by the cloud host itself, or it may only be that the feature has not yet been added to Salt Cloud. In a handful of cases, it is because the feature does not make sense for a particular cloud provider (Saltify, for instance).
This matrix shows which features are available in which cloud hosts, as far as Salt Cloud is concerned. This is not a comprehensive list of all features available in all cloud hosts, and should not be used to make business decisions concerning choosing a cloud host. In most cases, adding support for a feature to Salt Cloud requires only a little effort.

Legacy Drivers
Both AWS and Rackspace are listed as "Legacy". This is because those drivers have been replaced by other drivers, which are generally the preferred method for working with those hosts. The EC2 driver should be used instead of the AWS driver, when possible. The OpenStack driver should be used instead of the Rackspace driver, unless the user is dealing with instances in "the old cloud" in Rackspace.

Note for Developers
When adding new features to a particular cloud host, please make sure to add the feature to this table. Additionally, if you notice a feature that is not properly listed here, pull requests to fix the listing are appreciated.

Standard Features
These are features that are available for almost every cloud host. The drivers compared here are AWS (Legacy), CloudStack, DigitalOcean, EC2, GoGrid, JoyEnt, Linode, OpenStack, Parallels, Rackspace (Legacy), Saltify, SoftLayer, SoftLayer Hardware, and Aliyun. Each feature is listed with the drivers that support it:

· Query: all drivers except Saltify
· Full Query: all drivers except Saltify
· Selective Query: all drivers except Saltify
· List Sizes: all drivers except Saltify
· List Images: all drivers except Saltify
· List Locations: all drivers except Saltify
· create: all drivers
· destroy: all drivers except Saltify

Actions
These are features that are performed on a specific instance, and require an instance name to be passed in. For example:

  # salt-cloud -a attach_volume ami.example.com

· attach_volume: EC2
· create_attach_volumes: AWS (Legacy), EC2
· del_tags: AWS (Legacy), EC2
· delvol_on_destroy: EC2
· detach_volume: EC2
· disable_term_protect: AWS (Legacy), EC2
· enable_term_protect: AWS (Legacy), EC2
· get_tags: AWS (Legacy), EC2
· keepvol_on_destroy: EC2
· list_keypairs: DigitalOcean
· rename: AWS (Legacy), EC2
· set_tags: AWS (Legacy), EC2
· show_delvol_on_destroy: EC2
· show_instance: DigitalOcean, EC2, Linode, Parallels, SoftLayer, SoftLayer Hardware, Aliyun
· show_term_protect: EC2
· start: AWS (Legacy), EC2, JoyEnt, Linode, Parallels, Aliyun
· stop: AWS (Legacy), EC2, JoyEnt, Linode, Parallels, Aliyun
· take_action: JoyEnt

Functions
These are features that are performed against a specific cloud provider, and require the name of the provider to be passed in. For example:

  # salt-cloud -f list_images my_digitalocean

· block_device_mappings: AWS (Legacy)
· create_keypair: EC2
· create_volume: EC2
· delete_key: JoyEnt
· delete_keypair: EC2
· delete_volume: EC2
· get_image: DigitalOcean, JoyEnt, Parallels, Aliyun
· get_ip: CloudStack
· get_key: CloudStack
· get_keyid: DigitalOcean
· get_keypair: CloudStack
· get_networkid: CloudStack
· get_node: JoyEnt
· get_password: CloudStack
· get_size: DigitalOcean, JoyEnt, Aliyun
· get_spot_config: EC2
· get_subnetid: EC2
· iam_profile: AWS (Legacy), EC2, Aliyun
· import_key: JoyEnt
· key_list: JoyEnt
· keyname: AWS (Legacy), EC2
· list_availability_zones: EC2, Aliyun
· list_custom_images: SoftLayer
· list_keys: JoyEnt
· list_nodes: all drivers
· list_nodes_full: all drivers
· list_nodes_select: all drivers
· list_vlans: SoftLayer, SoftLayer Hardware
· rackconnect: OpenStack
· reboot: EC2, JoyEnt, Aliyun
· reformat_node: JoyEnt
· securitygroup: AWS (Legacy), EC2
· securitygroupid: EC2, Aliyun
· show_image: EC2, Parallels, Aliyun
· show_key: JoyEnt
· show_keypair: DigitalOcean, EC2
· show_volume: EC2, Aliyun

Tutorials
Salt Cloud Quickstart
Salt Cloud is built in to Salt, and the easiest way to run Salt Cloud is directly from your Salt Master. On most platforms you can install the salt-cloud package from the same repo that you used to install Salt.

This quickstart walks you through the basic steps of setting up a cloud host and defining some virtual machines to create.

NOTE: Salt Cloud has its own process and does not rely on the Salt Master, so it can be installed on a standalone minion instead of your Salt Master.

Define a Provider
The first step is to add the credentials for your cloud host.
Credentials and other settings provided by the cloud host are stored in provider configuration files. Provider configurations contain the details needed to connect to a cloud host such as EC2, GCE, Rackspace, etc., and any global options that you want set on your cloud minions (such as the location of your Salt Master).

On your Salt Master, browse to /etc/salt/cloud.providers.d/ and create a file called <provider>.conf, replacing <provider> with ec2, softlayer, and so on. The name helps you identify the contents, and is not important as long as the file ends in .conf.

Next, browse to the Provider specifics and add any required settings for your cloud host to this file. Here is an example for Amazon EC2:

  my-ec2:
    driver: ec2

    # Set the EC2 access credentials (see below)
    #
    id: 'HJGRYCILJLKJYG'
    key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'

    # Make sure this key is owned by root with permissions 0400.
    #
    private_key: /etc/salt/my_test_key.pem
    keyname: my_test_key
    securitygroup: default

    # Optional: Set up the location of the Salt Master
    #
    minion:
      master: saltmaster.example.com

The required configuration varies between cloud hosts, so make sure you read the provider specifics.

List Cloud Provider Options
You can now query the cloud provider you configured for available locations, images, and sizes. This information is used when you set up VM profiles.

  salt-cloud --list-locations <provider_name>  # my-ec2 in the previous example
  salt-cloud --list-images <provider_name>
  salt-cloud --list-sizes <provider_name>

Replace <provider_name> with the name of the provider configuration you defined.

Create VM Profiles
On your Salt Master, browse to /etc/salt/cloud.profiles.d/ and create a file called <profile>.conf, replacing <profile> with ec2, softlayer, and so on. The file must end in .conf.

You can now add any custom profiles you'd like to define to this file. Here are a few examples:

  micro_ec2:
    provider: my-ec2
    image: ami-d514f291
    size: t1.micro

  medium_ec2:
    provider: my-ec2
    image: ami-d514f291
    size: m3.medium

  large_ec2:
    provider: my-ec2
    image: ami-d514f291
    size: m3.large

Notice that the provider in our profile matches the provider name that we defined. That is how Salt Cloud knows how to connect to a cloud host to create a VM with these attributes.

Create VMs
VMs are created by calling salt-cloud with the following options:

  salt-cloud -p <profile> <name1> <name2> ...

For example:

  salt-cloud -p micro_ec2 minion1 minion2

Destroy VMs
Add a -d and the minion name you provided to destroy:

  salt-cloud -d minion1 minion2

Query VMs
You can view details about the VMs you've created using --query:

  salt-cloud --query

Cloud Map
Now that you know how to create and destroy individual VMs, next you should learn how to use a cloud map to create a number of VMs at once. Cloud maps let you define a map of your infrastructure and quickly provision any number of VMs. On subsequent runs, any VMs that do not exist are created, and VMs that are already configured are left unmodified. See Cloud Map File.

Using Salt Cloud with the Event Reactor
One of the most powerful features of the Salt framework is the Event Reactor. As the Reactor was in development, Salt Cloud was regularly updated to take advantage of the Reactor upon completion. As such, various aspects of both the creation and destruction of instances with Salt Cloud fire events to the Salt Master, which can be used by the Event Reactor.
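While developing or debugging reactors, it can help to watch these events as they arrive on the master event bus. The following is a minimal, hedged sketch; the socket directory /var/run/salt/master is an assumption and should match the master's sock_dir setting.

  # Sketch: print Salt Cloud events from the master event bus.
  import salt.utils.event

  event_bus = salt.utils.event.MasterEvent('/var/run/salt/master')

  # Only events whose tag starts with 'salt/cloud' are of interest here.
  for ret in event_bus.iter_events(tag='salt/cloud', full=True):
      print('{0}: {1}'.format(ret['tag'], ret['data']))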
Event Structure As of this writing, all events in Salt Cloud have a tag, which includes the ID of the instance being managed, and a payload which describes the task that is currently being handled. A Salt Cloud tag looks like: salt/cloud/<minion_id>/<task> For instance, the first event fired when creating an instance named web1 would look like: salt/cloud/web1/creating Assuming this instance is using the ec2-centos profile, which is in turn using the ec2-config provider, the payload for this tag would look like: {'name': 'web1', 'profile': 'ec2-centos', 'provider': 'ec2-config:ec2'} Available Events When an instance is created in Salt Cloud, whether by map, profile, or directly through an API, a minimum of five events are normally fired. More may be available, depending upon the cloud provider being used. Some of the common events are described below. salt/cloud/<minion_id>/creating This event states simply that the process to create an instance has begun. At this point in time, no actual work has begun. The payload for this event includes: name profile provider salt/cloud/<minion_id>/requesting Salt Cloud is about to make a request to the cloud provider to create an instance. At this point, all of the variables required to make the request have been gathered, and the pay‐ load of the event will reflect those variables which do not normally pose a security risk. What is returned here is dependent upon the cloud provider. Some common variables are: name image size location salt/cloud/<minion_id>/querying The instance has been successfully requested, but the necessary information to log into the instance (such as IP address) is not yet available. This event marks the beginning of the process to wait for this information. The payload for this event normally only includes the instance_id. salt/cloud/<minion_id>/waiting_for_ssh The information required to log into the instance has been retrieved, but the instance is not necessarily ready to be accessed. Following this event, Salt Cloud will wait for the IP address to respond to a ping, then wait for the specified port (usually 22) to respond to a connection, and on Linux systems, for SSH to become available. Salt Cloud will attempt to issue the date command on the remote system, as a means to check for availabil‐ ity. If no ssh_username has been specified, a list of usernames (starting with root) will be attempted. If one or more usernames was configured for ssh_username, they will be added to the beginning of the list, in order. The payload for this event normally only includes the ip_address. salt/cloud/<minion_id>/deploying The necessary port has been detected as available, and now Salt Cloud can log into the instance, upload any files used for deployment, and run the deploy script. Once the script has completed, Salt Cloud will log back into the instance and remove any remaining files. A number of variables are used to deploy instances, and the majority of these will be available in the payload. Any keys, passwords or other sensitive data will be scraped from the payload. Most of the variables returned will be related to the profile or provider config, and any default values that could have been changed in the profile or provider, but weren't. salt/cloud/<minion_id>/created The deploy sequence has completed, and the instance is now available, Salted, and ready for use. This event is the final task for Salt Cloud, before returning instance informa‐ tion to the user and exiting. 
The payload for this event contains little more than the initial creating event. This event is required in all cloud providers.

Configuring the Event Reactor
The Event Reactor is built into the Salt Master process, and as such is configured via the master configuration file. Normally this will be a YAML file located at /etc/salt/master. Additionally, master configuration items can be stored, in YAML format, inside the /etc/salt/master.d/ directory.

These configuration items may be stored in either location; however, they may only be stored in one location. For organizational and security purposes, it may be best to create a single configuration file, which contains only Event Reactor configuration, at /etc/salt/master.d/reactor.

The Event Reactor uses a top-level configuration item called reactor. This block contains a list of tags to be watched for, each of which also includes a list of sls files. For instance:

  reactor:
    - 'salt/minion/*/start':
      - '/srv/reactor/custom-reactor.sls'
    - 'salt/cloud/*/created':
      - '/srv/reactor/cloud-alert.sls'
    - 'salt/cloud/*/destroyed':
      - '/srv/reactor/cloud-destroy-alert.sls'

The above configuration configures reactors for three different tags: one which is fired when a minion process has started and is available to receive commands, one which is fired when a cloud instance has been created, and one which is fired when a cloud instance is destroyed.

Note that each tag contains a wildcard (*) in it. For each of these tags, this will normally refer to a minion_id. This is not required of event tags, but is very common.

Reactor SLS Files
Reactor sls files should be placed in the /srv/reactor/ directory for consistency between environments, but this is not currently enforced by Salt.

Reactor sls files follow a similar format to other sls files in Salt. By default they are written in YAML and can be templated using Jinja, but since they are processed through Salt's rendering system, any available renderer (JSON, Mako, Cheetah, etc.) can be used.

As with other sls files, each stanza will start with a declaration ID, followed by the function to run, and then any arguments for that function. For example:

  # /srv/reactor/cloud-alert.sls
  new_instance_alert:
    cmd.pagerduty.create_event:
      - tgt: alertserver
      - kwarg:
          description: "New instance: {{ data['name'] }}"
          details: "New cloud instance created on {{ data['provider'] }}"
          service_key: 1626dead5ecafe46231e968eb1be29c4
          profile: my-pagerduty-account

When the Event Reactor receives an event notifying it that a new instance has been created, this sls will create a new incident in PagerDuty, using the configured PagerDuty account.

The declaration ID in this example is new_instance_alert. The function called is cmd.pagerduty.create_event. The cmd portion of this function specifies that an execution module and function will be called, in this case, the pagerduty.create_event function.

Because an execution module is specified, a target (tgt) must be specified on which to call the function. In this case, a minion called alertserver has been used. Any arguments passed through to the function are declared in the kwarg block.

Example: Reactor-Based Highstate
When Salt Cloud creates an instance, by default it will install the Salt Minion onto the instance, along with any specified minion configuration, and automatically accept that minion's keys on the master. One of the configuration options that can be specified is startup_states, which is commonly set to highstate.
This will tell the minion to immedi‐ ately apply a highstate, as soon as it is able to do so. This can present a problem with some system images on some cloud hosts. For instance, Salt Cloud can be configured to log in as either the root user, or a user with sudo access. While some hosts commonly use images that lock out remote root access and require a user with sudo privileges to log in (notably EC2, with their ec2-user login), most cloud hosts fall back to root as the default login on all images, including for operating systems (such as Ubuntu) which normally disallow remote root login. For users of these operating systems, it is understandable that a highstate would include configuration to block remote root logins again. However, Salt Cloud may not have finished cleaning up its deployment files by the time the minion process has started, and kicked off a highstate run. Users have reported errors from Salt Cloud getting locked out while trying to clean up after itself. The goal of a startup state may be achieved using the Event Reactor. Because a minion fires an event when it is able to receive commands, this event can effectively be used inside the reactor system instead. The following will point the reactor system to the right sls file: reactor: - 'salt/cloud/*/created': - '/srv/reactor/startup_highstate.sls' And the following sls file will start a highstate run on the target minion: # /srv/reactor/startup_highstate.sls reactor_highstate: cmd.state.highstate: - tgt: {{ data['name'] }} Because this event will not be fired until Salt Cloud has cleaned up after itself, the highstate run will not step on Salt Cloud's toes. And because every file on the minion is configurable, including /etc/salt/minion, the startup_states can still be configured for future minion restarts, if desired.
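For testing or debugging, the same events that drive the Event Reactor can be watched directly on the master's event bus. The following is a minimal sketch, not part of the official examples: it assumes the master's default socket directory of /var/run/salt/master, that it runs on the master as a user allowed to read that socket, and it uses the salt.utils.event interface that ships with Salt.

    # watch_cloud_events.py -- illustrative sketch only
    import salt.utils.event

    # Connect to the master event bus (IPC sockets under the assumed sock_dir).
    event_bus = salt.utils.event.MasterEvent('/var/run/salt/master')

    while True:
        # Block until an event whose tag starts with 'salt/cloud/' arrives.
        event = event_bus.get_event(wait=30, tag='salt/cloud/', full=True)
        if event is None:
            continue  # timed out; keep waiting
        # event['tag'] is e.g. 'salt/cloud/web1/created'; event['data'] is the payload.
        print('{0}\t{1}'.format(event['tag'], event['data']))

Watching the bus this way makes it easy to see exactly which tags and payloads a reactor configuration would receive before committing an sls file to /srv/reactor/.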
NETAPI MODULES

Writing netapi modules
netapi modules, put simply, bind a port and start a service. They are purposefully
open-ended and can be used to present a variety of external interfaces to Salt, and even
present multiple interfaces at once.
SEE ALSO: The full list of netapi modules

Configuration
All netapi configuration is done in the Salt master config and takes a form similar to the
following:

    rest_cherrypy:
      port: 8000
      debug: True
      ssl_crt: /etc/pki/tls/certs/localhost.crt
      ssl_key: /etc/pki/tls/certs/localhost.key

The __virtual__ function
Like all module types in Salt, netapi modules go through Salt's loader interface to
determine if they should be loaded into memory and then executed. The __virtual__ function
in the module makes this determination and should return False or a string that will serve
as the name of the module. If the module raises an ImportError or any other errors, it
will not be loaded.

The start function
The start() function will be called for each netapi module that is loaded. This function
should contain the server loop that actually starts the service. It is started in its own
process.

Inline documentation
As with the rest of Salt, it is a best practice to include liberal inline documentation in
the form of a module docstring and docstrings on any classes, methods, and functions in
your netapi module.

Loader “magic” methods
The loader makes the __opts__ data structure available to any function in a netapi module.

Introduction to netapi modules
netapi modules provide API-centric access to Salt, usually as externally facing services
such as REST, WebSockets, XMPP, or XMLRPC. In general netapi modules bind to a port and
start a service. They are purposefully open-ended. A single module can be configured to
run, or multiple modules can run simultaneously.
netapi modules are enabled by adding configuration to your Salt Master config file and
then starting the salt-api daemon. Check the docs for each module to see external
requirements and configuration settings.
Communication with Salt and Salt satellite projects is done using Salt's own Python API. A
list of available client interfaces is below.

salt-api
Prior to Salt's 2014.7.0 release, netapi modules lived in the separate sister project,
salt-api. That project has been merged into the main Salt project.
SEE ALSO: The full list of netapi modules

Client interfaces
Salt's client interfaces expose executing functions by crafting a dictionary of values
that are mapped to function arguments. This allows calling functions simply by creating a
data structure. (And this is exactly how much of Salt's own internals work!)

class salt.netapi.NetapiClient(opts)
    Provide a uniform method of accessing the various client interfaces in Salt in the
    form of low-data data structures. For example:

        >>> client = NetapiClient(__opts__)
        >>> lowstate = {'client': 'local', 'tgt': '*', 'fun': 'test.ping', 'arg': ''}
        >>> client.run(lowstate)

    local(*args, **kwargs)
        Run execution modules synchronously
        See salt.client.LocalClient.cmd() for all available parameters.
        Sends a command from the master to the targeted minions. This is the same
        interface that Salt's own CLI uses. Note the arg and kwarg parameters are sent
        down to the minion(s) and the given function, fun, is called with those
        parameters.
        Returns: the result from the execution module

    local_async(*args, **kwargs)
        Run execution modules asynchronously
        Wraps salt.client.LocalClient.run_job().
        Returns: the job ID

    local_batch(*args, **kwargs)
        Run execution modules against batches of minions
        New in version 0.8.4.
        Wraps salt.client.LocalClient.cmd_batch()
        Returns: the result from the execution module for each batch of returns

    runner(fun, timeout=None, **kwargs)
        Run runner modules <all-salt.runners> synchronously
        Wraps salt.runner.RunnerClient.cmd_sync().
        Note that runner functions must be called using keyword arguments. Positional
        arguments are not supported.
        Returns: the result from the runner module

    runner_async(fun, **kwargs)
        Run runner modules <all-salt.runners> asynchronously
        Wraps salt.runner.RunnerClient.cmd_async().
        Note that runner functions must be called using keyword arguments. Positional
        arguments are not supported.
        Returns: event data and a job ID for the executed function.

    ssh(*args, **kwargs)
        Run salt-ssh commands synchronously
        Wraps salt.client.ssh.client.SSHClient.cmd_sync().
        Returns: the result from the salt-ssh command

    ssh_async(fun, timeout=None, **kwargs)
        Run salt-ssh commands asynchronously
        Wraps salt.client.ssh.client.SSHClient.cmd_async().
        Returns: the JID to check for results on

    wheel(fun, **kwargs)
        Run wheel modules synchronously
        Wraps salt.wheel.WheelClient.master_call().
        Note that wheel functions must be called using keyword arguments. Positional
        arguments are not supported.
        Returns: the result from the wheel module

    wheel_async(fun, **kwargs)
        Run wheel modules asynchronously
        Wraps salt.wheel.WheelClient.master_call().
        Note that wheel functions must be called using keyword arguments. Positional
        arguments are not supported.
        Returns: the result from the wheel module
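To tie the conventions above together (__virtual__, start(), a module docstring, and the loader-provided __opts__), below is a minimal, hypothetical netapi module skeleton. The module name rest_example, its port option, and the use of Python's built-in wsgiref server are illustrative assumptions, not an existing Salt module; a real module would dispatch requests through salt.netapi.NetapiClient rather than serving a static response.

    '''
    A bare-bones example netapi module (hypothetical, for illustration only).
    '''
    import logging
    from wsgiref.simple_server import make_server

    __virtualname__ = 'rest_example'

    log = logging.getLogger(__name__)


    def __virtual__():
        # __opts__ is injected by the Salt loader; only load this module if our
        # (hypothetical) config block is present in the master config.
        if __virtualname__ in __opts__:
            return __virtualname__
        return False


    def _app(environ, start_response):
        # Trivial WSGI app standing in for a real API front end.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'salt netapi example\n']


    def start():
        # Called by salt-api in its own process; must contain the server loop.
        port = __opts__[__virtualname__].get('port', 8000)
        log.info('Starting %s on port %s', __virtualname__, port)
        make_server('0.0.0.0', port, _app).serve_forever()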
SALT VIRT

The Salt Virt cloud controller capability was initially added to Salt in version 0.14.0 as
an alpha technology.
The initial Salt Virt system supports core cloud operations:
· Virtual machine deployment
· Inspection of deployed VMs
· Virtual machine migration
· Network profiling
· Automatic VM integration with all aspects of Salt
· Image Pre-seeding
Many features are currently under development to enhance the capabilities of the Salt Virt
systems.
NOTE:
   It is noteworthy that Salt was originally developed with the intent of using the Salt
   communication system as the backbone to a cloud controller. This means that the Salt
   Virt system is not an afterthought, but rather a system that took a back seat to other
   development. The original attempt to develop the cloud control aspects of Salt was a
   project called butter. This project never took off, but it was functional and proved
   the early viability of Salt as a cloud controller.
WARNING:
   Salt Virt does not work with KVM that is running in a VM. KVM must be running on the
   base hardware.

Salt Virt Tutorial
A tutorial about how to get Salt Virt up and running has been added to the tutorial
section: Cloud Controller Tutorial

The Salt Virt Runner
The point of interaction with the cloud controller is the virt runner. The virt runner
comes with routines to execute specific virtual machine tasks.
Reference documentation for the virt runner is available with the runner module
documentation: Virt Runner Reference

Based on Live State Data
The Salt Virt system is based on using Salt to query live data about hypervisors and then
using the data gathered to make decisions about cloud operations. This means that no
external resources are required to run Salt Virt, and that the information gathered about
the cloud is live and accurate.

Deploy from Network or Disk

Virtual Machine Disk Profiles
Salt Virt allows for the disks created for deployed virtual machines to be finely
configured. The configuration is a simple data structure which is read from the
config.option function, meaning that the configuration can be stored in the minion config
file, the master config file, or the minion's pillar.
This configuration option is called virt.disk. The default virt.disk data structure looks
like this:

    virt.disk:
      default:
        - system:
            size: 8192
            format: qcow2
            model: virtio

NOTE:
   The format and model do not need to be defined; Salt will default to the optimal format
   and model used by the underlying hypervisor. In the case of KVM these are qcow2 and
   virtio.
This configuration sets up a disk profile called default. The default profile creates a
single system disk on the virtual machine.

Define More Profiles
Many environments will require more complex disk profiles and may require more than one
profile; this can be easily accomplished:

    virt.disk:
      default:
        - system:
            size: 8192
      database:
        - system:
            size: 8192
        - data:
            size: 30720
      web:
        - system:
            size: 1024
        - logs:
            size: 5120

This configuration allows for one of three profiles to be selected, allowing virtual
machines to be created with storage tailored to the needs of the deployed VM.

Virtual Machine Network Profiles
Salt Virt allows for the network devices created for deployed virtual machines to be
finely configured. The configuration is a simple data structure which is read from the
config.option function, meaning that the configuration can be stored in the minion config
file, the master config file, or the minion's pillar.
This configuration option is called virt.nic.
By default the virt.nic option is not set; an implicit default is used which is equivalent
to the following data structure:

    virt.nic:
      default:
        eth0:
          bridge: br0
          model: virtio

NOTE:
   The model does not need to be defined; Salt will default to the optimal model used by
   the underlying hypervisor. In the case of KVM this model is virtio.
This configuration sets up a network profile called default. The default profile creates a
single Ethernet device on the virtual machine that is bridged to the hypervisor's br0
interface. This default setup does not require setting up the virt.nic configuration, and
is the reason why a default install only requires setting up the br0 bridge device on the
hypervisor.

Define More Profiles
Many environments will require more complex network profiles and may require more than one
profile; this can be easily accomplished:

    virt.nic:
      dual:
        eth0:
          bridge: service_br
        eth1:
          bridge: storage_br
      single:
        eth0:
          bridge: service_br
      triple:
        eth0:
          bridge: service_br
        eth1:
          bridge: storage_br
        eth2:
          bridge: dmz_br
      all:
        eth0:
          bridge: service_br
        eth1:
          bridge: storage_br
        eth2:
          bridge: dmz_br
        eth3:
          bridge: database_br
      dmz:
        eth0:
          bridge: service_br
        eth1:
          bridge: dmz_br
      database:
        eth0:
          bridge: service_br
        eth1:
          bridge: database_br

This configuration allows for one of six profiles to be selected, allowing virtual
machines to be created which attach to different networks depending on the needs of the
deployed VM.
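Because both virt.disk and virt.nic are read through config.option as plain data, it can help to see the Python structures the YAML above produces. The sketch below is purely illustrative: it hard-codes the example disk profiles from this section (a list of single-key dictionaries per profile) and sums each profile's disk sizes; the assumption that sizes are expressed in MB is mine, not stated by the document.

    # Illustrative only: mirrors the example virt.disk YAML above.
    disk_profiles = {
        'default': [{'system': {'size': 8192}}],
        'database': [{'system': {'size': 8192}}, {'data': {'size': 30720}}],
        'web': [{'system': {'size': 1024}}, {'logs': {'size': 5120}}],
    }

    def total_size(profile_name):
        # Each profile is a list of single-key dicts: {disk_name: {options}}.
        total = 0
        for disk in disk_profiles[profile_name]:
            for name, opts in disk.items():
                total += opts['size']
        return total

    for profile in sorted(disk_profiles):
        # Sizes assumed to be in MB.
        print('{0}: {1} MB total'.format(profile, total_size(profile)))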
UNDERSTANDING YAML

The default renderer for SLS files is the YAML renderer. YAML is a human-readable data
serialization language with many powerful features. However, Salt uses a small subset of
YAML that maps over very commonly used data structures, like lists and dictionaries. It is
the job of the YAML renderer to take the YAML data structure and compile it into a Python
data structure for use by Salt.
Though YAML syntax may seem daunting and terse at first, there are only three very simple
rules to remember when writing YAML for SLS files.

Rule One: Indentation
YAML uses a fixed indentation scheme to represent relationships between data layers. Salt
requires that the indentation for each level consists of exactly two spaces. Do not use
tabs.

Rule Two: Colons
Python dictionaries are, of course, simply key-value pairs. Users from other languages may
recognize this data type as hashes or associative arrays.
Dictionary keys are represented in YAML as strings terminated by a trailing colon. Values
can be represented by a string following the colon, separated by a space:

    my_key: my_value

In Python, the above maps to:

    {'my_key': 'my_value'}

Alternatively, a value can be associated with a key through indentation:

    my_key:
      my_value

NOTE:
   The above syntax is valid YAML but is uncommon in SLS files because most often, the
   value for a key is not singular but instead is a list of values.
In Python, the above maps to:

    {'my_key': 'my_value'}

Dictionaries can be nested:

    first_level_dict_key:
      second_level_dict_key: value_in_second_level_dict

And in Python:

    {
        'first_level_dict_key': {
            'second_level_dict_key': 'value_in_second_level_dict'
        }
    }

Rule Three: Dashes
To represent lists of items, a single dash followed by a space is used. Multiple items are
a part of the same list as a function of their having the same level of indentation.

    - list_value_one
    - list_value_two
    - list_value_three

Lists can be the value of a key-value pair. This is quite common in Salt:

    my_dictionary:
      - list_value_one
      - list_value_two
      - list_value_three

In Python, the above maps to:

    {'my_dictionary': ['list_value_one', 'list_value_two', 'list_value_three']}

Learning More
One easy way to learn more about how YAML gets rendered into Python data structures is to
use an online YAML parser to see the Python output. One excellent choice for experimenting
with YAML parsing is: http://yaml-online-parser.appspot.com/
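The same experiment can also be run locally with a couple of lines of Python. This sketch assumes the PyYAML package (a dependency of Salt itself); it loads a small SLS-style snippet and prints the resulting Python data structure, demonstrating all three rules above.

    # Requires PyYAML, which Salt itself depends on.
    import pprint
    import yaml

    sls_snippet = '''
    my_dictionary:
      - list_value_one
      - list_value_two
    first_level_dict_key:
      second_level_dict_key: value_in_second_level_dict
    '''

    # safe_load turns the YAML text into ordinary Python dicts, lists, and strings.
    pprint.pprint(yaml.safe_load(sls_snippet))
    # {'first_level_dict_key': {'second_level_dict_key': 'value_in_second_level_dict'},
    #  'my_dictionary': ['list_value_one', 'list_value_two']}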
MASTER TOPS SYSTEM

In 0.10.4 the external_nodes system was upgraded to allow for modular subsystems to be
used to generate the top file data for a highstate run on the master.
The old external_nodes option has been removed. The master tops system contains a number
of subsystems that are loaded via the Salt loader interfaces like modules, states,
returners, runners, etc.
Using the new master_tops option is simple:

    master_tops:
      ext_nodes: cobbler-external-nodes

for Cobbler or:

    master_tops:
      reclass:
        inventory_base_uri: /etc/reclass
        classes_uri: roles

for Reclass.
It's also possible to create custom master_tops modules. These modules must go in a
subdirectory called tops in the extension_modules directory. The extension_modules
directory is not defined by default (the default /srv/salt/_modules will NOT work as of
this release).
Custom tops modules are written like any other execution module; see the source for the
two modules above for examples of fully functional ones. Below is a degenerate example:

/etc/salt/master:

    extension_modules: /srv/salt/modules
    master_tops:
      customtop: True

/srv/salt/modules/tops/customtop.py:

    import logging
    import sys

    # Define the module's virtual name
    __virtualname__ = 'customtop'

    log = logging.getLogger(__name__)


    def __virtual__():
        return __virtualname__


    def top(**kwargs):
        log.debug('Calling top in customtop')
        return {'base': ['test']}

salt minion state.show_top should then display something like:

    $ salt minion state.show_top
    minion
        ----------
        base:
          - test
SALT SSH

Getting Started
Salt SSH is very easy to use: simply set up a basic roster file of the systems to connect
to and run salt-ssh commands in a similar way to standard salt commands.
· Salt SSH is considered production ready in version 2014.7.0
· Python is required on the remote system (unless using the -r option to send raw ssh
  commands)
· On many systems, the salt-ssh executable will be in its own package, usually named
  salt-ssh
· The Salt SSH system does not supersede the standard Salt communication systems; it
  simply offers an SSH-based alternative that does not require ZeroMQ and a remote agent.
  Be aware that since all communication with Salt SSH is executed via SSH it is
  substantially slower than standard Salt with ZeroMQ.
· At the moment fileserver operations must be wrapped to ensure that the relevant files
  are delivered with the salt-ssh commands. The state module is an exception, which
  compiles the state run on the master, and in the process finds all the references to
  salt:// paths and copies those files down in the same tarball as the state run.
  However, needed fileserver wrappers are still under development.

Salt SSH Roster
The roster system in Salt allows for remote minions to be easily defined.
NOTE:
   See the Roster documentation for more details.
Simply create the roster file; the default location is /etc/salt/roster:

    web1: 192.168.42.1

This is a very basic roster file where a Salt ID is being assigned to an IP address. A
more elaborate roster can be created:

    web1:
      host: 192.168.42.1    # The IP addr or DNS hostname
      user: fred            # Remote executions will be executed as user fred
      passwd: foobarbaz     # The password to use for login, if omitted, keys are used
      sudo: True            # Whether to sudo to root, not enabled by default
    web2:
      host: 192.168.42.2

NOTE:
   sudo works only if NOPASSWD is set for user in /etc/sudoers:

       fred ALL=(ALL) NOPASSWD: ALL

Deploy ssh key for salt-ssh
By default, salt-ssh will generate key pairs for ssh; the default path is
/etc/salt/pki/master/ssh/salt-ssh.rsa.
You can use ssh-copy-id (the OpenSSH key deployment tool) to deploy keys to your servers:

    ssh-copy-id -i /etc/salt/pki/master/ssh/salt-ssh.rsa.pub @server.demo.com

One could also create a simple shell script, named salt-ssh-copy-id.sh, as follows:

    #!/bin/bash
    if [ -z "$1" ]; then
        echo "$0 @host.com"
        exit 0
    fi
    ssh-copy-id -i /etc/salt/pki/master/ssh/salt-ssh.rsa.pub "$1"

NOTE:
   Be certain to chmod +x salt-ssh-copy-id.sh.

    ./salt-ssh-copy-id.sh @server1.host.com
    ./salt-ssh-copy-id.sh @server2.host.com

Once keys are successfully deployed, salt-ssh can be used to control them.

Calling Salt SSH
The salt-ssh command can be easily executed in the same way as a salt command:

    salt-ssh '*' test.ping

Commands with salt-ssh follow the same syntax as the salt command.
The standard salt functions are available! The output is the same as salt and many of the
same flags are available. Please see http://docs.saltstack.com/ref/cli/salt-ssh.html for
all of the available options.

Raw Shell Calls
By default salt-ssh runs Salt execution modules on the remote system, but salt-ssh can
also execute raw shell commands:

    salt-ssh '*' -r 'ifconfig'

States Via Salt SSH
The Salt State system can also be used with salt-ssh. The state system abstracts the same
interface to the user in salt-ssh as it does when using standard salt. The intent is that
Salt Formulas defined for standard salt will work seamlessly with salt-ssh and vice-versa.
The standard Salt States walkthroughs function by simply replacing salt commands with
salt-ssh.
Targeting with Salt SSH
Because the targeting approach differs in salt-ssh, only glob and regex targets are
supported as of this writing; the remaining target systems still need to be implemented.
NOTE:
   By default, grains are settable through salt-ssh. These grains will not be persisted
   across reboots.
   See the "thin_dir" setting in the Roster documentation for more details.

Configuring Salt SSH
Salt SSH takes its configuration from a master configuration file. Normally, this file is
in /etc/salt/master. If one wishes to use a customized configuration file, the -c option
to Salt SSH facilitates passing in a directory to look inside for a configuration file
named master.

Minion Config
New in version 2015.5.1.
Minion config options can be defined globally using the master configuration option
ssh_minion_opts. They can also be defined on a per-minion basis with the minion_opts entry
in the roster.

Running Salt SSH as non-root user
By default, Salt reads all of its configuration from /etc/salt/. If you are running Salt
SSH as a regular user you have to modify some paths or you will get "Permission denied"
messages. You have to modify two parameters: pki_dir and cachedir. Those should point to a
full path writable for the user.
It is recommended not to modify /etc/salt for this purpose. Create a private copy of
/etc/salt for the user and run the command with -c /new/config/path.

Define CLI Options with Saltfile
If you are commonly passing in CLI options to salt-ssh, you can create a Saltfile to
automatically use these options. This is common if you're managing several different salt
projects on the same server.
So you can cd into a directory that has a Saltfile with the following YAML contents:

    salt-ssh:
      config_dir: path/to/config/dir
      max_procs: 30
      wipe_ssh: True

Instead of having to call
salt-ssh --config-dir=path/to/config/dir --max-procs=30 --wipe \* test.ping you can call
salt-ssh \* test.ping.
Boolean-style options should be specified in their YAML representation.
NOTE:
   The option keys specified must match the destination attributes for the options
   specified in the parser salt.utils.parsers.SaltSSHOptionParser. For example, in the
   case of the --wipe command line option, its dest is configured to be wipe_ssh and thus
   this is what should be configured in the Saltfile. Using the names of flags for this
   option, such as wipe: True or w: True, will not work.

Debugging salt-ssh
One common approach for debugging salt-ssh is to simply use the tarball that salt ships to
the remote machine and call salt-call directly.
To determine the location of salt-call, simply run salt-ssh with the -l debug flag and
look for a line containing the string SALT_ARGV. This contains the salt-call command that
salt-ssh attempted to execute.
It is recommended that one modify this command a bit by removing the -l quiet, --metadata
and --output json options to get a better idea of what's going on on the target system.
SALT ROSTERS

Salt rosters are pluggable systems added in Salt 0.17.0 to facilitate the salt-ssh system.
The roster system was created because salt-ssh needs a means to identify which systems
need to be targeted for execution.
SEE ALSO: all-salt.roster
NOTE:
   The Roster System is not needed or used in standard Salt because the master does not
   need to be initially aware of target systems, since the Salt Minion checks itself into
   the master.
Since the roster system is pluggable, it can be easily augmented to attach to any existing
systems to gather information about what servers are presently available and should be
attached to by salt-ssh.
By default the roster file is located at /etc/salt/roster.

How Rosters Work
The roster system compiles a data structure internally referred to as targets. The targets
structure is a list of target systems and attributes about how to connect to them. The
only requirement for a roster module in Salt is to return the targets data structure.

Targets Data
The information which can be stored in a roster target is the following:

    <Salt ID>:        # The id to reference the target system with
      host:           # The IP address or DNS name of the remote host
      user:           # The user to log in as
      passwd:         # The password to log in with

      # Optional parameters
      port:           # The target system's ssh port number
      sudo:           # Boolean to run command via sudo
      tty:            # Boolean: Set this option to True if sudo is also set to
                      # True and requiretty is also set on the target system
      priv:           # File path to ssh private key, defaults to salt-ssh.rsa
      timeout:        # Number of seconds to wait for response when establishing
                      # an SSH connection
      minion_opts:    # Dictionary of minion opts
      thin_dir:       # The target system's storage directory for Salt
                      # components. Defaults to /tmp/salt-<hash>.
      cmd_umask:      # umask to enforce for the salt-call command. Should be in
                      # octal (so for 0o077 in YAML you would do 0077, or 63)

thin_dir
Salt needs to upload a standalone environment to the target system, and this defaults to
/tmp/salt-<hash>. This directory will be cleaned up as part of normal system operation. If
you need a persistent Salt environment, for instance to set persistent grains, this value
will need to be changed.
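Because the only contract is returning the targets data structure, a custom roster module can be very small. The following is an illustrative sketch of a hypothetical roster module serving a static pair of hosts; the module path, the host details, and the targets(tgt, tgt_type) signature (modeled on the built-in flat roster) are assumptions, not an existing module.

    # static_example.py -- hypothetical custom roster module
    import fnmatch

    # Made-up inventory; a real module might query a CMDB, a cloud API, etc.
    HOSTS = {
        'web1': {'host': '192.168.42.1', 'user': 'fred', 'sudo': True},
        'web2': {'host': '192.168.42.2', 'user': 'fred', 'sudo': True},
    }


    def targets(tgt, tgt_type='glob', **kwargs):
        # Return the subset of HOSTS whose Salt ID matches the glob target.
        # Only glob matching is handled here, mirroring salt-ssh's default.
        return {sid: data for sid, data in HOSTS.items()
                if fnmatch.fnmatch(sid, tgt)}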
REFERENCE Full list of builtin auth modules ┌──────────┬─────────────────────────────────┐ │auto │ An "Always Approved" eauth │ │ │ interface to test against, not │ │ │ intended for │ ├──────────┼─────────────────────────────────┤ │django │ Provide authentication using │ │ │ Django Web Framework │ ├──────────┼─────────────────────────────────┤ │keystone │ Provide authentication using │ │ │ OpenStack Keystone │ ├──────────┼─────────────────────────────────┤ │ldap │ Provide authentication using │ │ │ simple LDAP binds │ ├──────────┼─────────────────────────────────┤ │mysql │ Provide authentication using │ │ │ MySQL. │ ├──────────┼─────────────────────────────────┤ │pam │ Authenticate against PAM │ ├──────────┼─────────────────────────────────┤ │pki │ Authenticate via a PKI certifi‐ │ │ │ cate. │ ├──────────┼─────────────────────────────────┤ │rest │ Provide authentication using a │ │ │ REST call │ ├──────────┼─────────────────────────────────┤ │stormpath │ Provide authentication using │ │ │ Stormpath. │ ├──────────┼─────────────────────────────────┤ │yubico │ Provide authentication using │ │ │ YubiKey. │ └──────────┴─────────────────────────────────┘ salt.auth.auto An "Always Approved" eauth interface to test against, not intended for production use salt.auth.auto.auth(username, password) Authenticate! salt.auth.django Provide authentication using Django Web Framework depends · Django Web Framework Django authentication depends on the presence of the django framework in the PYTHONPATH, the Django project's settings.py file being in the PYTHONPATH and accessible via the DJANGO_SETTINGS_MODULE environment variable. Django auth can be defined like any other eauth module: external_auth: django: fred: - .* - '@runner' This will authenticate Fred via Django and allow him to run any execution module and all runners. The authorization details can optionally be located inside the Django database. The rele‐ vant entry in the models.py file would look like this: class SaltExternalAuthModel(models.Model): user_fk = models.ForeignKey(auth.User) minion_matcher = models.CharField() minion_fn = models.CharField() The external_auth clause in the master config would then look like this: external_auth: django: ^model: <fully-qualified reference to model class> When a user attempts to authenticate via Django, Salt will import the package indicated via the keyword ^model. That model must have the fields indicated above, though the model DOES NOT have to be named 'SaltExternalAuthModel'. 
salt.auth.django.auth(username, password) Simple Django auth salt.auth.django.django_auth_setup() Prepare the connection to the Django authentication framework salt.auth.django.retrieve_auth_entries(u=None) Parameters u -- Username to filter for Returns Dictionary that can be slotted into the __opts__ structure for eauth that designates the user associated ACL Database records such as: ┌───────────┬──────────────────────┬────────────────────┐ │username │ minion_or_fn_matcher │ minion_fn │ ├───────────┼──────────────────────┼────────────────────┤ │fred │ │ test.ping │ ├───────────┼──────────────────────┼────────────────────┤ │fred │ server1 │ network.interfaces │ ├───────────┼──────────────────────┼────────────────────┤ │fred │ server1 │ raid.list │ ├───────────┼──────────────────────┼────────────────────┤ │fred │ server2 │ .* │ ├───────────┼──────────────────────┼────────────────────┤ │guru │ .* │ │ ├───────────┼──────────────────────┼────────────────────┤ │smartadmin │ server1 │ .* │ └───────────┴──────────────────────┴────────────────────┘ Should result in an eauth config such as: fred: - test.ping - server1: - network.interfaces - raid.list - server2: - .* guru: - .* smartadmin: - server1: - .* salt.auth.keystone Provide authentication using OpenStack Keystone depends · keystoneclient Python module salt.auth.keystone.auth(username, password) Try and authenticate salt.auth.keystone.get_auth_url() Try and get the URL from the config, else return localhost salt.auth.ldap Provide authentication using simple LDAP binds depends · ldap Python module salt.auth.ldap.auth(username, password) Simple LDAP auth salt.auth.ldap.groups(username, **kwargs) Authenticate against an LDAP group Behavior is highly dependent on if Active Directory is in use. AD handles group membership very differently than OpenLDAP. See the External Authentication documentation for a thorough discussion of available parameters for customizing the search. OpenLDAP allows you to search for all groups in the directory and returns members of those groups. Then we check against the username entered. salt.auth.mysql Provide authentication using MySQL. When using MySQL as an authentication backend, you will need to create or use an existing table that has a username and a password column. To get started, create a simple table that holds just a username and a password. The pass‐ word field will hold a SHA256 checksum. CREATE TABLE `users` ( `id` int(11) NOT NULL AUTO_INCREMENT, `username` varchar(25) DEFAULT NULL, `password` varchar(70) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=latin1; To create a user within MySQL, execute the following statement. INSERT INTO users VALUES (NULL, 'diana', SHA2('secret', 256)) mysql_auth: hostname: localhost database: SaltStack username: root password: letmein auth_sql: 'SELECT username FROM users WHERE username = "{0}" AND password = SHA2("{1}", ↲ 256)' The auth_sql contains the SQL that will validate a user to ensure they are correctly authenticated. This is where you can specify other SQL queries to authenticate users. Enable MySQL authentication. external_auth: mysql: damian: - test.* depends · MySQL-python Python module salt.auth.mysql.auth(username, password) Authenticate using a MySQL user table salt.auth.pam Authenticate against PAM Provides an authenticate function that will allow the caller to authenticate a user against the Pluggable Authentication Modules (PAM) on the system. Implemented using ctypes, so no compilation is necessary. 
There is one extra configuration option for pam. The pam_service that is authenticated against. This defaults to login auth.pam.service: login NOTE: PAM authentication will not work for the root user. The Python interface to PAM does not support authenticating as root. class salt.auth.pam.PamConv Wrapper class for pam_conv structure appdata_ptr Structure/Union member conv Structure/Union member class salt.auth.pam.PamHandle Wrapper class for pam_handle_t handle Structure/Union member class salt.auth.pam.PamMessage Wrapper class for pam_message structure msg Structure/Union member msg_style Structure/Union member class salt.auth.pam.PamResponse Wrapper class for pam_response structure resp Structure/Union member resp_retcode Structure/Union member salt.auth.pam.auth(username, password, **kwargs) Authenticate via pam salt.auth.pam.authenticate(username, password) Returns True if the given username and password authenticate for the given service. Returns False otherwise username: the username to authenticate password: the password in plain text salt.auth.pam.groups(username, *args, **kwargs) Retrieve groups for a given user for this auth provider Uses system groups salt.auth.pki Authenticate via a PKI certificate. NOTE: This module is Experimental and should be used with caution Provides an authenticate function that will allow the caller to authenticate a user via their public cert against a pre-defined Certificate Authority. TODO: Add a 'ca_dir' option to configure a directory of CA files, a la Apache. depends · pyOpenSSL module salt.auth.pki.auth(pem, **kwargs) Returns True if the given user cert was issued by the CA. Returns False otherwise. pem: a pem-encoded user public key (certificate) Configure the CA cert in the master config file: external_auth: pki: ca_file: /etc/pki/tls/ca_certs/trusted-ca.crt salt.auth.rest module Provide authentication using a REST call Django auth can be defined like any other eauth module: external_auth: rest: ^url: https://url/for/rest/call fred: - .* - '@runner' If there are entries underneath the ^url entry then they are merged with any responses from the REST call. In the above example, assuming the REST call does not return any additional ACLs, this will authenticate Fred via a REST call and allow him to run any exe‐ cution module and all runners. The REST call should return a JSON object that maps to a regular eauth YAML structure as above. salt.auth.rest.auth(username, password) REST authentication salt.auth.rest.rest_auth_setup() salt.auth.stormpath Provide authentication using Stormpath. This driver requires some extra configuration beyond that which Stormpath normally requires. stormpath: apiid: 1234567890 apikey: 1234567890/ABCDEF # Can use an application ID application: 6789012345 # Or can use a directory ID directory: 3456789012 # But not both New in version 2015.8.0. salt.auth.stormpath.auth(username, password) Authenticate using a Stormpath directory or application salt.auth.yubico Provide authentication using YubiKey. New in version 2015.5.0. depends yubico-client Python module To get your YubiKey API key you will need to visit the website below. https://upgrade.yubico.com/getapikey/ The resulting page will show the generated Client ID (aka AuthID or API ID) and the gener‐ ated API key (Secret Key). Make a note of both and use these two values in your /etc/salt/master configuration. 
/etc/salt/master:

    yubico_users:
      damian:
        id: 12345
        key: ABCDEFGHIJKLMNOPQRSTUVWXYZ

    external_auth:
      yubico:
        damian:
          - test.*

Please wait five to ten minutes after generating the key before testing so that the API
key will be updated on all the YubiCloud servers.

salt.auth.yubico.auth(username, password)
    Authenticate against the Yubico server

Command Line Reference
Salt can be controlled by a command line client by the root user on the Salt master. The
Salt command line client uses the Salt client API to communicate with the Salt master
server. The Salt client is straightforward and simple to use.
Using the Salt client, commands can be easily sent to the minions.
Each of these commands accepts an explicit --config option to point to either the master
or minion configuration file.
If this option is not provided and the default configuration file does not exist then Salt
falls back to use the environment variables SALT_MASTER_CONFIG and SALT_MINION_CONFIG.
SEE ALSO: Configuration

Using the Salt Command
The Salt command needs a few components to send information to the Salt minions. The
target minions need to be defined, along with the function to call and any arguments the
function requires.

Defining the Target Minions
The first argument passed to salt defines the target minions; the target minions are
accessed via their hostname. The default target type is a bash glob:

    salt '*foo.com' sys.doc

Salt can also define the target minions with regular expressions:

    salt -E '.*' cmd.run 'ls -l | grep foo'

Or to explicitly list hosts, salt can take a list:

    salt -L foo.bar.baz,quo.qux cmd.run 'ps aux | grep foo'

More Powerful Targets
The simple target specifications, glob, regex, and list will cover many use cases, and for
some will cover all use cases, but more powerful options exist.

Targeting with Grains
The Grains interface was built into Salt to allow minions to be targeted by system
properties. So minions running on a particular operating system, or with a specific
kernel, can be called to execute a function.
Calling via a grain is done by passing the -G option to salt, specifying a grain and a
glob expression to match the value of the grain. The syntax for the target is the grain
key followed by a glob expression: "os:Arch*".

    salt -G 'os:Fedora' test.ping

Will return True from all of the minions running Fedora.
To discover what grains are available and what the values are, execute the grains.items
salt function:

    salt '*' grains.items

More info on using targeting with grains can be found here.

Targeting with Executions
As of 0.8.8 targeting with executions is still under heavy development and this
documentation is written to reference the behavior of execution matching in the future.
Execution matching allows for a primary function to be executed, and then based on the
return of the primary function the main function is executed.
Execution matching allows for matching minions based on any arbitrary running data on the
minions.

Compound Targeting
New in version 0.9.5.
Multiple target interfaces can be used in conjunction to determine the command targets.
These targets can then be combined using and and or statements. This is well defined with
an example:

    salt -C 'G@os:Debian and webser* or E@db.*' test.ping

In this example any minion whose id starts with webser and is running Debian, or any
minion whose id starts with db, will be matched.
The type of matcher defaults to glob, but can be specified with the corresponding letter
followed by the @ symbol.
In the above example a grain is used with G@ as well as a regular expression with E@. The
webser* target does not need to be prefaced with a target type specifier because it is a
glob.
More info on using compound targeting can be found here.

Node Group Targeting
New in version 0.9.5.
For certain cases, it can be convenient to have a predefined group of minions on which to
execute commands. This can be accomplished using what are called nodegroups. Nodegroups
allow for predefined compound targets to be declared in the master configuration file, as
a sort of shorthand for having to type out complicated compound expressions.

    nodegroups:
      group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
      group2: 'G@os:Debian and foo.domain.com'
      group3: 'G@os:Debian and N@group1'

Calling the Function
The function to call on the specified target is placed after the target specification.
New in version 0.9.8.
Functions may also accept arguments, space-delimited:

    salt '*' cmd.exec_code python 'import sys; print sys.version'

Optional keyword arguments are also supported:

    salt '*' pip.install salt timeout=5 upgrade=True

They are always in the form of kwarg=argument.
Arguments are formatted as YAML:

    salt '*' cmd.run 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'

Note: dictionaries must have curly braces around them (like the env keyword argument
above). This was changed in 0.15.1: in the above example, the first argument used to be
parsed as the dictionary {'echo "Hello': '$FIRST_NAME"'}. This was generally not the
expected behavior.
If you want to test what parameters are actually passed to a module, use the test.arg_repr
command:

    salt '*' test.arg_repr 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'

Finding available minion functions
The Salt functions are self-documenting; all of the function documentation can be
retrieved from the minions via the sys.doc() function:

    salt '*' sys.doc

Compound Command Execution
If a series of commands needs to be sent to a single target specification then the
commands can be sent in a single publish. This can make gathering groups of information
faster, and lowers the stress on the network for repeated commands.
Compound command execution works by sending a list of functions and arguments instead of
sending a single function and argument. The functions are executed on the minion in the
order they are defined on the command line, and then the data from all of the commands is
returned in a dictionary. This means that the set of commands is called in a predictable
way, and the returned data can be easily interpreted.
Executing compound commands is done by passing a comma-delimited list of functions,
followed by a comma-delimited list of arguments:

    salt '*' cmd.run,test.ping,test.echo 'cat /proc/cpuinfo',,foo

The trick to look out for here is that if a function is being passed no arguments, then
there needs to be a placeholder for the absent arguments. This is why in the above
example, there are two commas right next to each other. test.ping takes no arguments, so
we need to add another comma; otherwise Salt would attempt to pass "foo" to test.ping.
If you need to pass arguments that include commas, then make sure you add spaces around
the commas that separate arguments.
For example: salt '*' cmd.run,test.ping,test.echo 'echo "1,2,3"' , , foo You may change the arguments separator using the --args-separator option: salt --args-separator=:: '*' some.fun,test.echo params with , comma :: foo CLI Completion Shell completion scripts for the Salt CLI are available in the pkg Salt source directory. salt-call salt-call Synopsis salt-call [options] Description The salt-call command is used to run module functions locally on a minion instead of exe‐ cuting them from the master. Salt-call is used to run a Standalone Minion, and was origi‐ nally created for troubleshooting. The Salt Master is contacted to retrieve state files and other resources during execution unless the --local option is specified. NOTE: salt-call commands execute from the current user's shell context, while salt commands execute from the system's default context. Options --version Print the version of Salt that is running. --versions-report Show program's dependencies and version number, and then exit -h, --help Show the help message and exit -c CONFIG_DIR, --config-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the con‐ figuration files for Salt master and minions. The default location on most systems is /etc/salt. --hard-crash Raise any original exception rather than exiting gracefully Default: False -g, --grains Return the information generated by the Salt grains -m MODULE_DIRS, --module-dirs=MODULE_DIRS Specify an additional directory to pull modules from. Multiple directories can be provided by passing -m /--module-dirs multiple times. -d, --doc, --documentation Return the documentation for the specified module or for all modules if none are specified --master=MASTER Specify the master to use. The minion must be authenticated with the master. If this option is omitted, the master options from the minion config will be used. If multi masters are set up the first listed master that responds will be used. --return RETURNER Set salt-call to pass the return data to one or many returner interfaces. To use many returner interfaces specify a comma delimited list of returners. --local Run salt-call locally, as if there was no master running. --file-root=FILE_ROOT Set this directory as the base file root. --pillar-root=PILLAR_ROOT Set this directory as the base pillar root. --retcode-passthrough Exit with the salt call retcode and not the salt binary retcode --metadata Print out the execution metadata as well as the return. This will print out the outputter data, the return code, etc. --id=ID Specify the minion id to use. If this option is omitted, the id option from the minion config will be used. --skip-grains Do not load grains. --refresh-grains-cache Force a refresh of the grains cache Logging Options Logging options which override any settings defined on the configuration files. -l LOG_LEVEL, --log-level=LOG_LEVEL Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: info. --log-file=LOG_FILE Log file path. Default: /var/log/salt/minion. --log-file-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: info. Output Options --out Pass in an alternative outputter to display the return of data. 
This outputter can be any of the available outputters: grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data. If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module. NOTE: If using --out=json, you will probably want --static as well. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well. --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation. --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE Write the output to the specified file. --no-color Disable all colored output --force-color Force colored output NOTE: When using colored output the color codes are as follows: green denotes success, red denotes failure, blue denotes changes and success and yellow denotes a expected future change in configuration. See also salt(1) salt-master(1) salt-minion(1) salt salt Synopsis salt '*' [ options ] sys.doc salt -E '.*' [ options ] sys.doc cmd salt -G 'os:Arch.*' [ options ] test.ping salt -C 'G@os:Arch.* and webserv* or G@kernel:FreeBSD' [ options ] test.ping Description Salt allows for commands to be executed across a swath of remote systems in parallel. This means that remote systems can be both controlled and queried with ease. Options --version Print the version of Salt that is running. --versions-report Show program's dependencies and version number, and then exit -h, --help Show the help message and exit -c CONFIG_DIR, --config-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the con‐ figuration files for Salt master and minions. The default location on most systems is /etc/salt. -t TIMEOUT, --timeout=TIMEOUT The timeout in seconds to wait for replies from the Salt minions. The timeout num‐ ber specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5 -s, --static By default as of version 0.9.8 the salt command returns data to the console as it is received from minions, but previous releases would return data only after all data was received. Use the static option to only return the data with a hard time‐ out and after all minions have returned. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. --async Instead of waiting for the job to run on minions only print the job id of the started execution and complete. --state-output=STATE_OUTPUT New in version 0.17. Override the configured state_output value for minion output. One of full, terse, mixed, changes or filter. Default: full. --subset=SUBSET Execute the routine on a random subset of the targeted minions. The minions will be verified that they have the named function before executing. -v VERBOSE, --verbose Turn on verbosity for the salt call, this will cause the salt command to print out extra data like the job id. --hide-timeout Instead of showing the return data for all minions. This option prints only the online minions which could be reached. 
-b BATCH, --batch-size=BATCH Instead of executing on all targeted minions at once, execute on a progressive set of minions. This option takes an argument in the form of an explicit number of min‐ ions to execute at once, or a percentage of minions to execute on. -a EAUTH, --auth=EAUTH Pass in an external authentication medium to validate against. The credentials will be prompted for. The options are auto, keystone, ldap, pam, and stormpath. Can be used with the -T option. -T, --make-token Used in conjunction with the -a option. This creates a token that allows for the authenticated user to send commands without needing to re-authenticate. --return=RETURNER Choose an alternative returner to call on the minion, if an alternative returner is used then the return will not come back to the command line but will be sent to the specified return system. The options are carbon, cassandra, couchbase, couchdb, elasticsearch, etcd, hipchat, local, local_cache, memcache, mongo, mysql, odbc, postgres, redis, sentry, slack, sms, smtp, sqlite3, syslog, and xmpp. -d, --doc, --documentation Return the documentation for the module functions available on the minions --args-separator=ARGS_SEPARATOR Set the special argument used as a delimiter between command arguments of compound commands. This is useful when one wants to pass commas as arguments to some of the commands in a compound command. Logging Options Logging options which override any settings defined on the configuration files. -l LOG_LEVEL, --log-level=LOG_LEVEL Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning. --log-file=LOG_FILE Log file path. Default: /var/log/salt/master. --log-file-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning. Target Selection -E, --pcre The target expression will be interpreted as a PCRE regular expression rather than a shell glob. -L, --list The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux -G, --grain The target expression matches values returned by the Salt grains system on the min‐ ions. The target expression is in the format of '<grain value>:<glob expression>'; example: 'os:Arch*' This was changed in version 0.9.8 to accept glob expressions instead of regular expression. To use regular expression matching with grains, use the --grain-pcre option. --grain-pcre The target expression matches values returned by the Salt grains system on the min‐ ions. The target expression is in the format of '<grain value>:< regular expres‐ sion>'; example: 'os:Arch.*' -N, --nodegroup Use a predefined compound target defined in the Salt master configuration file. -R, --range Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster. Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file. -C, --compound Utilize many target definitions to make the call very granular. This option takes a group of targets separated by and or or. The default matcher is a glob as usual. If something other than a glob is used, preface it with the letter denoting the type; example: 'webserv* and G@os:Debian or E@db*' Make sure that the compound target is encapsulated in quotes. -I, --pillar Instead of using shell globs to evaluate the target, use a pillar value to identify targets. 
The syntax for the target is the pillar key followed by a glob expression: "role:production*" -S, --ipcidr Match based on Subnet (CIDR notation) or IPv4 address. Output Options --out Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters: grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data. If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module. NOTE: If using --out=json, you will probably want --static as well. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well. --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation. --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE Write the output to the specified file. --no-color Disable all colored output --force-color Force colored output NOTE: When using colored output the color codes are as follows: green denotes success, red denotes failure, blue denotes changes and success and yellow denotes a expected future change in configuration. See also salt(7) salt-master(1) salt-minion(1) salt-cloud salt-cp salt-cp Copy a file to a set of systems Synopsis salt-cp '*' [ options ] SOURCE DEST salt-cp -E '.*' [ options ] SOURCE DEST salt-cp -G 'os:Arch.*' [ options ] SOURCE DEST Description Salt copy copies a local file out to all of the Salt minions matched by the given target. Note: salt-cp uses salt's publishing mechanism. This means the privacy of the contents of the file on the wire is completely dependent upon the transport in use. In addition, if the salt-master is running with debug logging it is possible that the contents of the file will be logged to disk. Options --version Print the version of Salt that is running. --versions-report Show program's dependencies and version number, and then exit -h, --help Show the help message and exit -c CONFIG_DIR, --config-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the con‐ figuration files for Salt master and minions. The default location on most systems is /etc/salt. -t TIMEOUT, --timeout=TIMEOUT The timeout in seconds to wait for replies from the Salt minions. The timeout num‐ ber specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5 Logging Options Logging options which override any settings defined on the configuration files. -l LOG_LEVEL, --log-level=LOG_LEVEL Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning. --log-file=LOG_FILE Log file path. Default: /var/log/salt/master. --log-file-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning. Target Selection -E, --pcre The target expression will be interpreted as a PCRE regular expression rather than a shell glob. 
-L, --list The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux -G, --grain The target expression matches values returned by the Salt grains system on the min‐ ions. The target expression is in the format of '<grain value>:<glob expression>'; example: 'os:Arch*' This was changed in version 0.9.8 to accept glob expressions instead of regular expression. To use regular expression matching with grains, use the --grain-pcre option. --grain-pcre The target expression matches values returned by the Salt grains system on the min‐ ions. The target expression is in the format of '<grain value>:< regular expres‐ sion>'; example: 'os:Arch.*' -N, --nodegroup Use a predefined compound target defined in the Salt master configuration file. -R, --range Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster. Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file. See also salt(1) salt-master(1) salt-minion(1) salt-key salt-key Synopsis salt-key [ options ] Description Salt-key executes simple management of Salt server public keys used for authentication. On initial connection, a Salt minion sends its public key to the Salt master. This key must be accepted using the salt-key command on the Salt master. Salt minion keys can be in one of the following states: · unaccepted: key is waiting to be accepted. · accepted: key was accepted and the minion can communicate with the Salt master. · rejected: key was rejected using the salt-key command. In this state the minion does not receive any communication from the Salt master. · denied: key was rejected automatically by the Salt master. This occurs when a minion has a duplicate ID, or when a minion was rebuilt or had new keys generated and the pre‐ vious key was not deleted from the Salt master. In this state the minion does not receive any communication from the Salt master. To change the state of a minion key, use -d to delete the key and then accept or reject the key. Options --version Print the version of Salt that is running. --versions-report Show program's dependencies and version number, and then exit -h, --help Show the help message and exit -c CONFIG_DIR, --config-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the con‐ figuration files for Salt master and minions. The default location on most systems is /etc/salt. -u USER, --user=USER Specify user to run salt-key --hard-crash Raise any original exception rather than exiting gracefully. Default is False. -q, --quiet Suppress output -y, --yes Answer 'Yes' to all questions presented, defaults to False --rotate-aes-key=ROTATE_AES_KEY Setting this to False prevents the master from refreshing the key session when keys are deleted or rejected, this lowers the security of the key deletion/rejection operation. Default is True. Logging Options Logging options which override any settings defined on the configuration files. --log-file=LOG_FILE Log file path. Default: /var/log/salt/minion. --log-file-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning. Output Options --out Pass in an alternative outputter to display the return of data. 
This outputter can be any of the available outputters: grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data. If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module. NOTE: If using --out=json, you will probably want --static as well. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well. --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation. --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE Write the output to the specif