Use Terraform and SaltStack for Provisioning and Configuration Management

Introduction

This article provides a high-level overview of how to provision and configure a complete ProfitBricks virtual data center (VDC) using Terraform and SaltStack as the provisioning and configuration management tools.

Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Configuration files describe to Terraform the components needed to run a single application or an entire datacenter. Because the configuration is written in a declarative style, Terraform generates an execution plan describing what it will do to reach the desired state and then executes it to build the described infrastructure. As the configuration changes, Terraform determines what has changed and creates incremental execution plans that can be applied.
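
In practice, the typical Terraform workflow looks roughly like this (a minimal sketch; depending on the Terraform version in use, terraform init may be required first to download the referenced providers):

# download the providers referenced in the config (newer Terraform versions)
terraform init

# show the execution plan without changing anything
terraform plan

# build or change the infrastructure to match the configuration
terraform apply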

SaltStack

SaltStack, or simply Salt, focuses on the configuration management of existing servers and is capable of maintaining remote nodes in defined states (for example, ensuring that specific packages are installed and specific services are running). It also includes a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria. Salt uses a master/minion topology: a master server acts as a central control bus for the clients, called minions, and the minions connect back to the master. While SaltStack primarily focuses on configuration management, it also includes a module for provisioning virtual machines in the cloud, called Salt Cloud.
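
As a small illustration of the remote execution system, a few example commands are shown below (a sketch; they assume a running master with accepted minion keys, as set up later in this article):

# check connectivity to all minions
salt '*' test.ping

# target minions by name pattern and query grains data
salt 'web*' grains.item os

# apply the configured states (highstate) to all minions
salt '*' state.highstate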

Showcase architecture

The goals of this showcase are:

  • Fully automated provisioning of a virtual data center, including necessary management systems (e.g. bastion system, Salt Master, firewall)
  • Utilize Terraform for provisioning of initial management systems and all further virtual machines (e.g. web servers and databases)
  • Utilize Salt for the configuration management of all provisioned virtual machines

While the two tools certainly overlap in some areas and one could achieve similar results using only one of them, this article aims to highlight the strengths of each tool. As the example used in this tutorial is a rather basic one, the provisioning part could also be implemented using Salt Cloud. (ProfitBricks offers a plugin for this component as well.) However, for a more complex scenario, including multiple isolated networks or multiple volumes per server, Salt Cloud would no longer be the right instrument. In addition, as these tools mature and new functionality is added, we remain prepared to adopt new features rather than limiting ourselves to the elementary provisioning functionality provided by Salt Cloud.

Since the infrastructure is defined in configuration files using a domain-specific language, this approach is referred to as “Infrastructure as Code” (IaC). Some benefits of using IaC together with a central configuration management system are:

  • Implement disaster recovery quickly: code can easily be duplicated, so a 100% identical second (disaster recovery) site can be created with only a one-line code change
  • Recreate the state of the infrastructure any number of times by simply re-running the deployment scripts with the same inputs. This allows you to easily manage multiple similar platforms, e.g. testing, staging, and production, with minimal configuration effort
  • Bring your infrastructure under revision control: this allows you to build a truly continuous delivery pipeline and apply agile development methods to your infrastructure as well. Additionally, by reverting to the desired commit and re-running the deployment scripts, you can restore the state of the infrastructure as it was on any given day
  • Configure and manage your infrastructure on your local machine or in a central repository and test any changes before pushing them to your production platform

In order to demonstrate the strengths of each tool, the following architecture is implemented using only Terraform and Salt:

DCD layout created with Terraform and Salt

The following design principles are applied:

  • Web servers are not directly connected to the internet but are provisioned in a DMZ LAN
  • Public access to web servers is controlled by a firewall
  • Web and database servers are also separated at the network layer
  • A dedicated bastion (jump) server is used to access the Salt Master
  • Management systems (e.g. Salt Master) are not directly connected to the internet but are provisioned inside a dedicated management LAN
  • Configuration of all systems is not performed over the public LAN but over a dedicated management LAN

All machines are deployed inside one ProfitBricks virtual data center. The virtual data center does not need to be created in advance; it is part of the Terraform config and will be created automatically. This means that in order to implement this example, all you need are your ProfitBricks DCD login credentials.

All virtual machines are based on the latest CentOS image offered by ProfitBricks. The only exception is the firewall, for which I decided to use the Untangle NG Firewall (https://www.untangle.com/get-untangle/). A snapshot of an existing Untangle Firewall was created and used in this showcase. The configuration of the Untangle NG Firewall is not managed by Salt but is done manually.

Install and configure Terraform

Terraform is installed and running on my local machine, and all config files are written and saved locally. Optionally, all files may be managed by a version control system such as git in order to have revision control, and saved in a central repository in order to allow collaboration. The ProfitBricks plugin for Terraform is installed automatically as soon as the provider “profitbricks” is referenced in the Terraform config file. For a detailed description, please see https://www.terraform.io/docs/providers/profitbricks/.

In order to maintain a good overview of all systems in the virtual data center, I decided to create a separate Terraform config file for each system or resource. This is not mandatory, as all resources could also be written in one large config file. The following sections describe all resources and systems managed by Terraform.
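
The resulting file layout on my local machine looks roughly like this (a sketch; apart from variables.tf, vdc.tf and the salt/ directory, the file names are assumptions):

variables.tf     # input variables (credentials, SSH keys)
vdc.tf           # provider and virtual data center
networks.tf      # public, DMZ, management and data LANs
firewall.tf      # Untangle NG Firewall
bastion.tf       # bastion/jump server
saltmaster.tf    # Salt Master provisioning and configuration
web.tf           # web servers
db.tf            # database servers
salt/srv/salt/   # Salt states, pushed to the Salt Master on demand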

Variables

Terraform supports the concept of variables, stored in a separate variables.tf file. They serve as input variables for the other config files and parameterize the configurations.

The following variables.tf is being used:

variable "pb_user" {
  description = "Username for basic authentication of API"
  default     = "benjamin.schmidt@profitbricks.com"
}

variable "pb_password" {
  description = "Password for basic authentication of API"
  default     = "<password>"
}

variable "console_password" {
  description = "Password for root user via console"
  default     = "<password>"
}

variable "ssh_public_keys" {
  description = "List of SSH keys to be added to the VMs"
  default     = [".ssh/id_rsa.pub",]
}

variable "ssh_private_key" {
  description = "Private SSH key to connect to VMs"
  default     = ".ssh/id_rsa"
}

Virtual Data Center

The provider “profitbricks” is first referenced in the virtual data center config file vdc.tf:

provider "profitbricks" {
 username = "${var.pb_user}"
 password = "${var.pb_password}"
}

///////////////////////////////////////////////////////////
// Virtual Data Center
///////////////////////////////////////////////////////////

resource "profitbricks_datacenter" "dev-01" {
 name        = "dev-01"
 location    = "de/fra"
 description = "VDC managed by Terraform - do not edit manually"
}

Here you can see the use of variables for the username and password parameters. The chosen location of the datacenter is Frankfurt, Germany, but you are free to use any of the other locations as well (“de/fkb”, “us/las”, “us/ewr”).

Networks

Next is the definition of all networks:

///////////////////////////////////////////////////////////
// Public LAN
///////////////////////////////////////////////////////////

resource "profitbricks_lan" "public_lan" {
  datacenter_id = "${profitbricks_datacenter.dev-01.id}"
  public        = true
  name          = "publicLAN"
}

///////////////////////////////////////////////////////////
// DMZ LAN
///////////////////////////////////////////////////////////

resource "profitbricks_lan" "dmz_lan" {
  datacenter_id = "${profitbricks_datacenter.dev-01.id}"
  public        = false
  name          = "dmzLAN"
}

///////////////////////////////////////////////////////////
// Management  LAN
///////////////////////////////////////////////////////////

resource "profitbricks_lan" "mgm_lan" {
  datacenter_id = "${profitbricks_datacenter.dev-01.id}"
  public        = false
  name          = "mgmLAN"
}

///////////////////////////////////////////////////////////
// Data LAN
///////////////////////////////////////////////////////////

resource "profitbricks_lan" "data_lan" {
  datacenter_id = "${profitbricks_datacenter.dev-01.id}"
  public        = false
  name          = "dataLAN"
}
  • The public LAN is the only network facing the internet.
  • The DMZ LAN (demilitarized zone) is where all servers are deployed that need to be accessed from the internet. In this setup, these are the web servers.
  • The data LAN is the network where any backend systems such as application or database servers are deployed. In this setup, these are the database servers.
  • The management LAN interconnects all systems; it is the network over which the Salt configuration is performed.

Firewall

The first virtual machine to be built is the firewall. As mentioned above, I decided to use an Untangle NG Firewall version 14, of which I created a snapshot prior to provisioning the server via Terraform. The snapshot is referenced by its UUID.

///////////////////////////////////////////////////////////
// Firewall
///////////////////////////////////////////////////////////

resource "profitbricks_server" "fw-01" {
  name              = "fw-01"
  datacenter_id     = "${profitbricks_datacenter.dev-01.id}"
  cores             = 2
  ram               = 2048
  cpu_family        = "AMD_OPTERON"
  availability_zone = "ZONE_1"

  volume {
    name              = "fw-01-system"
    image_name        = "3a0656d6-4d83-4d8d-acd1-62ec7b779b4b"
    size              = 10
    disk_type         = "HDD"
   availability_zone = "AUTO"
  }

  nic {
    name = "public"
    lan  = "${profitbricks_lan.public_lan.id}"
    dhcp = true
  }
}

///////////////////////////////////////////////////////////
// DMZ NIC fw-01
///////////////////////////////////////////////////////////

resource "profitbricks_nic" "fw-01_dmz_nic" {
  datacenter_id = "${profitbricks_datacenter.dev-01.id}"
  server_id     = "${profitbricks_server.fw-01.id}"
  lan           = "${profitbricks_lan.dmz_lan.id}"
  name          = "dmzNIC"
  dhcp          = true
}

///////////////////////////////////////////////////////////
// Data NIC fw-01
///////////////////////////////////////////////////////////

resource "profitbricks_nic" "fw-01_data_nic" {
  datacenter_id = "${profitbricks_datacenter.dev-01.id}"
  server_id     = "${profitbricks_server.fw-01.id}"
  lan           = "${profitbricks_lan.data_lan.id}"
  name          = "dataNIC"
  dhcp          = true
}

///////////////////////////////////////////////////////////
// Management NIC fw-01
///////////////////////////////////////////////////////////

resource "profitbricks_nic" "fw-01_mgm_nic" {
  datacenter_id = "${profitbricks_datacenter.dev-01.id}"
  server_id     = "${profitbricks_server.fw-01.id}"
  lan           = "${profitbricks_lan.mgm_lan.id}"
  name          = "mgmNIC"
  dhcp          = true
}

The firewall setup consists of four resources: the firewall virtual machine itself (containing one volume and one in-line NIC) plus three additional NICs. Since a server resource can only contain one “in-line” NIC resource, any additional NIC must be configured as a separate resource.

This provisions and boots the Untangle firewall. Configuration of the firewall needs to be done manually, as it cannot be managed by Salt. The firewall launches a setup wizard that needs to be accessed via the remote console immediately after booting. Make sure to map the correct networks to the corresponding NICs (i.e. compare the MAC addresses in the DCD and in Untangle).

Bastion Server

The second system to be built is the bastion server:

///////////////////////////////////////////////////////////
// Bastion server
///////////////////////////////////////////////////////////

resource "profitbricks_server" "bastion" {
  name              = "bastion"
  datacenter_id     = "${profitbricks_datacenter.dev-01.id}"
  cores             = 1
  ram               = 1024
  cpu_family        = "AMD_OPTERON"
  availability_zone = "ZONE_1"

  volume {
    name              = "bastion-system"
    image_name        = "centos:latest"
    size              = 5
    disk_type         = "HDD"
    availability_zone = "AUTO"
    ssh_key_path      = ["${var.ssh_public_keys}"]
    image_password    = "${var.console_password}"
  }

  nic {
    name = "public"
    lan  = "${profitbricks_lan.public_lan.id}"
    dhcp = true
  }

  provisioner "remote-exec" {
    inline = [
      "echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf",
      "sysctl -p",
      "firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o eth0 -j MASQUERADE",
      "firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -i eth1 -j ACCEPT",
      "firewall-cmd --reload",
    ]

    connection {
      private_key = "${file(var.ssh_private_key)}"
      host        = "${profitbricks_server.bastion.primary_ip}"
      user        = "root"
    }
  }
}

///////////////////////////////////////////////////////////
// Private NIC bastion server
///////////////////////////////////////////////////////////

resource "profitbricks_nic" "bastion_mgm_nic" {
  datacenter_id = "${profitbricks_datacenter.dev-01.id}"
  server_id     = "${profitbricks_server.bastion.id}"
  lan           = "${profitbricks_lan.mgm_lan.id}"
  name          = "mgmNIC"
  dhcp          = true
}

This is the first time a Terraform “provisioner” is used. Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction. They can be used to bootstrap a resource, clean up before destroy, run configuration management, and so on.

The bastion server must act as a gateway, so we need to enable IP forwarding, allow the local firewall to forward this traffic, and allow NAT for outgoing traffic. In order to configure the bastion server accordingly, we use the “remote-exec” provisioner, which invokes one or more commands or scripts on the remote machine.

The remote-exec provisioner needs SSH access to the remote resource. This is defined in the “connection” section.

Salt Master

After the bastion server is provisioned and configured to act as a gateway, the Salt Master can be provisioned:

//////////////////////////////////////////
// Salt Master
//////////////////////////////////////////

resource "profitbricks_server" "saltmaster" {
  depends_on        = ["profitbricks_nic.bastion_mgm_nic"]
  name              = "master"
  datacenter_id     = "${profitbricks_datacenter.dev-01.id}"
  cores             = 1
  ram               = 1024
  availability_zone = "ZONE_1"
  cpu_family        = "AMD_OPTERON"
  licence_type      = "LINUX"

  volume = [
    {
      name           = "salt-system"
      image_name     = "centos:latest"
      size           = 20
      disk_type      = "HDD"
      ssh_key_path   = ["${var.ssh_public_keys}"]
      image_password = "${var.console_password}"
      licence_type   = "LINUX"
    },
  ]

  nic = [
    {
      lan  = "${profitbricks_lan.mgm_lan.id}"
      dhcp = true
      name = "mgmNIC"
    },
  ]

  connection {
    private_key         = "${file(var.ssh_private_key)}"
    bastion_host        = "${profitbricks_server.bastion.primary_ip}"
    bastion_user        = "root"
    bastion_private_key = "${file(var.ssh_private_key)}"
    timeout             = "4m"
  }
}

///////////////////////////////////////////////////////////
// Salt master config
///////////////////////////////////////////////////////////
resource "null_resource" "saltmaster_config" {
  depends_on = ["profitbricks_server.saltmaster"]

  triggers = {
    saltmasterid = "${profitbricks_server.saltmaster.id}"
  }

  connection {
    private_key         = "${file(var.ssh_private_key)}"
    host                = "${profitbricks_server.saltmaster.primary_ip}"
    bastion_host        = "${profitbricks_server.bastion.primary_ip}"
    bastion_user        = "root"
    bastion_private_key = "${file(var.ssh_private_key)}"
    timeout             = "4m"
  }

  # add salt master to hosts file
  provisioner "local-exec" {
    command = "grep -q '${profitbricks_server.saltmaster.name}' salt/srv/salt/common/hosts && sed -i '' 's/^${profitbricks_server.saltmaster.primary_ip}.*/${profitbricks_server.saltmaster.primary_ip} ${profitbricks_server.saltmaster.name}/' salt/srv/salt/common/hosts || echo '${profitbricks_server.saltmaster.primary_ip} ${profitbricks_server.saltmaster.name}' >> salt/srv/salt/common/hosts"
  }

  # make the magic happen on salt master
  provisioner "remote-exec" {
    inline = [
      "echo master > /etc/hostname",
      "hostnamectl set-hostname master --static",
      "hostnamectl set-hostname master --pretty",
      "hostnamectl set-hostname master --transient",
      "echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf",
      "sysctl -p",

      "ip route add default via ${profitbricks_nic.bastion_mgm_nic.ips.0}",

      "echo 'supersede routers ${profitbricks_nic.bastion_mgm_nic.ips.0};' >> /etc/dhcp/dhclient.conf",
      "echo nameserver 8.8.8.8 > /etc/resolv.conf",
      "firewall-cmd --permanent --add-port=4505-4506/tcp",
      "firewall-cmd --reload",
      "echo -e  'y\n'| ssh-keygen -b 2048 -t rsa -P '' -f /root/.ssh/id_rsa -q",
      "wget -O /tmp/bootstrap-salt.sh https://bootstrap.saltstack.com",
      "sh /tmp/bootstrap-salt.sh -M -L -X -A master",
      "mkdir -p /etc/salt/pki/master/minions",
      "salt-key --gen-keys=minion --gen-keys-dir=/etc/salt/pki/minion",
      "cp /etc/salt/pki/minion/minion.pub /etc/salt/pki/master/minions/master",
      "mkdir /srv/salt",

      "systemctl start salt-master",
      "systemctl start salt-minion",
      "systemctl enable salt-master",
      "systemctl enable salt-minion",
      "sleep 10",
      "salt '*' test.ping",
    ]
  }
}

Since the Salt Master is not directly accessible from the internet but only via the bastion server, the resource must be provisioned after the bastion server has access to the management LAN. This dependency is not automatically visible to Terraform, therefore an explicit dependency must be created. This is accomplished by using the Terraform “depends_on” parameter.

We must also tell Terraform to connect to the Salt Master via the bastion server. This is again done in the “connection” section.

The above example introduces a “null_resource”. The null resource is not tied to any particular piece of infrastructure, so it can be executed or re-provisioned independently of any other resource. This is useful for the basic configuration of the remote machine, since the machine configuration can be performed independently of the machine provisioning. However, the correct execution order must be made known to Terraform, hence the “depends_on” parameter. Terraform also needs to know when to recreate the null resource; this is defined by the “triggers” parameter.

The Salt Master and all Minions must be able to resolve each other's hostnames. The easiest way to do this is to add all IP addresses to /etc/hosts and bring this file under the management of Salt. The Salt configuration path lives on my local machine and is copied to the Salt Master via a dedicated Terraform resource whenever a config change is required. So in order to update /etc/hosts on all systems, the local file salt/srv/salt/common/hosts must be updated. This is accomplished by using the “local-exec” provisioner, which invokes one or more commands or scripts on the local Terraform machine. In this example, the local-exec provisioner adds or updates the IP address and hostname of the Salt Master management NIC in the file salt/srv/salt/common/hosts. This file is part of the Salt configuration path and is therefore managed by Salt.

The actual installation of Salt is performed using the Salt Bootstrap script, which allows a user to install the Salt Minion or Master on a variety of distributions and versions. This shell script, known as bootstrap-salt.sh, runs through a series of checks to determine the operating system type and version and then installs the Salt binaries using the appropriate method. Called with the -M option as above, the script installs both the Master and the Minion binaries.

Following that, the salt-key for the Minion is created and the key is added to the Master. From this point onward, the Master itself is also a Minion and is therefore self-managed.

Salt Configuration

After the Salt Master is installed, the Salt configuration (grains, pillars, etc.) could be done directly on the Master and the highstate executed directly on that server. However, in order to be able to quickly replicate the complete infrastructure to another virtual data center (thereby implementing a disaster recovery procedure), I decided to host all Salt files on my local machine (at salt/srv/salt/*) and deploy them whenever necessary. That means the Salt Master always contains the current state of all systems, whereas my local machine contains the development state. Deploying all Salt files to the Salt Master is again performed using Terraform.
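
On my local machine, the Salt configuration path looks roughly like this (a sketch derived from the top file and the states shown later in this article; the exact file and directory names are assumptions):

salt/srv/salt/
  top.sls
  common/
    vim.sls
    ntp.sls
    saltminion.sls
    hosts.sls
    hosts                    # the managed /etc/hosts file
    sshkeys.sls
    sshkeys/
      id_rsa.pub
    firewall.sls
  saltmaster.sls
  webserver/
    webserver.sls
    index.html
  database/
    postgresql.sls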

//////////////////////////////////////////
// Salt Config
//////////////////////////////////////////
resource "null_resource" "salt" {
  depends_on = ["null_resource.saltmaster_config"]

  connection {
    private_key         = "${file(var.ssh_private_key)}"
    host                = "${profitbricks_server.saltmaster.primary_ip}"
    bastion_host        = "${profitbricks_server.bastion.primary_ip}"
    bastion_user        = "root"
    bastion_private_key = "${file(var.ssh_private_key)}"
    timeout             = "4m"
  }

  provisioner "local-exec" {
    command = "tar cvfz salt/srv_salt.tar.gz -C salt/srv/salt ."
  }

  provisioner "file" {
    source      = "salt/srv_salt.tar.gz"
    destination = "/srv/srv_salt.tar.gz"
  }

  provisioner "remote-exec" {
    inline = [
      "mkdir /srv/salt_bak",
      "cp -R /srv/salt/* /srv/salt_bak || echo 'cannot perform backup'",
      "rm -r /srv/salt/*",
      "tar oxvfz /srv/srv_salt.tar.gz -C /srv/salt/",
     "salt '*' state.highstate",
    ]
  }
}

The local-exec provisioner creates a tarball of my local Salt path, which is then uploaded to the Salt Master via the “file” provisioner. Next, a backup of the current config is created on the Salt Master and the tarball is extracted. Finally, the Salt highstate is called, thereby applying the Salt config to all Minions.

This resource should be run whenever a config change on any server is required. This can be accomplished by tainting the Salt resource (terraform taint null_resource.salt), followed by a terraform apply.
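
For example (a sketch; run from the directory containing the Terraform config):

# mark the Salt config resource for recreation
terraform taint null_resource.salt

# re-upload the Salt tree and re-run the highstate
terraform apply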

Web Server

Once the Salt Master is fully provisioned and configured, additional servers can be added to the virtual data center anytime. The process is always the same:

  1. Add and provision the server via Terraform
  2. Register the Salt Minion on the Master
  3. Update the hosts file
  4. Optionally, run the Salt highstate in order to configure the server and install all required software packages

Example:

//////////////////////////////////////////
// Webserver
//////////////////////////////////////////

resource "profitbricks_server" "web" {
 depends_on        = ["null_resource.saltmaster_config","null_resource.salt"]
 count             = 2
 name              = "${format("web-%02d", count.index +1)}"
 datacenter_id     = "${profitbricks_datacenter.dev-01.id}"
 cores             = 1
 ram               = 1024
 availability_zone = "AUTO"
 cpu_family        = "AMD_OPTERON"
 licence_type      = "LINUX"

 volume = [
   {
     name           = "${format("web-%02d", count.index +1)}-system"
     image_name     = "centos:latest"
     size           = 20
     disk_type      = "HDD"
     ssh_key_path   = ["${var.ssh_public_keys}"]
     image_password = "${var.console_password}"
     licence_type   = "LINUX"
   },
 ]

 nic = [
   {
     lan  = "${profitbricks_lan.mgm_lan.id}"
     dhcp = true
     name = "mgmNIC"
   },
 ]

 connection {
   private_key         = "${file(var.ssh_private_key)}"
   bastion_host        = "${profitbricks_server.bastion.primary_ip}"
   bastion_user        = "root"
   bastion_private_key = "${file(var.ssh_private_key)}"
   timeout             = "4m"
 }
}

///////////////////////////////////////////////////////////
// DMZ NIC Webserver
///////////////////////////////////////////////////////////

resource "profitbricks_nic" "web_dmz_nic" {
 count         = "${profitbricks_server.web.count}"
 datacenter_id = "${profitbricks_datacenter.dev-01.id}"
 server_id     = "${element(profitbricks_server.web.*.id, count.index)}"
 lan           = "${profitbricks_lan.dmz_lan.id}"
 name          = "dmzNIC"
 dhcp          = true
}

///////////////////////////////////////////////////////////
// Webserver config
///////////////////////////////////////////////////////////
resource "null_resource" "web_config" {
 depends_on = ["profitbricks_nic.web_dmz_nic"]
 count      = "${profitbricks_server.web.count}"

 connection {
   private_key         = "${file(var.ssh_private_key)}"
   host                = "${element(profitbricks_server.web.*.primary_ip, count.index)}"
   bastion_host        = "${profitbricks_server.bastion.primary_ip}"
   bastion_user        = "root"
   bastion_private_key = "${file(var.ssh_private_key)}"
   timeout             = "4m"
 }

 # copy etc/hosts file to web server
 provisioner "file" {
   source      = "salt/srv/salt/common/hosts"
   destination = "/etc/hosts"
 }

 # make the magic happen on web server
 provisioner "remote-exec" {
   inline = [

     "echo ${format("web-%02d", count.index +1)} > /etc/hostname",
     "hostnamectl set-hostname ${format("web-%02d", count.index +1)} --static",
     "hostnamectl set-hostname ${format("web-%02d", count.index +1)} --pretty",
     "hostnamectl set-hostname ${format("web-%02d", count.index +1)} --transient",
     "ip route add default via ${profitbricks_nic.fw-01_dmz_nic.ips.0}",
     "echo 'supersede routers ${profitbricks_nic.fw-01_dmz_nic.ips.0};' >> /etc/dhcp/dhclient.conf",
     "echo nameserver 8.8.8.8 > /etc/resolv.conf",

     "echo -e  'y\n'| ssh-keygen -b 2048 -t rsa -P '' -f /root/.ssh/id_rsa -q",

     "wget -O /tmp/bootstrap-salt.sh https://bootstrap.saltstack.com",
     "sh /tmp/bootstrap-salt.sh -L -X -A ${profitbricks_server.saltmaster.primary_ip}",
     "echo '${format("web-%02d", count.index +1)}' > /etc/salt/minion_id",
     "systemctl restart salt-minion",
     "systemctl enable salt-minion",
   ]
 }
 # Accept minion key on master
 provisioner "remote-exec" {
   inline = [
     "salt-key -y -a ${element(profitbricks_server.web.*.name, count.index)}",
   ]

   connection {
     private_key         = "${file(var.ssh_private_key)}"
     host                = "${profitbricks_server.saltmaster.primary_ip}"
     bastion_host        = "${profitbricks_server.bastion.primary_ip}"
     bastion_user        = "root"
     bastion_private_key = "${file(var.ssh_private_key)}"
     timeout             = "4m"
   }
 }
 # Add or update web server host name to local hosts file
 provisioner "local-exec" {
   command = "grep -q '${element(profitbricks_server.web.*.name, count.index)}' salt/srv/salt/common/hosts && sed -i '' 's/^${element(profitbricks_server.web.*.primary_ip, count.index)}.*/${element(profitbricks_server.web.*.primary_ip, count.index)} ${element(profitbricks_server.web.*.name, count.index)}/' salt/srv/salt/common/hosts || echo '${element(profitbricks_server.web.*.primary_ip, count.index)} ${element(profitbricks_server.web.*.name, count.index)}' >> salt/srv/salt/common/hosts"
 }
 # delete minion key on master when destroying
 provisioner "remote-exec" {
   when = "destroy"

   inline = [
     "salt-key -y -d '${element(profitbricks_server.web.*.name, count.index)}*'",
   ]

   connection {
     private_key         = "${file(var.ssh_private_key)}"
     host                = "${profitbricks_server.saltmaster.primary_ip}"
     bastion_host        = "${profitbricks_server.bastion.primary_ip}"
     bastion_user        = "root"
     bastion_private_key = "${file(var.ssh_private_key)}"
     timeout             = "4m"
   }
 }

 # delete host from local hosts file when destroying
 provisioner "local-exec" {
   when    = "destroy"
   command = "sed -i '' '/${element(profitbricks_server.web.*.name, count.index)}/d' salt/srv/salt/common/hosts"
 }
}

Multiple identical systems can be provisioned by using the “count” parameter. As Terraform uses a declarative approach, increasing or decreasing the count parameter will provision or destroy systems accordingly.
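
To make scaling a one-line change, the count could also be driven by a variable (a sketch; the variable web_count is an assumption and not part of the config shown above):

variable "web_count" {
  description = "Number of web servers to provision"
  default     = 2
}

resource "profitbricks_server" "web" {
  count = "${var.web_count}"
  # ... remaining arguments as shown above
}

Running terraform apply after changing web_count then only adds or removes the difference.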

Here, for the first time, the “destroy” mode of a provisioner is used. Such a provisioner is executed when the resource is destroyed. In this example, when a web server is destroyed:

  1. the Salt key is removed on the Master
  2. the host name is removed from the hosts file

This guarantees that the Salt Master always has an up-to-date view of all active Minions and that the /etc/hosts files on all systems are kept current. Remember to taint and apply the Salt resource afterwards, thereby reapplying the Salt highstate.
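
After provisioning, the result can be verified on the Salt Master (a sketch; the commands are run on the master, reached via SSH through the bastion server):

# list all accepted minion keys (master, web-01, web-02, ...)
salt-key -L

# check that all minions respond
salt '*' test.ping

# apply the configuration to the web servers only
salt 'web*' state.highstate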

Database Server

Finally, all database servers are installed.

//////////////////////////////////////////
// Database
//////////////////////////////////////////

resource "profitbricks_server" "db" {
 depends_on        = ["null_resource.saltmaster_config","null_resource.salt"]
 count             = 2
 name              = "${format("db-%02d", count.index +1)}"
 datacenter_id     = "${profitbricks_datacenter.dev-01.id}"
 cores             = 1
 ram               = 1024
 availability_zone = "AUTO"
 cpu_family        = "AMD_OPTERON"
 licence_type      = "LINUX"

 volume = [
   {
     name           = "${format("db-%02d", count.index +1)}-system"
     image_name     = "centos:latest"
     size           = 20
     disk_type      = "HDD"
     ssh_key_path   = ["${var.ssh_public_keys}"]
     image_password = "${var.console_password}"
     licence_type   = "LINUX"
   },
 ]

 nic = [
   {
     lan  = "${profitbricks_lan.mgm_lan.id}"
     dhcp = true
     name = "mgmNIC"
   },
 ]

 connection {
   private_key         = "${file(var.ssh_private_key)}"
   bastion_host        = "${profitbricks_server.bastion.primary_ip}"
   bastion_user        = "root"
   bastion_private_key = "${file(var.ssh_private_key)}"
   timeout             = "4m"
 }
}

///////////////////////////////////////////////////////////
// Data NIC Database
///////////////////////////////////////////////////////////

resource "profitbricks_nic" "db_data_nic" {
 count         = "${profitbricks_server.db.count}"
 datacenter_id = "${profitbricks_datacenter.dev-01.id}"
 server_id     = "${element(profitbricks_server.db.*.id, count.index)}"
 lan           = "${profitbricks_lan.data_lan.id}"
 name          = "dataNIC"
 dhcp          = true
}

///////////////////////////////////////////////////////////
// Database config
///////////////////////////////////////////////////////////
resource "null_resource" "db_config" {
 depends_on = ["profitbricks_nic.db_data_nic"]
 count      = "${profitbricks_server.db.count}"


 connection {
   private_key         = "${file(var.ssh_private_key)}"
   host                = "${element(profitbricks_server.db.*.primary_ip, count.index)}"
   bastion_host        = "${profitbricks_server.bastion.primary_ip}"
   bastion_user        = "root"
   bastion_private_key = "${file(var.ssh_private_key)}"
   timeout             = "4m"
 }

 # copy etc/hosts file to database server
 provisioner "file" {
   source      = "salt/srv/salt/common/hosts"
   destination = "/etc/hosts"
 }

 # make the magic happen on database server
 provisioner "remote-exec" {
   inline = [
     "echo ${format("db-%02d", count.index +1)} > /etc/hostname",
     "hostnamectl set-hostname ${format("db-%02d", count.index +1)} --static",
     "hostnamectl set-hostname ${format("db-%02d", count.index +1)} --pretty",
     "hostnamectl set-hostname ${format("db-%02d", count.index +1)} --transient",
     "ip route add default via ${profitbricks_nic.fw-01_data_nic.ips.0}",
     "echo 'supersede routers ${profitbricks_nic.fw-01_data_nic.ips.0};' >> /etc/dhcp/dhclient.conf",
     "echo nameserver 8.8.8.8 > /etc/resolv.conf",

     "echo -e  'y\n'| ssh-keygen -b 2048 -t rsa -P '' -f /root/.ssh/id_rsa -q",

     "wget -O /tmp/bootstrap-salt.sh https://bootstrap.saltstack.com",
     "sh /tmp/bootstrap-salt.sh -L -X -A ${profitbricks_server.saltmaster.primary_ip}",
     "echo '${format("db-%02d", count.index +1)}' > /etc/salt/minion_id",
     "systemctl restart salt-minion",
     "systemctl enable salt-minion",
   ]
 }
 # Accept minion key on master
 provisioner "remote-exec" {
   inline = [
     "salt-key -y -a ${element(profitbricks_server.db.*.name, count.index)}",
   ]

   connection {
     private_key         = "${file(var.ssh_private_key)}"
     host                = "${profitbricks_server.saltmaster.primary_ip}"
     bastion_host        = "${profitbricks_server.bastion.primary_ip}"
     bastion_user        = "root"
     bastion_private_key = "${file(var.ssh_private_key)}"
     timeout             = "4m"
   }
 }
 # Add or update database hostname to local hosts file
 provisioner "local-exec" {
   command = "grep -q '${element(profitbricks_server.db.*.name, count.index)}' salt/srv/salt/common/hosts && sed -i '' 's/^${element(profitbricks_server.db.*.primary_ip, count.index)}.*/${element(profitbricks_server.db.*.primary_ip, count.index)} ${element(profitbricks_server.db.*.name, count.index)}/' salt/srv/salt/common/hosts || echo '${element(profitbricks_server.db.*.primary_ip, count.index)} ${element(profitbricks_server.db.*.name, count.index)}' >> salt/srv/salt/common/hosts"
 }
 # delete minion key on master when destroying
 provisioner "remote-exec" {
   when = "destroy"

   inline = [
     "salt-key -y -d '${element(profitbricks_server.db.*.name, count.index)}*'",
   ]

   connection {
     private_key         = "${file(var.ssh_private_key)}"
     host                = "${profitbricks_server.saltmaster.primary_ip}"
     bastion_host        = "${profitbricks_server.bastion.primary_ip}"
     bastion_user        = "root"
     bastion_private_key = "${file(var.ssh_private_key)}"
     timeout             = "4m"
   }
 }

 # delete host from local hosts file when destroying
 provisioner "local-exec" {
   when    = "destroy"
   command = "sed -i '' '/${element(profitbricks_server.db.*.name, count.index)}/d' salt/srv/salt/common/hosts"
 }
}

This is basically the same Terraform config as used for the web servers, except that the database servers are deployed into a separate network.

This completes the Terraform configuration files; the whole infrastructure can now be provisioned with the command terraform apply.

Configure SaltStack files

In the previous section, the two main SaltStack components, the Salt Master and all Minions, were installed and configured inside the virtual data center. The Salt configuration is kept on my local machine and pushed to the Salt Master on demand.

It is not in the scope of this article to explain Salt or to go into detail regarding the configuration of the Salt State files (*.sls). For any questions, feel free to leave a comment below the article and I will get back to you as soon as possible.

For reference, the following are most of the Salt State files used.

Top file

Most infrastructures are made up of groups of machines, each machine in the group performing a role similar to others. Those groups of machines work in concert with each other to create an application stack.

To effectively manage those groups of machines, an administrator needs to be able to create roles for those groups. For example, a group of machines that serve front-end web traffic might have roles which indicate that those machines should all have the Apache web server package installed and that the Apache service should always be running. In Salt, the file which contains a mapping between groups of machines on a network and the configuration roles that should be applied to them is called a top file.

base:
  '*':
    - common.vim
    - common.ntp
    - common.saltminion
    - common.hosts
    - common.sshkeys
    - common.firewall

  'master*':
    - saltmaster

  'web*':
    - webserver.webserver

  'db*':
    - database.postgresql

Common files

Vim

{% set pkg = 'vim-enhanced' if grains['os_family'] == 'RedHat' else 'vim' %}

vim:
  pkg:
    - name: {{ pkg }}
    - installed

NTP

{% set name = 'ntpd' if grains['os_family'] == 'RedHat' else 'ntp' %}

ntp:
  pkg:
    - installed

  service.running:
    - name: {{ name }}
    - enable: True
    - reload: True
    - require:
      - pkg: ntp

Salt Minion

salt-minion:
  pkg:
    - installed

  service.running:
    - name: salt-minion
    - enable: True
    - require:
      - pkg: salt-minion

Hosts file

/etc/hosts:
  file.managed:
    - source: salt://common/hosts
    - skip_verify: true
    - user: root
    - group: root

SSH keys

sshkeys:
  ssh_auth.present:
    - user: root
    - enc: ssh-rsa
    - source: salt://common/sshkeys/id_rsa.pub

Firewall

firewall:
  pkg:
    - name: firewalld 
    - installed

  service.running:
    - name: firewalld
    - enable: True
    - require:
      - pkg: firewall

  firewalld.present:
    - name: public
    - ports:
      - 22/tcp

Salt Master

salt-master:
  pkg:
    - installed

  service.running:
    - name: salt-master
    - enable: True
    - require:
      - pkg: salt-master

  firewalld.present:
    - name: public
    - ports:
      - 4505/tcp
      - 4506/tcp
    - require:
      - pkg: firewall

Web Server

{% set  pkg= 'httpd' if grains['os_family'] == 'RedHat' else 'apache2' %}

webserver:
  pkg:
    - name: {{ pkg }} 
    - installed

  service.running:
    - name: {{ pkg }}
    - enable: True
    - require:
      - pkg: webserver

  file.managed:
    - name: /var/www/html/index.html
    - source: salt://webserver/index.html
    - require:
      - pkg: webserver

  firewalld.present:
    - name: public
    - ports:
      - 80/tcp
      - 443/tcp
    - require:
      - pkg: firewall

Database Server

postgres:
  pkg:
    - name: postgresql-server
    - installed
  postgres_user.present: 
    - name: Postgresql
    - user: postgres
    - groups: postgres
  postgres_group.present:
    - name: Postgresql

  service.running:
    - name: postgresql
    - enable: True
    - require:
      - pkg: postgres
      - pg-initdb

  firewalld.present:
    - name: public
    - ports:
      - 5432/tcp
    - require:
      - pkg: firewall

pg-initdb:
    cmd.run:
      - name: postgresql-setup initdb
      - unless: ls /var/lib/pgsql/data/base

Conclusion

This showcase demonstrates how a complete virtual data center, including a firewall and several web and database servers, can be provisioned fully automatically using Terraform and brought under configuration management using SaltStack. Building up the whole VDC from scratch takes roughly 30 minutes, usually an acceptable time for the disaster recovery of an entire datacenter.

Both tools work well in combination, allowing us to utilize the strengths of each. The community around the Terraform ProfitBricks plugin is very active; bug fixes and new features are committed regularly to the project and the documentation is complete.

The use of Infrastructure as Code is definitely recommended for any organization deploying more than a couple of servers. Being able to redeploy any server within minutes, roll back infrastructure and configuration changes to any revision, and have a complete disaster recovery procedure in place without additional effort definitely outweighs the initial effort put into developing the code.

 