Tutorials

Use the ProfitBricks Cloud API with Python Part 3

ProfitBricks not only offers an easy-to-use GUI – the Data Center Designer (DCD) – we also provide an API that is as capable as the DCD – the Cloud API (formerly named REST API). Via this API you can easily automate tasks such as resource management or the provisioning of entire data centers.

Part 1 of this series explained how to retrieve information about a virtual data center and its components. Part 2 explained how to create a simple data center and manage the VM therein.

This article combines parts of its predecessors and shows how to

  • create complex data centers from scratch

  • clone an existing data center using snapshots

As with the previous parts, this article is accompanied by sample scripts, which are available in ProfitBricks' GitHub repository.

Create a complex data center

In part 2 we saw how to create a data center both step by step and all at once. However, both approaches were hard-coded in the source, which is not very flexible. We can avoid this shortcoming by using an input file that contains the data center definition together with a more generalized script that executes the API calls in the right order.

Input file

The natural choice for the file format is the structured JSON format that is already used by the Cloud API. A minimal data center definition file would then look like this:

{
  "properties": {
    "location": "de/fkb",
    "description": "Test environment",
    "name": "API-Test_2016-09-01T11:12:09.579955"
  }
}
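Such a definition file can be read with Python's standard json module. A minimal sketch (the definition is inlined here instead of being read from a file):

```python
import json

# A minimal data center definition as it would appear in an input file.
minimal = '''
{
  "properties": {
    "location": "de/fkb",
    "description": "Test environment",
    "name": "API-Test_2016-09-01T11:12:09.579955"
  }
}
'''

dcdef = json.loads(minimal)
print(dcdef['properties']['name'])      # name of the data center to create
print(dcdef['properties']['location'])  # target location
```

In the sample scripts the definition is of course read from a file with json.load().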

This file contains all information that is needed for creating an empty data center. A more complex data center definition with two servers, a public and a private LAN is shown here:

{
  "properties": {
    "location": "de/fkb", "name": "API-Demo_Blog3"
  },
  "entities": {
    "lans": {
      "items": [
        {
          "properties": { "public": "true", "name": "public Lan 1" }
        },
        {
          "properties": { "public": "false", "name": "private Lan 2" }
        }
      ]
    },
    "servers": {
      "items": [
        {
          "properties": { "ram": 4096, "cores": 2, "name": "Firewall" },
          "entities": {
            "volumes": {
              "items": [
                {
                  "properties": {
                    "type": "HDD", "image": "d1f418b7-6ff3-11e6-bfbf-52540005ab80",
                    "size": 4, "bus": "VIRTIO", "name": "Firewall boot",
                    "imagePassword": "1q2w3e4r5tXXX"
                  }
                }
              ]
            },
            "nics": {
              "items": [
                {
                  "properties": { "lan": 1, "dhcp": "true", "name": "pu_fw" }
                },
                {
                  "properties": { "lan": 2, "dhcp": "true", "name": "pr_fw" }
                }
              ]
            }
          }
        },
        {
          "properties": { "ram": 4096, "cores": 2, "name": "App1" },
          "entities": {
            "volumes": {
              "items": [
                {
                  "properties": {
                    "type": "HDD", "image": "d1f418b7-6ff3-11e6-bfbf-52540005ab80",
                    "size": 4, "bus": "VIRTIO", "name": "App1 boot",
                    "imagePassword": "1q2w3e4r5tXXX"
                  }
                },
                {
                  "properties": {
                    "type": "HDD", "image": null, "licenseType": "OTHER",
                    "size": 10, "bus": "VIRTIO", "name": "App1 Data",
                    "imagePassword": null
                  }
                }
              ]
            },
            "nics": {
              "items": [
                {
                  "properties": { "lan": 2, "dhcp": "true", "name": "app1_in" }
                }
              ]
            }
          }
        }
      ]
    }
  }
}
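Before feeding such a definition to the script, it can be handy to sanity-check it. The following summarize() helper is hypothetical (it is not part of the sample scripts); it just walks the nested entities lists and counts the resources:

```python
def summarize(dcdef):
    """Count the top-level resources in a data center definition."""
    entities = dcdef.get('entities', {})
    lans = entities.get('lans', {}).get('items', [])
    servers = entities.get('servers', {}).get('items', [])
    volumes = sum(len(s.get('entities', {}).get('volumes', {}).get('items', []))
                  for s in servers)
    nics = sum(len(s.get('entities', {}).get('nics', {}).get('items', []))
               for s in servers)
    return {'lans': len(lans), 'servers': len(servers),
            'volumes': volumes, 'nics': nics}

# Reduced stand-in for the definition shown above: 2 LANs, 2 servers,
# server 1 with 1 volume and 2 NICs, server 2 with 2 volumes and 1 NIC.
dcdef = {
    "entities": {
        "lans": {"items": [{}, {}]},
        "servers": {"items": [
            {"entities": {"volumes": {"items": [{}]},
                          "nics": {"items": [{}, {}]}}},
            {"entities": {"volumes": {"items": [{}, {}]},
                          "nics": {"items": [{}]}}},
        ]}
    }
}
print(summarize(dcdef))  # {'lans': 2, 'servers': 2, 'volumes': 3, 'nics': 3}
```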

The resulting data center API-Demo_Blog3 would look like this:

[Screenshot: the created data center API-Demo_Blog3 in the DCD]

This data center is created with the sample script pb_createDatacenter.py, which can be found on GitHub. In the following we take a closer look at this script to see how the data center is created.

Data center creation procedure

First of all, we cannot create the data center all at once with the given input file. As we saw in part 2 of this series, that would require a Datacenter object containing all other resources. Building such an object is not really a problem, but as we will see shortly, it is better to build the data center in separate steps.

The general procedure is to

  1. create the empty data center,

  2. create the volumes unattached,

  3. create the servers,

  4. create the LANs,

  5. create the NICs for the servers,

  6. attach the volumes to the servers.

Using this approach, we gain an additional benefit: a simple checkpoint-restart mechanism. After each step a temporary file is written, which can itself be used as an input file. In these files we add some custom entries in which we save the UUIDs of already created resources. For example, we add the following custom entry at the top data center level after the empty data center is created:

{
  "properties": {
    "name": "API-Demo_Blog3", "location": "de/fkb"
  },
  "custom": {
    "id": "e5752283-3425-49c8-9eb6-5dfa9bb6bfa7"
  },
  # more data omitted for display #
}

On a subsequent run the script recognizes the saved UUID of the created data center and reuses it, so the data center does not have to be created again.
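The restart check boils down to looking for a saved UUID in the 'custom' entry. The helper below is a hypothetical sketch of the idea (the real script implements its own bookkeeping):

```python
def saved_id(resource):
    """Return the UUID saved in a resource's 'custom' entry,
    or None if the resource has not been created yet."""
    return resource.get('custom', {}).get('id')

dcdef = {"properties": {"name": "API-Demo_Blog3", "location": "de/fkb"},
         "custom": {"id": "e5752283-3425-49c8-9eb6-5dfa9bb6bfa7"}}

if saved_id(dcdef) is None:
    print("data center must be created")
else:
    print("reusing data center", saved_id(dcdef))
```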

The rationale behind creating the data center in separate steps lies in three characteristics of ProfitBricks’ provisioning process.

The first and most important is that it is not explicitly guaranteed that network interfaces are added to a server in the order they are specified when you create everything at once. This means you cannot be absolutely sure that your first specified NIC is the first interface in your VM's operating system (although it usually is). As a consequence, we need to create the NICs explicitly to have full control over the order in which they show up in the operating system.

What is confusing at this point is that the DCD displays the NICs in a totally different order. In fact, it turned out that the NICs seem to be ordered by their MAC address in the graphical representation. This, however, does not change the order in the VM itself. When creating interfaces via the DCD, there seems to be a mechanism that always assigns MAC addresses in ascending order.

The second characteristic already appeared in part 2 of this series: when a NIC is created, the assigned LAN is implicitly created if it does not exist at that point. This results in an additional update for public LANs, since a LAN is created as private by default. While this might not be a problem for completely new data centers, keep in mind that this script is also meant for cloning a 'living' data center, which may try to use the network right from the start.

The third characteristic is the automatic start of newly created servers. If we create the servers with their disks attached, they boot normally. But since we still have to adjust the network, this would most certainly result in lots of error messages at the operating system level for cloned servers. To avoid this, we separate disk creation and attachment from server creation.

With that said, let's come back to the creation procedure of pb_createDatacenter.py:

First we create the empty data center and wait for it to finish:

dc = getDatacenterObject(dcdef)
response = pbclient.create_datacenter(dc)
dc_id = response['id']
if 'custom' not in dcdef:
    dcdef['custom'] = dict()
dcdef['custom']['id'] = dc_id
request = pbclient.get_request(response['requestId'])
result = wait_for_request(pbclient, response['requestId'])

The method getDatacenterObject() builds a data center object from the input file’s JSON formatted definition. Similar methods are implemented for other objects. As you can see, we add the UUID of the created data center to our definition. The definition is then stored in a temporary file.

In the second step we loop over all servers and create all of the servers' volumes in the data center, but do not attach them to the servers yet. Server-less volumes are not supported by the script. The only reason to do this step first is to get the most time-consuming work started early.

In addition to the wait_for_request() method listed above, the script also implements a list-based method for multiple requests. Since the volumes are independent of each other at this point, there is no need to wait for each single request, and we can save some time this way.

Again we remember the UUIDs and write a temporary file.
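The idea of waiting for a whole list of requests at once can be sketched as follows. This is an illustration, not the sample script's implementation; get_status stands in for a call that fetches a request's status from the API:

```python
import time

def wait_for_requests(get_status, request_ids, interval=1.0, timeout=300):
    """Poll all requests until each one reports DONE or FAILED.
    get_status is a callable returning the status string for a request id."""
    pending = set(request_ids)
    deadline = time.time() + timeout
    results = {}
    while pending and time.time() < deadline:
        for rid in list(pending):
            status = get_status(rid)
            if status in ('DONE', 'FAILED'):
                results[rid] = status
                pending.discard(rid)
        if pending:
            time.sleep(interval)
    return results

# Stubbed status lookup for illustration: everything is already DONE.
statuses = {'req-1': 'DONE', 'req-2': 'DONE'}
print(wait_for_requests(statuses.get, ['req-1', 'req-2']))
```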

In the third step the servers are created the same way as the volumes are.

Likewise, the fourth step creates all LANs, including their correct public or private setting. Note that via the API we can create a LAN without NICs, which you can't do in the DCD.

In step five we come to the creation of the servers’ NICs, which is performed step-by-step:

  • create a NIC

  • wait for each separate request to finish, so we can get the MAC address

  • print a list of one server's NIC IDs sorted by the MAC addresses for convenience

The sixth and last step is to attach the volumes to the servers. While the servers were stuck in an unsuccessful boot loop before, they can now access their boot disks and start up normally.

The script pb_createDatacenter.py described here has two useful options to override the data center name and the image password from the input file:

('-D', dest='dcname', help='new datacenter name')
('-P', dest='imgpassword', help='the image password')
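The option strings above match the script; the parser setup around them shown here is illustrative. With argparse this might look like:

```python
import argparse

# Illustrative parser wiring for the two override options.
parser = argparse.ArgumentParser()
parser.add_argument('-D', dest='dcname', help='new datacenter name')
parser.add_argument('-P', dest='imgpassword', help='the image password')

# parse an example command line instead of sys.argv
args = parser.parse_args(['-D', 'My-Clone', '-P', 'secret'])
print(args.dcname, args.imgpassword)
```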

Note that load balancers are not supported by the script.

Clone a data center using snapshots

A simple approach to cloning an existing data center is to take snapshots of all volumes, read out the resources' properties, and create a new data center from this information.

The script pb_snapshotDatacenter.py, which you can find on GitHub, implements this approach. It performs the following steps:

  1. power off all servers

  2. read out the complete data center in depth

  3. create a snapshot of each server’s volumes

  4. replace the volume’s image UUID with the UUID of the snapshot

  5. write the changed data center definition to a file that can be used by the script pb_createDatacenter.py

The script pb_snapshotDatacenter.py makes use of the following arguments:

('-d', dest='dc_id', help='datacenter ID of the server')
('-o', dest='outfile', help='the output file name')
('-S', dest='stopalways', help='power off even when VM is running')

Although you can take snapshots of a running system, you should always shut the servers down first to be sure that cached or in-memory data of your applications is written to disk.

So the first step simply reads all servers of the specified data center and checks whether the operating system is shut down. If stopalways is False and there are running systems, the script terminates; otherwise all systems are powered off:

srv_info = getServerInfo(pbclient, dc_id)
srvon = 0
for server in srv_info:
    if server['vmstate'] != 'SHUTOFF':
        print("VM {} is in state {}, but should be SHUTOFF"
              .format(server['name'], server['vmstate']))
        srvon += 1
# end for(srv_info)
if srvon > 0 and not args.stopalways:
    print("shutdown running OS before trying again")
    return 1
# now power off all VMs before starting the snapshots
for server in srv_info:
    controlServerState(pbclient, dc_id, server['id'], action='POWEROFF')

The second step is to get the complete data center in depth via the method get_datacenter(dc_id, 5) and write it in structured JSON format to a file. The file name is the specified output file name with '_source.json' appended. With this file you have a complete dump of your source data center for reference.

In the third step we create a snapshot of each volume that is attached to a server using create_snapshot(). Unattached volumes are ignored by this script. As the snapshot name we use the UUID of the volume, so you can more easily identify the volume contained in the snapshot. Additionally, the UUID of each snapshot is saved in the dictionary vol_snapshots, keyed by the volume UUIDs.
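The bookkeeping looks roughly like this. The snapshot creation is stubbed out here; the real script calls create_snapshot() on the Cloud API client, and the names and IDs are only examples:

```python
# Example volume list as it might be read from a data center dump.
volumes = [
    {'id': 'vol-1', 'properties': {'name': 'Firewall boot'}},
    {'id': 'vol-2', 'properties': {'name': 'App1 boot'}},
]

def create_snapshot_stub(volume_id, name):
    """Stand-in for the real API call; returns a fake snapshot object."""
    return {'id': 'snap-of-' + volume_id}

vol_snapshots = {}
for vol in volumes:
    # the volume's UUID doubles as the snapshot name for easy identification
    snap = create_snapshot_stub(vol['id'], name=vol['id'])
    vol_snapshots[vol['id']] = snap['id']

print(vol_snapshots)
```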

In contrast to many other create methods, create_snapshot() does not return a request ID. So we have to check whether the snapshot's state is 'AVAILABLE':

snapshot = pbclient.get_snapshot(snap_id)
if snapshot['metadata']['state'] == 'AVAILABLE':
    # snapshot is done
    snapdone[snap_id] = snapshot['metadata']['state']
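A complete polling loop built around that check might look like the sketch below. The client is stubbed for illustration (here the snapshot becomes AVAILABLE on the third poll); the intermediate 'BUSY' state and the interval are assumptions, not taken from the script:

```python
import time

class StubClient:
    """Stand-in for the ProfitBricks client."""
    def __init__(self):
        self.calls = 0

    def get_snapshot(self, snap_id):
        self.calls += 1
        state = 'AVAILABLE' if self.calls >= 3 else 'BUSY'
        return {'metadata': {'state': state}}

def wait_for_snapshots(client, snap_ids, interval=0.01, timeout=10):
    """Poll each snapshot until all report AVAILABLE or the timeout hits."""
    snapdone = {}
    deadline = time.time() + timeout
    while len(snapdone) < len(snap_ids) and time.time() < deadline:
        for snap_id in snap_ids:
            if snap_id in snapdone:
                continue
            snapshot = client.get_snapshot(snap_id)
            if snapshot['metadata']['state'] == 'AVAILABLE':
                snapdone[snap_id] = 'AVAILABLE'
        time.sleep(interval)
    return snapdone

print(wait_for_snapshots(StubClient(), ['snap-1']))
```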

Once all snapshots are done, we loop over all volumes again. With the information saved in vol_snapshots, we can now replace the volume's image UUID with the snapshot UUID in our data center definition.
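The replacement itself is a straightforward walk over the definition. The dcdef and vol_snapshots below are small inline stand-ins with example IDs:

```python
vol_snapshots = {'vol-1': 'snap-1'}
dcdef = {'entities': {'servers': {'items': [
    {'entities': {'volumes': {'items': [
        {'id': 'vol-1', 'properties': {'image': 'old-image-uuid'}}
    ]}}}
]}}}

# swap each attached volume's image UUID for the UUID of its snapshot
for server in dcdef['entities']['servers']['items']:
    for vol in server['entities']['volumes']['items']:
        if vol['id'] in vol_snapshots:
            vol['properties']['image'] = vol_snapshots[vol['id']]

print(dcdef['entities']['servers']['items'][0]
      ['entities']['volumes']['items'][0]['properties']['image'])  # snap-1
```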

In addition to the image UUID, some more changes must be made to get an exact clone. The data center definition contains lists for LANs, and for the NICs and volumes attached to each server. But these lists are not sorted the way they appear in the DCD. Thus we have to make sure that we put LANs, NICs, and volumes in order to get a correct clone later on.

For LANs this is easy, because they have an ID:

lans = dcdef['entities']['lans']['items']
new_order = sorted(lans, key=lambda lan: lan['id'])
dcdef['entities']['lans']['items'] = new_order

For volumes it is nearly the same, because they have a deviceNumber:

volumes = server['entities']['volumes']['items']
new_order = sorted(volumes, key=lambda vol:
                   vol['properties']['deviceNumber'])
server['entities']['volumes']['items'] = new_order

For NICs it is more complicated, because they have no well-defined ordering criterion. But as we saw before, the DCD orders them by MAC address, which we can use instead:

nics = server['entities']['nics']['items']
new_order = sorted(nics, key=lambda nic: nic['properties']['mac'])
server['entities']['nics']['items'] = new_order

The changed definition is then saved to the specified output file with the extension '.json'.

You can now check if the saved data center definition is as required and even change some values.

The last step of cloning is then performed by pb_createDatacenter.py with the newly created file as input.

Caveats

  • This simple data center cloning approach using snapshots only works if the clone is to reside in the same location.

  • Sorting the NICs by their MAC address may not reflect the operating system's ordering.

  • All NICs that use DHCP will get new IP addresses.

  • In particular, the public IPs will change.

  • The created snapshots will be billed. Once you are finished with cloning, you should remove the snapshots.

