Terraform server broken with new Ubuntu image

Hi, I cannot log into newly created servers built with Terraform when I use the current Ubuntu image (ubuntu-16.04-LTS-server-2018-01-15). After the server is created, the key files in /etc/ssh are zero bytes. This is new behavior; I'm using the same scripts that have created many servers successfully before. The problem does not occur if I create the server directly with the designer.

Any idea? Many thanks, Uli

Hello @ulise - I ran into the same issue myself and found a workaround that might help. I had in rare circumstances encountered the same zero-byte SSH key issue when building servers through the API. It never seemed to occur in the DCD, only through the API. And it was never predictable until I ran into the same problem with Terraform. In fact, I discovered through Terraform that it only seems to occur with the Ubuntu images and not other images like CentOS.

I only encounter the problem with Terraform when using the provisioner inside the server resource block. Are you experiencing the issue while attempting to use the provisioner as well? Or also when not using the provisioner?

In my situation, I was able to work around the issue by moving the provisioner outside of the server resource block into a null_resource. I can't explain why this solved the problem, but it seemed to work after many tests. Here is an example snippet from my SSH bastion config:

```hcl
// Build bastion server
resource "profitbricks_server" "bastion" {
  name              = "bastion"
  datacenter_id     = "${}"
  cores             = 1
  ram               = 1024
  cpu_family        = "AMD_OPTERON"
  availability_zone = "ZONE_1"

  volume {
    name              = "system"
    image_name        = "${var.image_alias}"
    size              = 5
    disk_type         = "SSD"
    image_password    = "stackpoint2017"
    availability_zone = "AUTO"
    ssh_key_path      = [] // key paths omitted in the original snippet
  }

  nic {
    name = "public"
    lan  = "${}"
    ip   = "${profitbricks_ipblock.public_ip.ips.0}"
    dhcp = true
  }
}

// Connect bastion gateway NIC
resource "profitbricks_nic" "gateway_nic" {
  datacenter_id = "${}"
  server_id     = "${}"
  lan           = "${}"
  name          = "private"
  dhcp          = true
}

// Provision bastion IP forwarding
resource "null_resource" "bastion_provisioner" {
  depends_on = [ "profitbricks_nic.gateway_nic" ]

  provisioner "remote-exec" {
    inline = [
      "echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf",
      "sysctl -p",
      "firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o eth0 -j MASQUERADE",
      "firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -i eth1 -j ACCEPT",
      "firewall-cmd --reload"
    ]

    connection {
      private_key = "${file("${var.private_key_path}")}"
      host        = "${profitbricks_server.bastion.primary_ip}"
      user        = "root"
    }
  }
}
```
I create the server first, then attach a second NIC, and finally launch the provisioner inside a null_resource block. My provisioner depends on the second NIC, but it's possible it could depend on the server instead.
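If you want to experiment with depending on the server instead of the NIC, only the depends_on reference would change. I have not tested this variant myself, so treat it as a sketch:

```hcl
resource "null_resource" "bastion_provisioner" {
  // Untested alternative: depend directly on the server resource
  // rather than on the second (gateway) NIC
  depends_on = [ "profitbricks_server.bastion" ]

  // ... same provisioner and connection blocks as above ...
}
```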

Perhaps you can try it out and report back whether this works for you?

I would also recommend opening a ticket with ProfitBricks Support about the issue. I reported the issue myself, but they might increase the priority if they know this is affecting other users.

Hi Ethan, thanks for your answer. Yes, from my experience the problem occurs only in connection with provisioners. It might be a timing effect: the keys are used by the provisioner while the image is still being prepared. Thanks for your tip about the null_resource; I'll give it a try during my next rollout. I thought this problem depended on the Ubuntu version, but it could point to a timing issue as well. Kind regards, Uli
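If it really is a timing issue, maybe waiting for the image preparation to finish as the first provisioner step could help. This is just a guess on my side, and `cloud-init status --wait` requires a reasonably recent cloud-init in the image:

```hcl
provisioner "remote-exec" {
  inline = [
    // Guess: block until cloud-init (image preparation) has finished
    // before running the actual provisioning commands
    "cloud-init status --wait",
    "echo 'image preparation finished'"
  ]
}
```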