Community

Define the IP address with salt-cloud

Hi, is it possible to define the (private) IP with salt-cloud when I create a new server? Many thanks, Uli

 
Hello Uli,

My apologies for the delay in getting back to you. I finally have some news!

You posed the following questions:

  1. Is it possible to define the (private) IP with salt-cloud when I create a new server?

This feature is not available in the current code release. I was in contact with one of the developers who works on the ProfitBricks driver for Salt. He responded that it shouldn't be too difficult to implement. This is a feature request for adding the ability to set a static private IP address.

  2. How can I set the image-password?

There is currently no available option to set an image password, only SSH keys. The ProfitBricks driver for Salt relies on the ProfitBricks Python module. That module supports image passwords, so I would expect it could be implemented. If I understand correctly, salt-cloud won't be able to use this password for anything; it relies on SSH keys. However, I can see the convenience of leveraging the ProfitBricks image password functionality to set an initial password rather than using a post-build method to do this.

  3. How can I delete a server and its storage volumes?

Code for this has already been submitted and is currently in the SaltStack develop branch ( https://github.com/saltstack/salt ). It has NOT been released yet. However, once it is released, it will be possible to add "delete_volumes: true" to the provider configuration (for example: /etc/salt/cloud.providers.d/profitbricks.conf). With that option present, when a server delete request is accepted, the storage volume will be removed as well.
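To illustrate, here is a rough sketch of what that provider file might look like once the change is released. Only delete_volumes comes from the discussion above; the remaining keys (the config name, credentials, datacenter_id) are illustrative placeholders and may not match your setup or salt-cloud version:

# /etc/salt/cloud.providers.d/profitbricks.conf (sketch, not verified against a release)
my-profitbricks-config:
    driver: profitbricks          # may be 'provider:' on older salt-cloud releases
    username: user@example.com    # placeholder credentials
    password: xxxx
    datacenter_id: <datacenter-uuid>
    delete_volumes: true          # once released: also delete storage volumes with the server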

As an FYI - the 'develop' branch also contains code to implement the "image_alias" functionality so you could pass in something like "ubuntu:latest" instead of specifying the actual imageId. This will eliminate the need to keep your profiles updated with the current image UUID.
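As a rough sketch only (the exact key placement is an assumption based on the description above, not taken from released documentation), a profile using it might look like:

# cloud profile sketch using the develop-branch image_alias feature
my-ubuntu-profile:
    provider: my-profitbricks-config   # placeholder provider name
    image_alias: 'ubuntu:latest'       # instead of a hard-coded image UUID
    size: Small                        # placeholder size/flavor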

In summary - a fix for one of the questions should be included in the next SaltStack release. The other two will need to have new code submitted. Then the SaltStack developers would need to accept the pull request and include the changes in a future release.

Eric

  • Many thanks for the answer!

Hello Uli,

As just mentioned, Salt is provisioning software, just as Chef, Puppet and Ansible are. Salt works with a Salt master which runs states on minions (clients). The minions execute the commands a state defines so that, once the state has run, the system is in the condition the state describes. The master is aware of its minions and stores data used in the provisioning process.

Salt is written in Python and uses YAML templates for both its “states” and its “pillars”. States are YAML files that represent the state a system should be in. Pillars are tree-like data structures defined on the Salt master and passed to the minions; pillar data is represented in YAML files as well.
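As a small, hypothetical illustration (the nginx names and paths are just examples, not from this post), a state and a pillar could look like this:

# /srv/salt/nginx/init.sls -- a state: "nginx should be installed and running"
nginx:
    pkg.installed: []
    service.running:
        - require:
            - pkg: nginx

# /srv/pillar/nginx.sls -- a pillar: data the master hands to matching minions
nginx:
    worker_processes: 4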

In addition to states, it’s possible to run single commands with Salt from the Salt master against all or specific minions.
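For example, assuming a working master with accepted minions (the minion ID web01 is a placeholder):

# ping every minion the master knows about
salt '*' test.ping

# run an arbitrary shell command on one specific minion
salt 'web01' cmd.run 'uptime'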

Salt Cloud, as already mentioned, is the provisioning tool component of Salt. In cloud-based organizations you will often see different teams using a tool such as Chef or Salt while scripting the functionality that orchestrates the launching and/or deletion of instances themselves. For instance, I worked on one team that used Chef, and we had a very nice tool written in Ruby that would launch instances, load cookbooks onto them, and let Chef provision them. This approach can give one a lot of control, but it can also be hard to maintain as well as very application- or cloud-specific.

Salt-cloud takes the approach of keeping the provisioning side of Salt, instance creation and deletion, under the same system. By doing so, the same concepts Salt already uses, namely YAML as the means to express states and pillars, are naturally used for Salt Cloud as well.

When Salt Cloud is used to launch an instance, that instance is not only brought up, but set up as a minion being managed by the master that launched it.
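One quick way to see this (assuming the default key handling) is that the newly launched instance shows up when listing minion keys on the master:

# list minion keys known to the master; the new instance appears here once bootstrapped
salt-key -L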

The other good thing about Salt Cloud is that it is built on top of Apache Libcloud, which makes it possible to use Salt Cloud with any number of cloud providers such as EC2, Google Compute Engine, LXC, Linode, Azure and, of course, OpenStack providers such as HP Cloud and Rackspace.

Salt Cloud concepts

Salt Cloud uses the following concepts, which I discovered upon using it:

  • Cloud provider: a particular cloud connection, including the username, password, any sort of API key, region, and authentication endpoints.
  • Cloud profile: a particular type of image to use when launching an instance or container, namely the size/flavor, the image ID and the cloud provider to use.

Salt Cloud with HP Cloud

One of the first things I wanted to get working properly was the ability to launch instances with HP Cloud. This proved difficult at first, as the documentation was written for the initial version of HP Cloud (1.0, based on Essex and using Nova networking), where external IP addresses were automatically assigned to an instance at launch. With the newer HP Cloud (1.1, based on Havana and using Neutron), one has to first create a floating IP and then attach it to a specific instance.

Why is this a problem? Well, because when the salt-cloud command is run, it needs not only to launch an instance but also to connect to it over SSH and install the minion so it can talk to the master. Without an IP to connect to, this won’t happen, and the examples in the documentation did not work with HP Cloud 1.1.

Also, do note that this post is meant to introduce the reader to concepts. For more in-depth reading on the matter, it is strongly advised to review the Salt Cloud docs, which the author recently contributed to for easier Salt Cloud usage with OpenStack and HP Cloud in particular!

Setting up a cloud provider

As previously stated, a cloud provider is exactly what the term means. One can have numerous cloud providers to choose from in any given setup, including different vendors that use different cloud technologies. These cloud providers have YAML-based configuration files typically located in /etc/salt/cloud.providers.d.

Example:

$:/etc/salt/cloud.providers.d$ ls
hpcs_ae1.conf  openstack_grizzly.conf  hpcs_aw2.conf  rackspace_dfw.conf  rackspace_ord.conf

An example of one would be:

hpcs_ae1:
    minion:
        master: mymaster.domain
    identity_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/tokens
    compute_name: Compute
    ignore_cidr: 10.0.0.1/24
    networks:
        - floating:
            - Ext-Net
        - fixed:
            - my-network
    protocol: ipv4
    compute_region: region-b.geo-1
    user: clouduser
    tenant: clouduser-project1
    password: xxxx
    ssh_key_name: test
    ssh_key_file: /root/keys/test.key
    provider: openstack

In the above example, there is various connection metadata used to establish a connection to HP Cloud. This would be the same for any cloud provider and would vary according to the particular attributes of how one connects to a given cloud. For instance, with Rackspace one would also provide an apikey parameter; for EC2, provider would be ec2 (obviously).

The networks setting provides one or more named networks that you provide. You will need to know these in advance if you intend to use them. In the example above, they are Ext-Net (the default network with HP Cloud) and a network the user created, my-network. These are the networks the salt-cloud OpenStack driver uses to obtain a list of floating IP addresses, picking one to attach to the instance being provisioned in order to set it up as a minion.

Interestingly, this is where I recently modified the driver and contributed code to salt-cloud that removes the need to specify networks: if none are specified, it simply obtains floating IPs regardless of network.

The ignore_cidr setting lists a range of IP addresses that should not even be considered as addresses to connect to when setting up the minion. In this case, even though salt-cloud will not attempt to use the instance's private IP address, ignore_cidr means salt-cloud doesn’t even bother identifying that the IP address is private.

NOTE: If one wants to use the private IP address to connect to the instance, for instance if the master is in the same cloud network as the minions (actually commonplace), then one need only have the following in their cloud provider configuration file:

ssh_interface: private_ips

This will result in salt-cloud using the private IP address when using SSH to connect to the instance during setup of the minion.
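In context, it is simply one more line in the provider configuration shown earlier, for example:

hpcs_ae1:
    # ... connection settings as above ...
    ssh_interface: private_ips    # use the private IP when SSHing in to bootstrap the minion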

So it is clearly a simple setup with a clean YAML representation, per the philosophy of Salt.

Testing the cloud profile

First of all, it is assumed that the reader of this post has already set up a Python virtual environment. Other installation steps and issues include:

  • Install Salt within the virtual environment.
  • Set up the master (see the Salt documentation).
  • You must run salt-cloud as root. This is a bit of a conundrum because the documentation clearly states that you ought to use a virtualenv setup for development, yet you need to run salt-cloud as root. One way to do this is to set up the virtualenv as a regular user, as suggested, and run salt-cloud via sudo after activating the virtual environment (a rough sketch of this workflow follows the pip command below). The other way is to set up a virtualenv as root and install Salt with:

pip install -e /home/regularuser/salt
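For the sudo approach, the workflow might look roughly like this (the paths and the virtualenv name are placeholders):

# create and activate a virtualenv as a regular user
virtualenv ~/venvs/salt
source ~/venvs/salt/bin/activate

# install a local salt checkout in editable mode
pip install -e ~/code/salt

# run salt-cloud as root, reusing the virtualenv's interpreter and scripts
sudo ~/venvs/salt/bin/salt-cloud --list-providers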

Listing cloud providers

The first command to run to verify you have set up the cloud provider configuration files properly is to list the cloud providers:

salt-cloud --list-providers

[INFO ] salt-cloud starting
hpcs_ae1:
    ----------
    openstack:
        ----------
hpcs_aw2:
    ----------
    openstack:
        ----------
hpcs_az1_aw2_patg:
    ----------
    openstack:
        ----------
openstack_grizzly:
    ----------
    openstack:
        ----------
rackspace_ord:
    ----------
    rackspace:
        ----------
rackspace_dfw:
    ----------
    rackspace:
        ----------

This output shows a variety of providers between HP Cloud and Rackspace, and several regions with each. At this point, listing images is the next step.

Listing images within a provider

The next verification to be done is to test one of the providers. A simple test is to list the images for a specific provider that can be used to launch an instance defined in a cloud profile (the next topic).

The command is run by specifying the cloud provider, in this case, hpcs_ae1:

salt-cloud --list-images hpcs_ae1

[INFO ] salt-cloud starting
hpcs_ae1:
    ----------
    openstack:
        ----------
        <snip> ... numerous images ...
        Ubuntu Raring 13.04 Server 64-bit 20130601:
            ----------
            driver:
            extra:
                ----------
                created:
                    2013-07-03T13:40:56Z
                metadata:
                    ----------
                    architecture:
                        x86_64
                    com.hp1bootable_volume:
                        true
                    com.hp1image_lifecycle:
                        active
                    com.hp1image_type:
                        disk
                    com.hp1os_distro:
                        com.ubuntu
                    os_type:
                        linux-ext4
                    os_version:
                        13.04
                minDisk:
                    10
                minRam:
                    0
                progress:
                    100
                serverId:
                    None
                status:
                    ACTIVE
                updated:
                    2014-03-21T11:42:56Z
            get_uuid:
            id:
                9302692b-b787-4b52-a3a6-daebb79cb498
            name:
                Ubuntu Raring 13.04 Server 64-bit 20130601
            uuid:
                4e6b04dc25dad797e345395c445182fa2c697050

As you can see, there is a lot of information for just a single image, and the above comes from output containing numerous images. If you get output like this, the cloud provider configuration file is obviously correct.

Setting up a cloud profile

A cloud profile is yet another YAML file that represents how a particular instance is launched: what size/flavor it is, the cloud provider, the image ID, the SSH key and name to use, etc. These files are typically located in /etc/salt/cloud.profiles.d:

$:/etc/salt/cloud.profiles.d$ ls
hpcs.conf  devstack_cirros.conf  rackspace_quantal.conf  devstack_ubuntu.conf  hpcs_raring.conf  rackspace_raring.conf  rackspace_precise.conf  testwiki.conf

An example of one of these files would be:

hpcs_raring:
    provider: hpcs_ae1
    image: 9302692b-b787-4b52-a3a6-daebb79cb498
    size: standard.small
    ssh_key_file: /root/keys/test.key
    ssh_key_name: test
    ssh_username: ubuntu

In this example, the provider used is the HP Cloud AE1 region, with the standard.small flavor, the Ubuntu Raring image ID, and, for SSH, a private key location that will be used on the minion being set up.

The settings in this file are as follows:

  • provider - the cloud provider that will be used to connect to and run the instance in.
  • size - the size or, in the case of OpenStack, the flavor to be used.
  • ssh_username - the username that will be used to log into the instance to set up the minion. Since this is an Ubuntu Raring image, the user is ubuntu; with other images this will be different.

There are also some settings in the above, such as ssh_key_name and ssh_key_file, that override whatever the cloud provider uses. Now that this cloud profile is defined, it can be used.

Launching a cloud profile

Launching a cloud profile is quite simple:

salt-cloud -p hpcs_ubuntu_raring myinstance
[INFO ] salt-cloud starting
[INFO ] Creating Cloud VM myinstance
[INFO ] Attaching floating IP '15.126.223.157' to node 'myinstance'
[WARNING ] Private IPs returned, but not public... Checking for misidentified IPs
[WARNING ] 10.0.0.27 is a private IP
[WARNING ] IP '10.0.0.27' found within '10.0.0.1/24'; ignoring it.
[INFO ] Rendering deploy script: /home/patg/code/salt/salt/cloud/deploy/bootstrap-salt.sh
Warning: Permanently added '15.126.223.157' (ECDSA) to the list of known hosts.
<snip> ... similar output
Warning: Permanently added '15.126.223.157' (ECDSA) to the list of known hosts.
sudo: unable to resolve host myinstance
 * INFO: /bin/sh /tmp/.saltcloud/deploy.sh -- Version 2014.02.27

  • INFO: System Information:
  • INFO: CPU: GenuineIntel
  • INFO: CPU Arch: x86_64
  • INFO: OS Name: Linux
  • INFO: OS Version: 3.8.0-23-generic
  • INFO: Distribution: Ubuntu 13.04

  • INFO: Installing minion

  • INFO: Found function install_ubuntu_deps
  • INFO: Found function config_salt
  • INFO: Found function install_ubuntu_stable
  • INFO: Found function install_ubuntu_restart_daemons
  • INFO: Found function daemons_running
  • INFO: Found function install_ubuntu_check_services
  • INFO: Running install_ubuntu_deps()

Hit http://az2.clouds.archive.ubuntu.com raring Release.gpg
Get:1 http://security.ubuntu.com raring-security Release.gpg [933 B]
Get:2 http://az2.clouds.archive.ubuntu.com raring-updates Release.gpg [933 B]
Get:3 http://security.ubuntu.com raring-security Release [40.8 kB]
<snip> ... more output
Processing triggers for ureadahead ...
Setting up salt-minion (0.17.5-1raring1) ...

Configuration file `/etc/salt/minion'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   ==> Using current old file as you requested.
salt-minion start/running, process 3760
Setting up debconf-utils (1.5.49ubuntu1) ...
Processing triggers for ureadahead ...
 * INFO: Running install_ubuntu_check_services()
 * INFO: Running install_ubuntu_restart_daemons()
salt-minion stop/waiting
salt-minion start/running, process 3791
[INFO ] [Salt][saltstack] installed on myinstance
[INFO ] Created Cloud VM 'myinstance'
myinstance:
    ----------
    _uuid:
        None
    driver:
    extra:
        ----------
        access_ip:
        created:
            2014-03-27T15:15:54Z
        disk_config:
            None
        flavorId:
            101
        hostId:
        imageId:
            9302692b-b787-4b52-a3a6-daebb79cb498
        key_name:
            test
        metadata:
            ----------
            profile:
                hpcs_1_1_ubuntu_raring
        tenantId:
            10966558279755
        updated:
            2014-03-27T15:15:55Z
        uri:
            http://region-b.geo-1.compute.hpcloudsvc.com/v2/10966558279755/servers/d461e8e7-aae1-47be-b313-21c9c302565f
    id:
        d461e8e7-aae1-47be-b313-21c9c302565f
    image:
        None
    name:
        myinstance
    private_ips:
    public_ips:
        - 15.126.223.157
    size:
        None
    state:
        3

At this point you have a newly launched minion and can start provisioning it via Salt!
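For example, you could verify connectivity and then apply your states (the targeting here is simply the default highstate):

# verify the master can reach the new minion
salt 'myinstance' test.ping

# apply whatever states the top file assigns to it
salt 'myinstance' state.highstate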

Delete instance
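Deleting an instance launched this way is also handled by salt-cloud; a minimal example, using the instance created above:

# destroy the instance created earlier
salt-cloud -d myinstance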

  • Hope this will help you.

Hi Uli,

I've asked someone to look into this and should have some additional information next week.

Out of curiosity, are you using salt and terraform together, or is this for a different project?

Eric

  • Hi Eric, are there any findings on this? Many thanks, Uli

Hi Eric, thanks for your answer. I want to give Salt a try, because currently I'm not too lucky with Terraform. There is some strange behavior in my current DCs with Terraform. I've used SaltStack for configuration some time before, so salt-cloud could become my solution.

But some questions about Salt came up in the meantime.

How can I set the image-password?

If I delete a server, the storage remains. How can I delete a server with all its storage volumes?

And, as you know: how do I set the IP addresses?

Best regards, Uli

Hello Uli,

Just wanted to let you know that pull requests containing the new features we discussed have been accepted into the develop branch in the GitHub Salt repo. If you clone or install from the develop branch, you should be able to use them. I'm not clear what their release schedule is and when those features will show up in the official release, but they should appear eventually.

Eric