Deploy a Multi-node Kubernetes Cluster on CentOS 7

Introduction

Kubernetes is an open source container management system that allows the deployment, orchestration, and scaling of containerized applications and microservices across multiple hosts. This tutorial walks through the installation and configuration of a multi-node Kubernetes cluster on CentOS 7.

Understanding the basic Kubernetes concepts and the multi-node deployment architecture will make installation and management much easier, so reviewing the Kubernetes overview document before continuing is recommended:

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/overview.md

A single master host will manage the cluster and run several core Kubernetes services.

  • API Server - The REST API endpoint for managing most aspects of the Kubernetes cluster.
  • Replication Controller - Ensures that the specified number of pod replicas is always running by starting or shutting down pods.
  • Scheduler - Finds a suitable host where new pods will reside.
  • etcd - A distributed key value store where Kubernetes stores information about itself, pods, services, etc.
  • Flannel - A network overlay that will allow containers to communicate across multiple hosts.

The minion hosts will run the following services to manage containers and their network.

  • Kubelet - Host level pod management; determines the state of pod containers based on the pod manifest received from the Kubernetes master.
  • Proxy - Manages the container network (IP addresses and ports) based on the network service manifests received from the Kubernetes master.
  • Docker - An API and tooling built around Linux containers that allows for the easy management of containers and their images.
  • Flannel - A network overlay that will allow containers to communicate across multiple hosts.

Note: Flannel, or another network overlay service, is required on the minions when there is more than one minion host. It allows the containers, which typically sit on their own internal subnet, to communicate across hosts. Since the Kubernetes master does not typically run containers, the Flannel service is not required on the master.

Requirements

  • CentOS 7 (possibly Red Hat Enterprise Linux 7)
  • Kubernetes 0.15.0
  • Docker 1.5.0

Note: The versions specified were the latest available while drafting this tutorial. The instructions may work with later versions, but the configuration could vary as Kubernetes is rapidly evolving.

  • Three virtual servers:
    • Kubernetes master
      • Hostname: kube-master
      • Private IP: 10.11.50.10
    • Kubernetes minion #1
      • Hostname: kube-minion1
      • Private IP: 10.11.50.11
      • Public IP: 203.0.113.110
    • Kubernetes minion #2
      • Hostname: kube-minion2
      • Private IP: 10.11.50.12
      • Public IP: 203.0.113.111

Note: The public IP addresses are completely optional and depend on whether the pods will be exposed publicly.

[Diagram: the infrastructure layout used in this tutorial, showing kube-master and the two minion hosts on the private network]

Configure All Kubernetes Hosts

The steps in this Configure All Kubernetes Hosts section should be performed on every host, both the master and the minions.

Add the Package Repository

Kubernetes is currently under active development and there are frequent changes to the code base. The latest binaries can be compiled from source, but the CentOS Community Build System usually carries current binaries and simplifies installation on Enterprise Linux distributions.

The Community Build System YUM repository, virt7-testing, will need to be added to all Kubernetes hosts.

cat << EOF > /etc/yum.repos.d/virt7-testing.repo
[virt7-testing]
name=virt7-testing
baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
gpgcheck=0
EOF
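
Before installing anything, it is worth confirming that yum can actually see the new repository. A minimal sanity check, assuming the repository file above was written correctly:

# List enabled repositories and confirm virt7-testing appears
yum repolist | grep virt7-testing

# Confirm the kubernetes package resolves from the new repository
yum info kubernetes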

Install Required Packages

Now the Kubernetes package and its dependencies can be installed. Again, these packages should be installed on all hosts, both the master and the minions.

yum -y install docker docker-logrotate kubernetes etcd flanneld

Note: The Flannel package name has varied between repository snapshots; if flanneld is not found, try flannel instead (see the comments at the end of this tutorial).
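
Once the install completes, a quick query of the RPM database confirms what actually landed; the second line below covers both Flannel package names:

# Confirm the installed versions
rpm -q kubernetes etcd docker

# The Flannel package name has varied between repo snapshots
rpm -q flanneld || rpm -q flannel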

Set Up Hostname Resolution

Using hostname resolution will help clarify the relationships between the hosts. Add the following mappings to the /etc/hosts file on every host so that all hosts can resolve each other by name.

10.11.50.10 kube-master
10.11.50.11 kube-minion1
10.11.50.12 kube-minion2
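
A quick loop verifies that each name now resolves locally (getent consults /etc/hosts as well as DNS):

# Each hostname should resolve to its private IP
for h in kube-master kube-minion1 kube-minion2; do
    getent hosts $h
done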

Common Kubernetes Configuration

Edit the /etc/kubernetes/config file and set the KUBE_MASTER value to the API server URL, which will ultimately reside on the master host. The config file should look similar to the following:

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
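
For scripted deployments, the same change can be applied non-interactively. This is a minimal sketch, assuming the stock config file layout shipped by the package:

# Point all Kubernetes services at the master's API server
sed -i 's|^KUBE_MASTER=.*|KUBE_MASTER="--master=http://kube-master:8080"|' /etc/kubernetes/config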

Configure the Flannel Service

Edit the /etc/sysconfig/flanneld file and specify the etcd URL and the configuration key location. Here is an example of the flanneld configuration file:

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://kube-master:4001"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/flannel/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
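
As with the common config, these two values can be set non-interactively; a sketch assuming the packaged /etc/sysconfig/flanneld layout:

# Point flanneld at the etcd instance on the master and the key it should read
sed -i -e 's|^FLANNEL_ETCD=.*|FLANNEL_ETCD="http://kube-master:4001"|' \
       -e 's|^FLANNEL_ETCD_KEY=.*|FLANNEL_ETCD_KEY="/flannel/network"|' /etc/sysconfig/flanneld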

Configure the Master

The steps in this Configure the Master section apply to the master host only.

Configure the API Server

The API server configuration file handles the binding of the API service, specifies the location of the etcd service, and defines the IP address range used for services. Edit the /etc/kubernetes/apiserver file to match the following example:

# The address on the local server to listen to.
KUBE_API_ADDRESS="--bind-address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet_port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://kube-master:4001"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
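
A quick way to review the effective (non-comment) settings after editing:

# Show only the active configuration lines
grep -vE '^(#|$)' /etc/kubernetes/apiserver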

Configure the Controller Manager

The Controller Manager configuration file provides the list of minion nodes where containers will run. Edit the /etc/kubernetes/controller-manager file so that it matches the following example:

# Comma separated list of minions (no spaces between entries)
KUBELET_ADDRESSES="--machines=kube-minion1,kube-minion2"

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""

Enable and Start the Master Services

The master node services can now be enabled to start at boot and then started.

for service in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do 
    systemctl enable $service
    systemctl restart $service
    systemctl status $service 
done
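
Once the loop completes, the API server and etcd should both respond over HTTP. These endpoints were standard for this era of Kubernetes and etcd; if the minions later fail to reach etcd, also check that /etc/etcd/etcd.conf binds to an externally reachable address rather than 127.0.0.1 (see the comments at the end of this tutorial):

# The API server health endpoint should return "ok"
curl -s http://kube-master:8080/healthz

# etcd should report its version on the client port
curl -s http://kube-master:4001/version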

Load the Flannel Settings

Create a flannel-config.json file that defines the Flannel settings. The network specified here is the address range from which container (pod) IPs will be allocated, and it should be distinct from the API server's KUBE_SERVICE_ADDRESSES range, which is reserved for service portal IPs. This example uses 172.30.0.0/16 for the container network:

cat << EOF > ./flannel-config.json
{
    "Network": "172.30.0.0/16",
    "SubnetLen": 24,
    "SubnetMin": "172.30.50.0",
    "SubnetMax": "172.30.199.0",
    "Backend": {
        "Type": "vxlan",
        "VNI": 1
    }
}
EOF

Now the Flannel settings can be loaded into etcd using curl.

curl -L http://kube-master:4001/v2/keys/flannel/network/config -XPUT --data-urlencode value@./flannel-config.json
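
To confirm the key was stored, it can be read back from etcd's v2 keys API:

# Read back the stored Flannel configuration
curl -L http://kube-master:4001/v2/keys/flannel/network/config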

Configure the Minion Nodes

Finally, the steps in this Configure the Minion Nodes section apply to the minion hosts only.

Configure the Kubelet Service

The Kubelet configuration file handles the IP binding of the kubelet service, the optional kubelet hostname override, and the location of the Kubernetes API server. The /etc/kubernetes/kubelet configuration file should match the following example. Note that the hostname override below is for kube-minion1; use kube-minion2 on the second minion.

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=kube-minion1"

# location of the api-server
KUBELET_API_SERVER="--api_servers=http://kube-master:8080"

# Add your own!
KUBELET_ARGS=""
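
When configuring several minions, the override can be stamped from each host's own name instead of edited by hand. A sketch, assuming each minion's short hostname matches its /etc/hosts entry:

# Use this host's own short hostname as the kubelet override
sed -i "s|^KUBELET_HOSTNAME=.*|KUBELET_HOSTNAME=\"--hostname_override=$(hostname -s)\"|" /etc/kubernetes/kubelet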

Enable and Start the Minion Services

The minion services can now be enabled to start at boot and then started.

for service in kube-proxy kubelet docker flanneld; do
    systemctl enable $service
    systemctl restart $service
    systemctl status $service 
done
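
After the services start, each minion should have leased a subnet from the Flannel network, and docker0 should carry an address inside it. The subnet.env path below is the default written by the packaged flanneld; adjust if your unit file differs:

# The subnet this minion leased from the Flannel network
cat /run/flannel/subnet.env

# docker0 should carry an address inside that subnet
ip addr show docker0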

Verify Success

Log in to the Kubernetes master and verify that the minion hosts appear as Ready using the kubectl command. The results should be similar to the following example:

# kubectl get nodes
NAME           LABELS    STATUS
kube-minion1   <none>    Ready
kube-minion2   <none>    Ready

The Kubernetes cluster is now ready for pods and services.
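
As a further smoke test, the core resource listings should return without errors; empty output is expected on a brand new cluster:

# Both listings should succeed, even if empty
kubectl get pods
kubectl get services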

 
  • Really useful tutorial, thanks! A couple of things I ran into: I needed to make the hostname of the master resolve to 127.0.0.1 locally, so I added it to the localhost line in my /etc/hosts. Before doing that, starting the apiserver would log reflector.go:133] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/ and the service would fail to start.

    Another minor point: in the yum install, the package for flanneld is actually called 'flannel' rather than 'flanneld'; not sure if that's changed in the repos or if it was always this way!

    I also needed to change the etcd configuration to listen on an externally reachable address so the minions could connect to it; it defaults to 127.0.0.1.

  • @rikkuness - I appreciate your input. The iterations of Kubernetes have been so rapid that what worked one week might not work the next. Default configuration files and package names have changed several times throughout the release cycle. With the release of 1.0, hopefully, these types of changes will become less frequent and the structure will remain more consistent.

    The last time I deployed Kubernetes, I also ran into some of the issues you describe. I'll see if I can allocate some time to refresh the tutorial with the latest 1.x version of Kubernetes. Your input should make that easier - thanks!

  • You can also follow the repository made by one of our developers, which additionally covers Horizontal Pod Autoscaling of a stateless application: https://github.com/vevsatechnologies/Install-Kubernetes-on-CentOs
