KUBIC Installation Manual

This document is the installation manual for KUBIC (UCIM), a container management tool.

Server Configuration

Because of the constraints of the test environment, all servers were set up as VMs. If possible, the kubernetes minions, where the actual service containers run, should be installed on physical machines.

Server Type Hostname OS CPU Memory Service IP Internal IP Accounts Notes
VM ucim-manager Debian Jessie 2 Cores 2G 192.168.0.111 (/24) 10.1.1.111 (/24) root, ucim ucim manager
VM kube-master CentOS 7 2 Cores 2G 192.168.0.112 (/24) 10.1.1.112 (/24) root, ucim kubernetes master
VM kube-minion1 CentOS 7 2 Cores 2G 192.168.0.113 (/24) 10.1.1.113 (/24) root, ucim kubernetes minion (node)
VM kube-minion2 CentOS 7 2 Cores 2G 192.168.0.114 (/24) 10.1.1.114 (/24) root, ucim kubernetes minion (node)
  • At the customer's request, all servers other than the ucim manager (i.e. the kubernetes cluster) run CentOS 7.
  • An account named ucim is used to control the ucim-related services, so the ucim account must be added on every server (see the sketch after this list).
  • Caution: the installation commands span multiple nodes. To avoid confusion, every command is shown with its prompt; always check which user on which host it applies to.
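
For reference, a minimal sketch for creating the account (run as root on every node; set the password according to your own policy):

[root@kube-master ~]# useradd -m -s /bin/bash ucim
[root@kube-master ~]# passwd ucim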

Disk Configuration

Hostname Disk Size Partitions
ucim-manager 10G swap: 1G, /(ext4): 9G
kube-master, kube-minion1, kube-minion2 10G swap: 1G, /(ext4): 5G, /data: 1G, LVM: 3G
  • An LVM partition was added on the kubernetes cluster nodes. Debian-family distributions use aufs as the default docker storage backend, but on the RHEL family aufs is not included in the kernel and devicemapper is the recommended default. An extra partition is therefore created so that LVM can be used as docker storage (a quick check is sketched below).
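
Once docker is installed on a cluster node (later in this manual), the active backend can be confirmed; on the RHEL-family nodes it should report devicemapper:

[root@kube-master ~]# docker info 2>/dev/null | grep -i 'storage driver'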

Hypervisor Configuration: Installation and Setup

This is the hypervisor on which the servers above will run. For the customer's convenience it uses CentOS 7 with KVM.

Server Type OS Hypervisor Service IP Internal IP
Physical server CentOS 7 KVM 192.168.0.22 10.1.1.22

Installing Base Packages

Since CentOS 7 was installed from the minimal image, install the basic packages. The net-tools package is installed to provide the ifconfig command.

[root@hypervisor ~]# yum install net-tools virt-viewer wget

Disabling NetworkManager

[root@hypervisor ~]# systemctl stop NetworkManager
[root@hypervisor ~]# systemctl mask NetworkManager
Created symlink from /etc/systemd/system/NetworkManager.service to /dev/null.

Disabling SELinux

[root@hypervisor ~]# vi /etc/selinux/config
SELINUX=disabled

Reboot the physical server.

[root@hypervisor ~]# reboot
[root@hypervisor ~]# getenforce
Disabled        --> this must print "Disabled"

Installing KVM Packages

[root@hypervisor ~]# yum -y install qemu-kvm libvirt virt-install bridge-utils

Check that the kvm modules are loaded.

[root@hypervisor ~]# lsmod |grep kvm
kvm_intel             170181  0
kvm                   554609  1 kvm_intel
irqbypass              13503  1 kvm
[root@hypervisor ~]# systemctl start libvirtd
[root@hypervisor ~]# systemctl enable libvirtd

Delete the default bridge that virsh creates on its own.

[root@hypervisor ~]#  virsh net-destroy default
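
Note that net-destroy only stops the running network. To keep it from starting again on the next boot, its autostart flag can also be cleared (optional):

[root@hypervisor ~]# virsh net-autostart default --disable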

Bridge Configuration

Internal Network Bridge

[root@hypervisor ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp2s0
TYPE="Ethernet"
NAME="enp2s0"
DEVICE="enp2s0"
ONBOOT="yes"
BRIDGE=br0

[root@hypervisor ~]# vi /etc/sysconfig/network-scripts/ifcfg-br0
TYPE=Bridge
BOOTPROTO=static
IPV4_FAILURE_FATAL=no
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=10.1.1.22
PREFIX=24

Service Network Bridge

[root@hypervisor ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp5s0
TYPE=Ethernet
NAME=enp5s0
DEVICE=enp5s0
ONBOOT=yes
BRIDGE=br1

[root@hypervisor ~]# vi /etc/sysconfig/network-scripts/ifcfg-br1
TYPE=Bridge
BOOTPROTO=static
NAME=br1
DEVICE=br1
ONBOOT=yes
IPADDR=192.168.0.22
PREFIX=24
GATEWAY=192.168.0.1
DNS1=168.126.63.1

Restart the network.

[root@hypervisor ~]#  systemctl restart network

Verify the bridges that were created.

[root@hypervisor ~]#  brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.0023540d61da       no              enp2s0
br1             8000.0010a72124ed       no              enp5s0

Installing the Virtual Machines

Now we create the servers that will run on the hypervisor. The examples below use KVM, but it makes no difference whether you use XEN or install directly on physical machines.

Installing Debian

Server Type Hostname OS CPU Mem Service IP Internal IP Accounts Notes
VM ucim-manager Debian Jessie 2 Cores 2G 192.168.0.111 10.1.1.111 root, ucim ucim manager
Creating the VM

For convenience, create a script.

[root@hypervisor ~]# cat /data/vm/scripts/ucim-manager.sh
#!/bin/bash

virt-install --virt-type kvm --name ucim-manager\
      --cdrom /data/vm/iso/debian-8.8.0-amd64-netinst.iso --hvm \
      --os-variant debian8 \
      --disk path=/data/vm/img/ucim-manager.qcow2,size=10,format=qcow2 \
      --vcpus 2 --memory 2048 \
      --network bridge=br0,model=virtio \
      --network bridge=br1,model=virtio \
      --boot cdrom,hd \
      --graphics vnc,listen=0.0.0.0,password=your-password

Run the script.

[root@hypervisor ~]# /data/vm/scripts/ucim-manager.sh

To continue the installation over the console (VNC), check the VNC port.

[root@hypervisor ~]# virsh list
Id    Name                          State
1     ucim-manager                   running

Run virsh vncdisplay <VM Id from virsh list>. In the case above, that is virsh vncdisplay 1.

[root@hypervisor ~]#  virsh vncdisplay 1
:0

Connect a VNC viewer to the address below and proceed with the installation. The VNC password is the value set with password=your-password.

192.168.0.22:0 or 192.168.0.22:5900

For reference, 192.168.0.22 is the hypervisor's service IP address.

Once on the console you can install the OS. Do a minimal install, taking care only with the partitioning described above; the remaining packages and accounts are configured separately below.

Installing CentOS

Server Type Hostname OS CPU Memory Service IP Internal IP Accounts Notes
VM kube-master CentOS 7 2 Cores 2G 192.168.0.112 (/24) 10.1.1.112 (/24) root, ucim kubernetes master
VM kube-minion1 CentOS 7 2 Cores 2G 192.168.0.113 (/24) 10.1.1.113 (/24) root, ucim kubernetes minion
VM kube-minion2 CentOS 7 2 Cores 2G 192.168.0.114 (/24) 10.1.1.114 (/24) root, ucim kubernetes minion
Creating a Template Image

The kubernetes cluster nodes are all CentOS 7 with an identical partition layout, so we create a template image once and then copy it with virt-clone. First create the template.

[root@hypervisor ~]# cat /data/vm/scripts/centos7-tpl.sh
#!/bin/sh
virt-install \
    --cdrom /data/vm/iso/CentOS-7-x86_64-Minimal-1611.iso \
    --virt-type kvm \
    --name centos7-tpl \
    --hvm \
    --os-variant centos7.0 \
    --disk path=/data/vm/img/centos7-tpl.qcow2,size=10,format=qcow2 \
    --vcpus 2 \
    --memory 2048 \
    --network bridge=br0,model=virtio \
    --network bridge=br1,model=virtio \
    --boot cdrom,hd \
    --graphics vnc,listen=0.0.0.0,password=your-password

Run the script above and install the OS the same way as the Debian install.

Creating the VMs

To create the virtual machines from the template image, run each of the two scripts below.

[root@hypervisor ~]# cat /data/vm/scripts/kube-master.sh
#!/bin/bash

virt-clone \
--original centos7-tpl \
--name kube-master \
--file /data/vm/img/kube-master.qcow2

[root@hypervisor ~]# sh /data/vm/scripts/kube-master.sh

When the template copy finishes, start the virtual machine (kube-master) with the command below.

[root@hypervisor ~]# virsh start kube-master

Next, clone the minions.

[root@hypervisor ~]# cat /data/vm/scripts/kube-minion1.sh
#!/bin/bash

virt-clone \
--original centos7-tpl \
--name kube-minion1 \
--file /data/vm/img/kube-minion1.qcow2

[root@hypervisor ~]# cat /data/vm/scripts/kube-minion2.sh
#!/bin/bash

virt-clone \
--original centos7-tpl \
--name kube-minion2 \
--file /data/vm/img/kube-minion2.qcow2
[root@hypervisor ~]# sh /data/vm/scripts/kube-minion1.sh
[root@hypervisor ~]# sh /data/vm/scripts/kube-minion2.sh

When the template copies finish, start the minions with the commands below.

[root@hypervisor ~]# virsh start kube-minion1
[root@hypervisor ~]# virsh start kube-minion2

Accessing the Virtual Machines

Run virsh list.

[root@hypervisor ~]#  virsh list
Id    Name                          State
----------------------------------------------------
65    ucim-manager                   running
66    kube-master                    running
67    kube-minion1                   running
68    kube-minion2                   running

With all the OSes installed, connect to each console and continue the installation/configuration. Connect to each VM over VNC as before. If the VNC connection fails, check whether firewalld on the hypervisor is blocking it.

[root@hypervisor network-scripts]#  systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@hypervisor network-scripts]#  systemctl stop firewalld
[root@hypervisor network-scripts]#
[root@hypervisor network-scripts]#  systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

 May 22 22:47:19 hypervisor systemd[1]: Starting firewalld - dynamic firewall daemon...
 May 22 22:47:19 hypervisor systemd[1]: Started firewalld - dynamic firewall daemon.
 May 24 18:21:14 hypervisor systemd[1]: Stopping firewalld - dynamic firewall daemon...
 May 24 18:21:15 hypervisor systemd[1]: Stopped firewalld - dynamic firewall daemon.

Configuring ucim-manager

Changing the hostname

Change the hostname with the command below; after logging out and back in, you will see the new hostname.

root@ucim-manager:~# hostnamectl set-hostname ucim-manager

Network Configuration

root@ucim-manager:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 10.1.1.111
        netmask 255.255.255.0

auto eth1
iface eth1 inet static
        address 192.168.0.111
        netmask 255.255.255.0
        gateway 192.168.0.1
        dns-nameservers 168.126.63.1

Restart the network.

root@ucim-manager:~# systemctl restart networking

Editing /etc/hosts

Add each node's internal IP to /etc/hosts.

root@ucim-manager:~# vi /etc/hosts
...(snip)
10.1.1.111      ucim-manager
10.1.1.112      kube-master
10.1.1.113      kube-minion1
10.1.1.114      kube-minion2

Installing Required Packages

root@ucim-manager:~# apt-get update && apt-get install -y sudo python3-venv virtualenv gcc python3-dev libssl-dev whois vim tmux git tree ntpdate ntp gcc make

Give the ucim user sudo privileges: add the line below and save (a sudoers.d alternative is sketched after the file).

root@ucim-manager:~# vi /etc/sudoers
# User privilege specification
root    ALL=(ALL:ALL) ALL
+ucim  ALL=(ALL:ALL) NOPASSWD:ALL
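
Alternatively, a drop-in file under /etc/sudoers.d avoids editing /etc/sudoers directly (a sketch; visudo -c validates the result):

root@ucim-manager:~# echo 'ucim ALL=(ALL:ALL) NOPASSWD:ALL' > /etc/sudoers.d/ucim
root@ucim-manager:~# chmod 440 /etc/sudoers.d/ucim
root@ucim-manager:~# visudo -c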

Create a python virtual environment.

root@ucim-manager:~# su - ucim
ucim@ucim-manager:~$ mkdir .envs
ucim@ucim-manager:~$ pyvenv ~/.envs/kubic
ucim@ucim-manager:~$ source ~/.envs/kubic/bin/activate
(kubic) ucim@ucim-manager:~$ vi ~/.bashrc
...(snip)
+#kubic virtualenv
+source .envs/kubic/bin/activate

Upgrade pip before installing the python packages.

(kubic) ucim@ucim-manager:~$ pip install -U pip
Downloading/unpacking pip from https://pypi.python.org/packages/b6/ac/7015eb97dc749283ffdec1c3a88ddb8ae03b8fad0f0e611408f196358da3/pip-9.0.1-py2.py3-none-any.whl#md5=297dbd16ef53bcef0447d245815f5144
  Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB): 1.3MB downloaded
Installing collected packages: pip
  Found existing installation: pip 1.5.6
    Uninstalling pip:
      Successfully uninstalled pip
Successfully installed pip
Cleaning up...

Install the python3-related packages, the Flask web framework packages, ansible, and so on with the command below. ansible will be used from ucim-manager to install the kubernetes cluster. The latest ansible release, 2.3.0, has compatibility problems with python 3.4, so version 2.2.1 is installed instead.

(kubic) ucim@ucim-manager:~$ pip install flask-restplus python-etcd kubernetes requests bcrypt flask-jwt-extended flask-cors ansible==2.2.1.0

Installing Docker

Install Docker on ucim-manager.

root@ucim-manager:~# apt update
root@ucim-manager:~# apt install -y apt-transport-https ca-certificates curl software-properties-common
root@ucim-manager:~# curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
OK

The key fingerprint shown by apt-key finger | grep 9DC8 must be 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88.

root@ucim-manager:~# apt-key finger | grep 9DC8
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
root@ucim-manager:~# add-apt-repository \
>    "deb [arch=amd64] https://download.docker.com/linux/debian \
>    $(lsb_release -cs) \
>    stable"

Update the newly added repository.

root@ucim-manager:~# apt update

Install the docker package.

root@ucim-manager:~# apt install -y docker-ce

Add the ucim user to the docker group.

root@ucim-manager:~# gpasswd -a ucim docker
Adding user ucim to group docker

Check that docker started correctly.

root@ucim-manager:~# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled)
   Active: active (running) since Thu 2017-06-01 15:15:14 KST; 2min 47s ago
     Docs: https://docs.docker.com
 Main PID: 14511 (dockerd)
   CGroup: /system.slice/docker.service
           ├─14511 /usr/bin/dockerd -H fd://
           └─14514 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --me...

 Jun 01 15:15:11 ucim-manager dockerd[14511]: time="2017-06-01T15:15:11.336950481+09:00" level=inf...t."
 Jun 01 15:15:11 ucim-manager dockerd[14511]: time="2017-06-01T15:15:11.899091446+09:00" level=warnin...
 Jun 01 15:15:12 ucim-manager dockerd[14511]: time="2017-06-01T15:15:12.146124336+09:00" level=inf...se"
 Jun 01 15:15:13 ucim-manager dockerd[14511]: time="2017-06-01T15:15:13.172436189+09:00" level=inf...ss"
 Jun 01 15:15:14 ucim-manager dockerd[14511]: time="2017-06-01T15:15:14.045608099+09:00" level=inf...e."
 Jun 01 15:15:14 ucim-manager dockerd[14511]: time="2017-06-01T15:15:14.244513427+09:00" level=inf...on"
 Jun 01 15:15:14 ucim-manager dockerd[14511]: time="2017-06-01T15:15:14.244632380+09:00" level=inf...-ce
 Jun 01 15:15:14 ucim-manager systemd[1]: Started Docker Application Container Engine.
 Jun 01 15:15:14 ucim-manager dockerd[14511]: time="2017-06-01T15:15:14.288390218+09:00" level=inf...ck"
 Jun 01 15:15:14 ucim-manager systemd[1]: [/lib/systemd/system/docker.service:24] Unknown lvalue ...ice'
Hint: Some lines were ellipsized, use -l to show in full.

Registry Configuration

ucim will run an internal registry to store its docker images. The registry runs as a container on the kubernetes master, so configure kube-master's service IP here.

ucim@ucim-manager:~$ sudo vi /etc/default/docker
# Docker Upstart and SysVinit configuration file

#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
#   Please see the documentation for "systemd drop-ins":
#   https://docs.docker.com/engine/admin/systemd/
#

# Customize location of Docker binary (especially for development testing).
#DOCKERD="/usr/local/bin/dockerd"

# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"

# This is also a handy place to tweak where Docker's temporary files go.
#export DOCKER_TMPDIR="/mnt/bigdrive/docker-tmp"
+
+ DOCKER_REGISTRY="192.168.0.112:5000"

Then copy /lib/systemd/system/docker.service to /etc/systemd/system and edit it as follows.

ucim@ucim-manager:~$ sudo cp /lib/systemd/system/docker.service /etc/systemd/system
ucim@ucim-manager:~$ sudo vi /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket firewalld.service
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
+ EnvironmentFile=/etc/default/docker
- ExecStart=/usr/bin/dockerd -H fd://
+ ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=${DOCKER_REGISTRY}
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
ucim@ucim-manager:~$ sudo systemctl daemon-reload
ucim@ucim-manager:~$ sudo systemctl restart docker
ucim@ucim-manager:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/etc/systemd/system/docker.service; enabled)
   Active: active (running) since Fri 2017-06-02 13:04:02 KST; 14s ago
     Docs: https://docs.docker.com
 Main PID: 24384 (dockerd)
   CGroup: /system.slice/docker.service
           ├─24384 /usr/bin/dockerd -H fd:// --insecure-registry=192.168.0.112:5000
           └─24387 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 ...

 Jun 02 13:04:00 ucim-manager dockerd[24384]: time="2017-06-02T13:04:00.583952562+09:00" level=warning msg="Your ...riod"
 Jun 02 13:04:00 ucim-manager dockerd[24384]: time="2017-06-02T13:04:00.584734948+09:00" level=warning msg="Your ...time"
 Jun 02 13:04:00 ucim-manager dockerd[24384]: time="2017-06-02T13:04:00.585027482+09:00" level=warning msg="mount...ound"
 Jun 02 13:04:00 ucim-manager dockerd[24384]: time="2017-06-02T13:04:00.585657776+09:00" level=info msg="Loading ...art."
 Jun 02 13:04:00 ucim-manager dockerd[24384]: time="2017-06-02T13:04:00.867068023+09:00" level=info msg="Firewall...alse"
 Jun 02 13:04:01 ucim-manager dockerd[24384]: time="2017-06-02T13:04:01.513759490+09:00" level=info msg="Default ...ress"
 Jun 02 13:04:01 ucim-manager dockerd[24384]: time="2017-06-02T13:04:01.842810029+09:00" level=info msg="Loading ...one."
 Jun 02 13:04:02 ucim-manager dockerd[24384]: time="2017-06-02T13:04:02.050728901+09:00" level=info msg="Daemon h...tion"
 Jun 02 13:04:02 ucim-manager dockerd[24384]: time="2017-06-02T13:04:02.050837158+09:00" level=info msg="Docker d....1-ce
 Jun 02 13:04:02 ucim-manager systemd[1]: Started Docker Application Container Engine.
 Jun 02 13:04:02 ucim-manager dockerd[24384]: time="2017-06-02T13:04:02.084817986+09:00" level=info msg="API list...sock"
Hint: Some lines were ellipsized, use -l to show in full.
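
For reference, on newer Docker releases the same setting can instead be placed in /etc/docker/daemon.json (a sketch; use either this file or the --insecure-registry flag above, not both, or the daemon will refuse to start on the duplicate option):

root@ucim-manager:~# cat /etc/docker/daemon.json
{
    "insecure-registries": ["192.168.0.112:5000"]
}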

Testing Docker

Now run a simple container (hello-world) to confirm that docker works.

ucim@ucim-manager:/root$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/

If you see the message above, docker is working.

Cleaning Up the Test Container

Clean up the test container: list the containers first, then remove the container by name.

ucim@ucim-manager:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
121b0204170b        hello-world         "/hello"            4 minutes ago       Exited (0) 4 minutes ago                       condescending_newton
ucim@ucim-manager:~$ docker rm condescending_newton
condescending_newton

Remove the downloaded hello-world image.

ucim@ucim-manager:~$ docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
hello-world               latest              48b5124b2768        4 months ago        1.84 kB
ucim@ucim-manager:~$ docker rmi hello-world
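
If more test containers accumulate, all exited containers can be removed in one pass (a sketch):

ucim@ucim-manager:~$ docker rm $(docker ps -aq -f status=exited)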

Building the kubernetes Cluster

Installing a kubernetes cluster is complex and involves many settings, so we use ansible. First fetch the project that contains the ansible playbook.

(kubic) ucim@ucim-manager:~$ git clone https://git.iorchard.co.kr/jijisa/secure-k8s-cent7.git
Cloning into 'secure-k8s-cent7'...
Username for 'https://git.iorchard.co.kr': westporch
Password for 'https://[email protected]':
remote: Counting objects: 98, done.
remote: Compressing objects: 100% (73/73), done.
remote: Total 98 (delta 9), reused 0 (delta 0)
Unpacking objects: 100% (98/98), done.
Checking connectivity... done.
(kubic) ucim@ucim-manager:~$ ls
secure-k8s-cent7
(kubic) ucim@ucim-manager:~$ cd secure-k8s-cent7/
(kubic) ucim@ucim-manager:~/secure-k8s-cent7$

ansible deploys over ssh, so the current ucim account needs ssh public-key authentication to root on the kubernetes cluster nodes.

(kubic) ucim@ucim-manager:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ucim/.ssh/id_rsa):
Created directory '/home/ucim/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ucim/.ssh/id_rsa.
Your public key has been saved in /home/ucim/.ssh/id_rsa.pub.
The key fingerprint is:
0d:e5:47:54:b6:e2:d7:b2:cd:56:99:9c:5f:94:c0:39 ucim@ucim-manager
The key's randomart image is:
+---[RSA 2048]----+
|          ..+o+  |
|         o . E...|
|        . . o o..|
|         o o ..o+|
|        S . . o=+|
|             . =+|
|              . =|
|               . |
|                 |
+-----------------+

Copy the generated key to every kubernetes cluster node; ssh-copy-id makes this easy.

(kubic) ucim@ucim-manager:~$ ssh-copy-id root@kube-master
The authenticity of host 'kube-master (10.1.1.118)' can't be established.
ECDSA key fingerprint is 25:70:88:75:c4:74:3d:d8:0d:fc:ee:b7:71:11:a0:a9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@kube-master's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@kube-master'"
and check to make sure that only the key(s) you wanted were added.
(kubic) ucim@ucim-manager:~$ ssh-copy-id root@kube-minion1
(kubic) ucim@ucim-manager:~$ ssh-copy-id root@kube-minion2

Now verify that each cluster node can be reached without a password prompt (a quick loop is sketched below).

(kubic) ucim@ucim-manager:~$ ssh root@kube-master
(kubic) ucim@ucim-manager:~$ ssh root@kube-minion1
(kubic) ucim@ucim-manager:~$ ssh root@kube-minion2
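
A quick loop that should print each hostname without a password prompt:

(kubic) ucim@ucim-manager:~$ for h in kube-master kube-minion1 kube-minion2; do ssh root@$h hostname; done
kube-master
kube-minion1
kube-minion2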

Edit hosts, the inventory file ansible will use. IPs would work, but since the hostnames are already registered in /etc/hosts, use hostnames as below.

(kubic) ucim@ucim-manager:~/secure-k8s-cent7$  vi hosts
[master]
kube-master    ansible_connection=ssh  ansible_user=root

Edit the variables used by the ansible playbook. Set master_ip, etcd_ip and registry_ip to kube-master's service IP. As the comment says, kubic_password can be generated with the mkpasswd -m sha-512 command (an example follows the file).

(kubic) ucim@ucim-manager:~/secure-k8s-cent7$ vi group_vars/all
---
timezone: Asia/Seoul
master_ip: 192.168.0.112
master_port: 6443
etcd_ip: 192.168.0.112
etcd_port: 2379
etcd_key: /kubic/network
kubic_user: ucim
# Create hash password with 'mkpasswd -m sha-512' from whois debian package.
kubic_password: $6$ZMK/0qGRjbC$39Y3WlDA6VflZ79ICgwDY7H3AvpYUHUUE4IY2ijIKi7E3l1NaFJjuEBch143N5/dwuJDfSWKtfu4kDh4ilNV8/
registry_ip: 192.168.0.112
registry_port: 5000
...
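
For example, kubic_password can be regenerated with mkpasswd from the whois package installed earlier; the plaintext is read interactively and the printed hash is what goes into kubic_password:

(kubic) ucim@ucim-manager:~/secure-k8s-cent7$ mkpasswd -m sha-512
Password: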

Delete the /home/ucim/secure-k8s-cent7/roles/docker/vars directory. The variables there duplicate the ones in /home/ucim/secure-k8s-cent7/group_vars/{master, minion}. (This will be recommitted upstream later.)

(kubic) ucim@ucim-manager:~/secure-k8s-cent7/roles/docker$ rm -rf vars

Edit the /home/ucim/secure-k8s-cent7/roles/minion/templates/kubelet.j2 file so that the minions register by hostname: comment the line out to use the hostname, or leave it as-is to use the IP address.

# You may leave this blank to use the actual hostname
- KUBELET_HOSTNAME="--hostname-override={{ ansible_default_ipv4.address }}"
+ #KUBELET_HOSTNAME="--hostname-override={{ ansible_default_ipv4.address }}"

kubernetes Cluster Settings

kubernetes master settings

(kubic) ucim@ucim-manager:~$ vi /home/ucim/secure-k8s-cent7/group_vars/master
---
bash_aliases:
      - { name: e, value: "etcdctl --endpoints {{ etcd_ip }}:2379"}
      - { name: kube, value: "kubectl -s {{ master_ip }}:8080" }
      - { name: watch, value: 'watch ' }
- ...
+ docker_devs: /dev/vda
+ docker_devs_partno: 4
+ ...

Configure the LVM partition that was reserved as docker storage during OS installation.

  • docker_devs : with KVM (virtio disks) the device is /dev/vda.
  • docker_devs_partno : use partition number 4 (the LVM partition).

The following is kube-master's partition table as shown by the parted -l command.

[root@kube-master]~# parted -l
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
 1      1049kB  5370MB  5369MB  primary  ext4            boot
 2      5370MB  6443MB  1074MB  primary  linux-swap(v1)
 3      6443MB  7517MB  1074MB  primary  ext4
 4      7517MB  10.7GB  3220MB  primary                  lvm

kubernetes minion settings

(kubic) ucim@ucim-manager:~$ cat /home/ucim/secure-k8s-cent7/group_vars/minion
docker_devs: /dev/vda
docker_devs_partno: 4

The same applies here: the image was copied with virt-clone, so it is identical to the master. If you installed differently, adjust these values to match.

The following is kube-minion1's partition table as shown by the parted -l command.

[root@kube-minion1 ~]# parted -l
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
 1      1049kB  5370MB  5369MB  primary  ext4            boot
 2      5370MB  6443MB  1074MB  primary  linux-swap(v1)
 3      6443MB  7517MB  1074MB  primary  ext4
 4      7517MB  10.7GB  3220MB  primary                  lvm

Creating a Container for Certificate Generation

Before running the ansible playbook, generate the two sets of certificates (etcd and kubernetes). We generate them inside a temporary container created from the golang image. The volume option (-v) maps the host's /tmp to the container's /out.

ucim@ucim-manager:~$ docker run -it --rm -v /tmp:/out golang /bin/bash
Unable to find image 'golang:latest' locally
latest: Pulling from library/golang
10a267c67f42: Pull complete
fb5937da9414: Pull complete
9021b2326a1e: Pull complete
96109dbc0c87: Pull complete
b01dfb81dcfe: Pull complete
2887fcab405b: Pull complete
42bcf38edfe0: Pull complete
Digest: sha256:51f988b1a86f528c2e40681175088b5312b96bba9bea0f05bdb7ab504425c52d
Status: Downloaded newer image for golang:latest

You should now have a shell inside the container. Fetch the etcd project; the etcd git repository ships a tool that makes TLS setup easy.

root@c4d13672d398:/go# git clone https://github.com/coreos/etcd
Cloning into 'etcd'...
remote: Counting objects: 62680, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 62680 (delta 0), reused 0 (delta 0), pack-reused 62677
Receiving objects: 100% (62680/62680), 30.63 MiB | 2.87 MiB/s, done.
Resolving deltas: 100% (39062/39062), done.
Checking connectivity... done.

Install the vim package to edit the configuration files.

root@c4d13672d398:/go# apt update
root@c4d13672d398:/go# apt install -y vim

Generating the TLS Certificates

etcd certificate configuration

Create the /go/etcd/hack/tls-setup/etcd-config/ca-config.json file.

root@c4d13672d398:/go# cd etcd/hack/tls-setup/
root@c4d13672d398:/go/etcd/hack/tls-setup# mkdir etcd-config
root@c4d13672d398:/go/etcd/hack/tls-setup# vi etcd-config/ca-config.json
{
  "signing": {
    "default": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
    }
  }
}

The certificate validity (expiry) is set generously to 10 years, i.e. 24 h × 365 d × 10 = 87600 hours.

Create the /go/etcd/hack/tls-setup/etcd-config/ca-csr.json file.

root@c4d13672d398:/go/etcd/hack/tls-setup# vi etcd-config/ca-csr.json
{
  "CN": "K8S ETCD CA",
  "key": {
    "algo": "ecdsa",
    "size": 384
  },
  "names": [
    {
      "O": "iorchard",
      "OU": "iorchard test",
      "L": "Gangnam-gu",
      "ST": "Seoul",
      "C": "KR"
    }
  ]
}

Set the certificate's O (Organization), OU (Organizational Unit), L (Locality), ST (State) and C (Country) values as appropriate.

Create the /go/etcd/hack/tls-setup/etcd-config/server.json file.

Item Description
192.168.0.112 service IP of the kubernetes master
10.1.1.112 internal IP of the kubernetes master
kube-master hostname of the kubernetes master
root@c4d13672d398:/go/etcd/hack/tls-setup# vi etcd-config/server.json
{
  "CN": "K8S ETCD Server",
  "hosts": [
    "192.168.0.112",
    "10.1.1.112",
    "kube-master",
    "127.0.0.1",
    "localhost"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 384
  },
  "names": [
    {
      "O": "iorchard",
      "OU": "iorchard test",
      "L": "Gangnam-gu"
    }
  ]
}

Create the /go/etcd/hack/tls-setup/etcd-config/client.json file.

root@c4d13672d398:/go/etcd/hack/tls-setup/etcd-config# vi client.json
{
  "CN": "K8S Client",
  "hosts": [
    "127.0.0.1",
    "localhost"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 384
  },
  "names": [
    {
      "O": "iorchard",
      "OU": "iorchard test",
      "L": "Gangnam-gu"
    }
  ]
}

In total, the four files below were created.

root@c4d13672d398:/go/etcd/hack/tls-setup/etcd-config# ls
ca-config.json  ca-csr.json  client.json  server.json
Kubernetes certificate configuration

First create a k8s-config directory under /go/etcd/hack/tls-setup. From here, proceed exactly as for the etcd certificates.

root@c4d13672d398:/go/etcd/hack/tls-setup# mkdir k8s-config
root@c4d13672d398:/go/etcd/hack/tls-setup# cd k8s-config
root@c4d13672d398:/go/etcd/hack/tls-setup/k8s-config# vi ca-config.json
{
  "signing": {
    "default": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
    }
  }
}
root@c4d13672d398:/go/etcd/hack/tls-setup/k8s-config# vi ca-csr.json
{
  "CN": "K8S CA",
  "key": {
    "algo": "ecdsa",
    "size": 384
  },
  "names": [
    {
        "O": "iorchard",
        "OU": "iorchard test",
        "L": "Gangnam-gu",
        "ST": "Seoul",
        "C": "KR"
    }
  ]
}

Looking at the host definitions in server.json below, they include kubernetes.default.svc and 10.24.0.1 in addition to the internal/external IPs and the hostname. kubedns, one of the kubernetes companion services, records kubernetes.default.svc as the master's IP, so that name must be included in the certificate. The ansible playbook also defines the kubernetes cluster IP range, and the first IP of that range always belongs to the master: 10.24.0.1 (always use this IP). That IP must therefore be in the certificate as well (a verification sketch appears at the end of this subsection).

root@c4d13672d398:/go/etcd/hack/tls-setup/k8s-config# vi server.json
{
  "CN": "K8S Master",
  "hosts": [
    "192.168.0.111",
    "10.1.1.111",
    "10.24.0.1",
    "kube-master",
    "kubernetes.default.svc",
    "127.0.0.1",
    "localhost"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 384
  },
  "names": [
    {
      "O": "iorchard",
      "OU": "iorchard test",
      "L": "Gangnam-gu"
    }
  ]
}
root@c4d13672d398:/go/etcd/hack/tls-setup/k8s-config# vi client.json
{
  "CN": "K8S Client",
  "hosts": [
    "127.0.0.1",
    "localhost"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 384
  },
  "names": [
    {
      "O": "iorchard",
      "OU": "iorchard test",
      "L": "Gangnam-gu"
    }
  ]
}

Four json files were likewise created for kubernetes.

root@c4d13672d398:/go/etcd/hack/tls-setup/k8s-config# ls
ca-config.json  ca-csr.json  client.json  server.json
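
For reference, once the cluster is installed (after the ansible runs later in this manual), the assumption that the apiserver takes the first IP of the cluster range can be verified on kube-master; a sketch, with AGE illustrative:

[ucim@kube-master root]$ kubectl get svc kubernetes
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.24.0.1    <none>        443/TCP   1d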

Running the Certificate Generation

Now generate the certificates from the files created above.

Edit /go/etcd/hack/tls-setup/Makefile as below. Watch the indentation: if you copy and paste this content, the indentation will likely arrive as space characters, but Makefile recipes must be indented with tab characters for the certificates to be generated without errors (a quick check follows the Makefile).

root@c4d13672d398:/go/etcd/hack/tls-setup# vi Makefile

.PHONY: cfssl etcd-ca etcd-req k8s-ca k8s-req clean

CFSSL   = @env PATH=$(GOPATH)/bin:$(PATH) cfssl
JSON    = env PATH=$(GOPATH)/bin:$(PATH) cfssljson

all: cfssl ca req

cfssl:
        go get -u -tags nopkcs11 github.com/cloudflare/cfssl/cmd/cfssl
        go get -u github.com/cloudflare/cfssl/cmd/cfssljson
        go get -u github.com/mattn/goreman

etcd-ca:
        mkdir -p etcd-certs
        $(CFSSL) gencert -initca etcd-config/ca-csr.json | $(JSON) -bare etcd-certs/ca

etcd-req:
        $(CFSSL) gencert \
        -ca etcd-certs/ca.pem \
        -ca-key etcd-certs/ca-key.pem \
        -config etcd-config/ca-config.json \
        etcd-config/server.json | $(JSON) -bare etcd-certs/server
        $(CFSSL) gencert \
        -ca etcd-certs/ca.pem \
        -ca-key etcd-certs/ca-key.pem \
        -config etcd-config/ca-config.json \
        etcd-config/client.json | $(JSON) -bare etcd-certs/client

k8s-ca:
        mkdir -p k8s-certs
        $(CFSSL) gencert -initca k8s-config/ca-csr.json | $(JSON) -bare k8s-certs/ca

k8s-req:
        $(CFSSL) gencert \
        -ca k8s-certs/ca.pem \
        -ca-key k8s-certs/ca-key.pem \
        -config k8s-config/ca-config.json \
        k8s-config/server.json | $(JSON) -bare k8s-certs/server
        $(CFSSL) gencert \
        -ca k8s-certs/ca.pem \
        -ca-key k8s-certs/ca-key.pem \
        -config k8s-config/ca-config.json \
        k8s-config/client.json | $(JSON) -bare k8s-certs/client

clean:
        rm -rf etcd-certs
        rm -rf k8s-certs
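
If in doubt, the indentation can be checked with cat -A, which prints tab characters as ^I (every recipe line must start with ^I):

root@c4d13672d398:/go/etcd/hack/tls-setup# cat -A Makefile | grep -m1 'go get'
^Igo get -u -tags nopkcs11 github.com/cloudflare/cfssl/cmd/cfssl$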

Run the targets one at a time.

root@c4d13672d398:/go/etcd/hack/tls-setup# make cfssl
go get -u -tags nopkcs11 github.com/cloudflare/cfssl/cmd/cfssl
go get -u github.com/cloudflare/cfssl/cmd/cfssljson
go get -u github.com/mattn/goreman

Generate the etcd CA certificate.

root@c4d13672d398:/go/etcd/hack/tls-setup# make etcd-ca
mkdir -p etcd-certs
2017/06/01 08:38:02 [INFO] generating a new CA key and certificate from CSR
2017/06/01 08:38:02 [INFO] generate received request
2017/06/01 08:38:02 [INFO] received CSR
2017/06/01 08:38:02 [INFO] generating key: ecdsa-384
2017/06/01 08:38:02 [INFO] encoded CSR
2017/06/01 08:38:02 [INFO] signed certificate with serial number 278675960924285405009480117115574280739645015078

Using the generated CA certificate, create the etcd server/client certificates.

root@c4d13672d398:/go/etcd/hack/tls-setup# make etcd-req
2017/06/01 08:38:19 [INFO] generate received request
2017/06/01 08:38:19 [INFO] received CSR
2017/06/01 08:38:19 [INFO] generating key: ecdsa-384
2017/06/01 08:38:19 [INFO] encoded CSR
2017/06/01 08:38:19 [INFO] signed certificate with serial number 558724927463365828447249624112465740215958220873
2017/06/01 08:38:19 [INFO] generate received request
2017/06/01 08:38:19 [INFO] received CSR
2017/06/01 08:38:19 [INFO] generating key: ecdsa-384
2017/06/01 08:38:19 [INFO] encoded CSR
2017/06/01 08:38:19 [INFO] signed certificate with serial number 26199556883314088991407867264816879122388222414

Generate the kubernetes CA certificate.

root@c4d13672d398:/go/etcd/hack/tls-setup# make k8s-ca
mkdir -p k8s-certs
2017/06/01 08:38:32 [INFO] generating a new CA key and certificate from CSR
2017/06/01 08:38:32 [INFO] generate received request
2017/06/01 08:38:32 [INFO] received CSR
2017/06/01 08:38:32 [INFO] generating key: ecdsa-384
2017/06/01 08:38:32 [INFO] encoded CSR
2017/06/01 08:38:33 [INFO] signed certificate with serial number 31377901701311805961211304387665127051858748615

Using the generated CA certificate, create the kubernetes master/client certificates.

root@c4d13672d398:/go/etcd/hack/tls-setup# make k8s-req
2017/06/01 08:38:47 [INFO] generate received request
2017/06/01 08:38:47 [INFO] received CSR
2017/06/01 08:38:47 [INFO] generating key: ecdsa-384
2017/06/01 08:38:47 [INFO] encoded CSR
2017/06/01 08:38:47 [INFO] signed certificate with serial number 75813357150273699708100669609990020134422432783
2017/06/01 08:38:47 [INFO] generate received request
2017/06/01 08:38:47 [INFO] received CSR
2017/06/01 08:38:47 [INFO] generating key: ecdsa-384
2017/06/01 08:38:47 [INFO] encoded CSR
2017/06/01 08:38:47 [INFO] signed certificate with serial number 620880116029048902977824627724139589416964585262

The etcd certificates are in etcd-certs/ and the kubernetes certificates in k8s-certs/.

The following is etcd-certs.

root@c4d13672d398:/go/etcd/hack/tls-setup/etcd-certs# ls -lh
total 36K
-rw------- 1 root root  288 Jun  1 08:38 ca-key.pem
-rw-r--r-- 1 root root  570 Jun  1 08:38 ca.csr
-rw-r--r-- 1 root root  895 Jun  1 08:38 ca.pem
-rw------- 1 root root  288 Jun  1 08:38 client-key.pem
-rw-r--r-- 1 root root  590 Jun  1 08:38 client.csr
-rw-r--r-- 1 root root  977 Jun  1 08:38 client.pem
-rw------- 1 root root  288 Jun  1 08:38 server-key.pem
-rw-r--r-- 1 root root  643 Jun  1 08:38 server.csr
-rw-r--r-- 1 root root 1.1K Jun  1 08:38 server.pem

The following is k8s-certs.

root@c4d13672d398:/go/etcd/hack/tls-setup/k8s-certs# ls -lh
total 36K
-rw------- 1 root root  288 Jun  1 08:38 ca-key.pem
-rw-r--r-- 1 root root  562 Jun  1 08:38 ca.csr
-rw-r--r-- 1 root root  879 Jun  1 08:38 ca.pem
-rw------- 1 root root  288 Jun  1 08:38 client-key.pem
-rw-r--r-- 1 root root  590 Jun  1 08:38 client.csr
-rw-r--r-- 1 root root  969 Jun  1 08:38 client.pem
-rw------- 1 root root  288 Jun  1 08:38 server-key.pem
-rw-r--r-- 1 root root  676 Jun  1 08:38 server.csr
-rw-r--r-- 1 root root 1.1K Jun  1 08:38 server.pem

Copying the Certificates

Now move etcd-certs and k8s-certs out of the container.

root@c4d13672d398:/go/etcd/hack/tls-setup/k8s-certs# cd ../..
root@c4d13672d398:/go/etcd/hack# cp -a tls-setup/ /out
root@c4d13672d398:/go/etcd/hack# ls -lh /out
total 12K
drwxr-xr-x 2 root root 4.0K Jun  1 06:35 etcd-cert
drwxr-xr-x 7 root root 4.0K Jun  1 08:38 tls-setup
root@c4d13672d398:/go/etcd/hack# exit

When the copy is done, leave the container with exit. Since the container was started with --rm, it is removed automatically when the shell exits. The certificates copied to the container's /out can now be found under /tmp on the ucim manager.

root@ucim-manager:/home/ucim# tree -h /tmp
/tmp
├── [4.0K]  etcd-cert
├── [4.0K]  tls-setup
│   ├── [1.3K]  Makefile
│   ├── [2.2K]  Procfile
│   ├── [ 954]  README.md
│   ├── [4.0K]  config
│   │   ├── [ 203]  ca-config.json
│   │   ├── [ 280]  ca-csr.json
│   │   └── [ 218]  req-csr.json
│   ├── [4.0K]  etcd-certs
│   │   ├── [ 288]  ca-key.pem
│   │   ├── [ 570]  ca.csr
│   │   ├── [ 895]  ca.pem
│   │   ├── [ 288]  client-key.pem
│   │   ├── [ 590]  client.csr
│   │   ├── [ 977]  client.pem
│   │   ├── [ 288]  server-key.pem
│   │   ├── [ 643]  server.csr
│   │   └── [1.0K]  server.pem
│   ├── [4.0K]  etcd-config
│   │   ├── [ 204]  ca-config.json
│   │   ├── [ 223]  ca-csr.json
│   │   ├── [ 243]  client.json
│   │   └── [ 314]  server.json
│   ├── [4.0K]  k8s-certs
│   │   ├── [ 288]  ca-key.pem
│   │   ├── [ 562]  ca.csr
│   │   ├── [ 879]  ca.pem
│   │   ├── [ 288]  client-key.pem
│   │   ├── [ 590]  client.csr
│   │   ├── [ 969]  client.pem
│   │   ├── [ 288]  server-key.pem
│   │   ├── [ 676]  server.csr
│   │   └── [1.0K]  server.pem
│   └── [4.0K]  k8s-config
│       ├── [ 204]  ca-config.json
│       ├── [ 228]  ca-csr.json
│       ├── [ 235]  client.json
│       └── [ 349]  server.json

Now copy each certificate into the files directory of the matching ansible playbook role. Run the commands below on the ucim-manager server as the ucim user.

ucim@ucim-manager:~$ sudo cp /tmp/tls-setup/etcd-certs/*.pem ~ucim/secure-k8s-cent7/roles/etcd/files/etcd-certs/
ucim@ucim-manager:~$ sudo cp /tmp/tls-setup/k8s-certs/*.pem ~ucim/secure-k8s-cent7/roles/master/files/k8s-certs/

One more task for the kube-master setup is token generation. Generate a random string and write tokens.csv as below.

ucim@ucim-manager:~$ TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 |tr -d '=+/' |dd bs=32 count=1 2>/dev/null)
ucim@ucim-manager:~$ echo "${TOKEN},kubelet,kubelet" > ~ucim/secure-k8s-cent7/roles/master/files/k8s-certs/tokens.csv

Copying the kubernetes Binaries

Download the latest kubernetes server binaries (kubernetes-server-linux-amd64.tar.gz) and place them in the playbook's files/ directories.

First download kubernetes-server-linux-amd64.tar.gz.

ucim@ucim-manager:~ $ wget https://dl.k8s.io/v1.6.4/kubernetes-server-linux-amd64.tar.gz
--2017-06-01 18:32:34--  https://dl.k8s.io/v1.6.4/kubernetes-server-linux-amd64.tar.gz
Resolving dl.k8s.io (dl.k8s.io)... 23.236.58.218
Connecting to dl.k8s.io (dl.k8s.io)|23.236.58.218|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://storage.googleapis.com/kubernetes-release/release/v1.6.4/kubernetes-server-linux-amd64.tar.gz [following]
--2017-06-01 18:32:35--  https://storage.googleapis.com/kubernetes-release/release/v1.6.4/kubernetes-server-linux-amd64.tar.gz
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.26.48, 2404:6800:4004:81b::2010
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.26.48|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 363978786 (347M) [application/x-tar]
Saving to: ‘kubernetes-server-linux-amd64.tar.gz’

kubernetes-server-linux-amd64.tar.gz 100%[========================================================================>] 347.12M  11.3MB/s   in 32s

2017-06-01 18:33:07 (10.8 MB/s) - ‘kubernetes-server-linux-amd64.tar.gz’ saved [363978786/363978786]

Extract the archive.

ucim@ucim-manager:~$ tar xvfz kubernetes-server-linux-amd64.tar.gz
ucim@ucim-manager:~$ ls
kubernetes  kubernetes-server-linux-amd64.tar.gz  secure-k8s-cent7

Now copy the binary files.

The binaries to copy into ~ucim/secure-k8s-cent7/roles/master/files/ are:

  • kube-controller-manager
  • kube-apiserver
  • kube-scheduler
  • kubectl
ucim@ucim-manager:~$ cd kubernetes/server/bin
ucim@ucim-manager:~/kubernetes/server/bin$ cp kube-controller-manager ~ucim/secure-k8s-cent7/roles/master/files/
ucim@ucim-manager:~/kubernetes/server/bin$ cp kube-apiserver ~ucim/secure-k8s-cent7/roles/master/files/
ucim@ucim-manager:~/kubernetes/server/bin$ cp kube-scheduler ~ucim/secure-k8s-cent7/roles/master/files/
ucim@ucim-manager:~/kubernetes/server/bin$ cp kubectl ~ucim/secure-k8s-cent7/roles/master/files/

The binaries to copy into /home/ucim/secure-k8s-cent7/roles/minion/files/ are:

  • kubelet
  • kube-proxy
ucim@ucim-manager:~/kubernetes/server/bin$ cp kubelet ~ucim/secure-k8s-cent7/roles/minion/files/
ucim@ucim-manager:~/kubernetes/server/bin$ cp kube-proxy ~ucim/secure-k8s-cent7/roles/minion/files/

Change ownership of everything to the user who will run the playbook.

ucim@ucim-manager:~$ sudo chown -R ucim:ucim ~ucim/secure-k8s-cent7/roles/

Everything is now ready for ansible.

Installing and Running the kubernetes master

Run the ansible-playbook with the command below, limiting it to the master.

(kubic) ucim@ucim-manager:~/secure-k8s-cent7$ ansible-playbook -i hosts site.yml --limit=master

PLAY [Apply common configuration to all nodes] *********************************

TASK [setup] *******************************************************************
ok: [kube-master]
(..snip..)
PLAY RECAP *********************************************************************
kube-master       : ok=54   changed=22   unreachable=0    failed=0

With failed=0, everything went through. Log in to kube-master and check.

[ucim@kube-master root]$ kubectl get cs
NAME                 STATUS      MESSAGE                                                                     ERROR
scheduler            Healthy     ok
controller-manager   Healthy     ok
etcd-0               Unhealthy   Get https://192.168.0.112:2379/health: remote error: tls: bad certificate

etcd-0 is reported unhealthy. This is a kubernetes bug: it handles the https health check incorrectly, and will presumably be fixed at some point. The cluster as a whole still works fine; etcd itself can be checked directly, as sketched below.
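
For reference, etcd's health can be confirmed with the client certificates deployed by the playbook (a sketch using the etcdctl v2 flags and the /etc/ssl/etcd paths referenced later in this manual):

[ucim@kube-master root]$ etcdctl --endpoints https://192.168.0.112:2379 \
      --ca-file /etc/ssl/etcd/ca.pem \
      --cert-file /etc/ssl/etcd/client.pem \
      --key-file /etc/ssl/etcd/client-key.pem \
      cluster-health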

The ansible install also creates two extra containers on kube-master: a registry for managing internal docker images, and etcd, the key-value DB used by kubernetes and ucim. Check that both containers are running.

[ucim@kube-master root]$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                      NAMES
245a68f57cb7        registry:2            "/entrypoint.sh /etc/"   2 minutes ago          Up 5 minutes           0.0.0.0:5000->5000/tcp                                     registry
eee08bdd2098        quay.io/coreos/etcd   "etcd --name etcd --t"   2 minutes ago          Up 5 minutes           0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp   etcd

Installing and Running the kubernetes minions

With the master installed correctly, now install the minions.

(kubic) ucim@ucim-manager:~/secure-k8s-cent7$ vi hosts
[master]
kube-master    ansible_connection=ssh  ansible_user=root

+ [minion]
+ kube-minion1  ansible_connection=ssh  ansible_user=root
+ kube-minion2  ansible_connection=ssh  ansible_user=root
(kubic) ucim@ucim-manager:~/secure-k8s-cent7$ ansible-playbook -i hosts site.yml --limit=minion

PLAY [Apply common configuration to all nodes] *********************************

TASK [setup] *******************************************************************
ok: [kube-minion1]
ok: [kube-minion2]
(..snip..)
PLAY RECAP *********************************************************************
kube-minion1     : ok=41   changed=28   unreachable=0    failed=0
kube-minion2     : ok=41   changed=28   unreachable=0    failed=0

Whether the installation succeeded can be checked from kube-master.

[ucim@kube-master root]$ kubectl get no
NAME                     STATUS    AGE       VERSION
kube-minion1   Ready     21m       v1.6.4
kube-minion2   Ready     22m       v1.6.4

Copy the etcdctl/kubectl binaries from kube-master onto the UCIM VM (ucim-manager).

root@ucim-manager:~# scp root@kube-master:/usr/local/bin/etcdctl /usr/local/bin/
root@kube-master's password:
etcdctl                                                                                   100%   13MB  13.3MB/s   00:01
root@ucim-manager:~# scp root@kube-master:/usr/bin/kubectl /usr/local/bin/
root@kube-master's password:
kubectl                                                                                   100%   67MB  67.4MB/s   00:01

The kubernetes installation is now complete.

Installing UCIM

Now install ucim itself. First fetch the source.

(kubic) ucim@ucim-manager:~$ git clone https://git.iorchard.co.kr/jijisa/kubic.git
Cloning into 'kubic'...
Username for 'https://git.iorchard.co.kr': westporch
Password for 'https://[email protected]':
remote: Counting objects: 463, done.
remote: Compressing objects: 100% (341/341), done.
remote: Total 463 (delta 208), reused 228 (delta 90)
Receiving objects: 100% (463/463), 149.71 KiB | 0 bytes/s, done.
Resolving deltas: 100% (208/208), done.
Checking connectivity... done.
(kubic) ucim@ucim-manager:~$ cd kubic/kubic
(kubic) ucim@ucim-manager:~/kubic/kubic$ cp config.py.dev config.py
(kubic) ucim@ucim-manager:~/kubic/kubic$ cp settings.py.dev settings.py

Generate the initial admin salt/password.

The admin password was set to test1234.

(kubic) ucim@ucim-manager:~/kubic/kubic$ cd ../scripts/
(kubic) ucim@ucim-manager:~/kubic/scripts$ python create_adminpw.py
Password:
salt: b'$2b$12$MqIRwYgyRxWzfxa1i0wbKe'
adminpw: b'$2b$12$MqIRwYgyRxWzfxa1i0wbKeAltdE7dRmSrdpif/WL1IsG28d3hkDKK'

Enter the printed salt and adminpw into config.py (a sanity check is sketched after the file).

(kubic) ucim@ucim-manager:~/kubic/scripts$ cd ../kubic/
(kubic) ucim@ucim-manager:~/kubic/kubic$ vi config.py
import os
from datetime import timedelta

# ADMIN_PW (bcrypt hashed pw)
# bcrypt.gensalt()
# bcrypt.hashpw("plaintext_password", "gensalt_output")
ADMIN_ID = 'admin'
- ADMIN_SALT = '$2b$12$HSUOXfXmbKMQtdIw2fyVqe'
+ ADMIN_SALT = '$2b$12$MqIRwYgyRxWzfxa1i0wbKe'
- ADMIN_PW = '$2b$12$HSUOXfXmbKMQtdIw2fyVqeDQ1gu2i4E/UCApZDSh9YW2Qp33DUaEi'
+ ADMIN_PW = '$2b$12$MqIRwYgyRxWzfxa1i0wbKeAltdE7dRmSrdpif/WL1IsG28d3hkDKK'
(..snip..)
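
The pair can be sanity-checked by re-hashing the plaintext with the salt; the output must equal ADMIN_PW (a sketch; the bcrypt package was installed into the virtualenv earlier):

(kubic) ucim@ucim-manager:~/kubic/kubic$ python -c 'import bcrypt; print(bcrypt.hashpw(b"test1234", b"$2b$12$MqIRwYgyRxWzfxa1i0wbKe"))'
b'$2b$12$MqIRwYgyRxWzfxa1i0wbKeAltdE7dRmSrdpif/WL1IsG28d3hkDKK'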

Generate a jwt key and enter it into config.py.

(kubic) ucim@ucim-manager:~/kubic/kubic$ cd ../scripts/
(kubic) ucim@ucim-manager:~/kubic/scripts$ python create_jwt_key.py
c74eb4bfe75faeea28e930b3a5ec7e0101dd6ac6586616d3
(kubic) ucim@ucim-manager:~/kubic/scripts$ cd ../kubic/
(kubic) ucim@ucim-manager:~/kubic/kubic$ vi config.py
import os
from datetime import timedelta

# ADMIN_PW (bcrypt hashed pw)
# bcrypt.gensalt()
# bcrypt.hashpw("plaintext_password", "gensalt_output")
ADMIN_ID = 'admin'
ADMIN_SALT = '$2b$12$MqIRwYgyRxWzfxa1i0wbKe'
ADMIN_PW = '$2b$12$MqIRwYgyRxWzfxa1i0wbKeAltdE7dRmSrdpif/WL1IsG28d3hkDKK'

# JWT secret key
# Create random key: binascii.hexlify(os.urandom(24)).decode()
- JWT_SECRET_KEY = '2292b81d9e8bf89b7f7ef5a8ae7f23a386fd0cd9c343c663'
+ JWT_SECRET_KEY = 'c74eb4bfe75faeea28e930b3a5ec7e0101dd6ac6586616d3'
(..snip..)

In /home/ucim/kubic/kubic/config.py, adjust everything from the first line down to the comment (Do Not Edit Below!!!) to match your servers.

(kubic) ucim@ucim-manager:~/kubic/kubic$ vi config.py
import os                                                                                                                                   [28/1885]
from datetime import timedelta

# ADMIN_PW (bcrypt hashed pw)
# bcrypt.gensalt()
# bcrypt.hashpw("plaintext_password", "gensalt_output")
ADMIN_ID = 'admin'
ADMIN_SALT = '$2b$12$MqIRwYgyRxWzfxa1i0wbKe'
ADMIN_PW = '$2b$12$MqIRwYgyRxWzfxa1i0wbKeAltdE7dRmSrdpif/WL1IsG28d3hkDKK'

# JWT secret key
# Create random key: binascii.hexlify(os.urandom(24)).decode()
JWT_SECRET_KEY = 'c74eb4bfe75faeea28e930b3a5ec7e0101dd6ac6586616d3'
JWT_ACCESS_TOKEN_EXPIRES = timedelta(minutes=60)
#JWT_ACCESS_TOKEN_EXPIRES = timedelta(days=1)
#JWT_ACCESS_TOKEN_EXPIRES is 15 minutes default.
#JWT_REFRESH_TOKEN_EXPIRES is 1 month default.

# Terminal
- KUBIC_USER = 'jijisa'
+ KUBIC_USER = 'ucim'
KUBICSHELL = 'kubicshell'
TERM_PORT_BEGIN = 40000
TERM_PORT_END = 40100
- MY_IP = '121.254.203.198'
+ MY_IP = '192.168.0.111'   # service IP of ucim-manager

# Grafana monitor
# set URL to kube-minion1's service IP
GF = {
-    'URL': 'http://121.254.203.198:3000',
+    'URL': 'http://192.168.0.113:3000',
    'CLUSTER_TRAFFIC': 32,
    'CLUSTER_MEM': 4,
    'CLUSTER_CPU': 6,
    'CLUSTER_FS': 7,
    'CLUSTER_MEM_USED': 9,
    'CLUSTER_MEM_TOTAL': 10,
    'CLUSTER_CPU_USED': 11,
    'CLUSTER_CPU_TOTAL': 12,
    'CLUSTER_FS_USED': 13,
    'CLUSTER_FS_TOTAL': 14,
}

# System images
SYSTEM_IMAGES = [{'name': 'bind'}]

#
## Do Not Edit Below!!!
#

grafana is the container that provides the monitoring visualization (graphs) used by ucim. It runs as a kubernetes pod, so it will land on one of the minion nodes. Here it is set up to run on kube-minion1, so minion1's service IP is configured.

Next, edit settings.py as well.

(kubic) ucim@ucim-manager:~/kubic/kubic$ vi settings.py

# Flask settings
-FLASK_SERVER_NAME = '192.168.0.130:5000'
+FLASK_SERVER_NAME = '192.168.0.111'
FLASK_HOST = '0.0.0.0'
-FLASK_PORT = 5000
+FLASK_PORT = 80
FLASK_DEBUG = True  # Do not use debug mode in production

# Flask-Restplus settings
RESTPLUS_SWAGGER_UI_DOC_EXPANSION = 'list'
RESTPLUS_VALIDATE = True
RESTPLUS_MASK_SWAGGER = False
RESTPLUS_ERROR_404_HELP = False

# ETCD settings
-ETCD_HOST = '10.1.1.131'
+ETCD_HOST = '10.1.1.112'
ETCD_PORT = 2379
ETCD_PROTO = 'https'
ETCD_PREFIX = '/kubic'

Since etcd runs on kube-master, its internal IP is configured here.

Fetch client.pem, client-key.pem and ca.pem from kube-master and store them.

(kubic) ucim@ucim-manager:/$ sudo mkdir /etc/ssl/etcd
(kubic) ucim@ucim-manager:/$ sudo scp root@kube-master:/etc/ssl/etcd/client*.pem /etc/ssl/etcd/
root@kube-master's password:
client-key.pem                                                                            100%  288     0.3KB/s   00:00
client.pem                                                                                100%  977     1.0KB/s   00:00
(kubic) ucim@ucim-manager:/$ sudo scp root@kube-master:/etc/ssl/etcd/ca.pem /etc/ssl/etcd/
root@kube-master's password:
ca.pem                                                                                    100%  891     0.9KB/s   00:00

Creating the Base docker Image

Create kubicshell, the base docker image ucim will use. It adds an ssh client for ssh connection tests, vim as an editor, and dnsutils for dns verification. Install further packages as your use case requires.

Create a Dockerfile for the image name defined in kubic/config.py.

(kubic) ucim@ucim-manager:~$ mkdir docker
(kubic) ucim@ucim-manager:~$ cd docker/
(kubic) ucim@ucim-manager:~/docker$ vi Dockerfile
FROM debian:jessie
MAINTAINER Heechul Kim <[email protected]>
LABEL version="1.0"
LABEL description="This is an image for common shell."
USER root
RUN apt-get update && \
   apt-get upgrade -y && \
   apt-get install -y openssh-client vim dnsutils && \
   apt-get clean && \
   rm -fr /var/lib/apt/lists/*

Now build the image from the Dockerfile.

(kubic) ucim@ucim-manager:~/docker$ sudo docker build -t kubicshell .
Sending build context to Docker daemon 2.048 kB
Step 1/6 : FROM debian:jessie
jessie: Pulling from library/debian
10a267c67f42: Already exists
Digest: sha256:476959f29a17423a24a17716e058352ff6fbf13d8389e4a561c8ccc758245937
Status: Downloaded newer image for debian:jessie
 ---> 3e83c23dba6a
(..snip..)
Processing triggers for libc-bin (2.19-18+deb8u9) ...
Processing triggers for sgml-base (1.26+nmu4) ...
 ---> ee5c436b2b0a
Removing intermediate container cd18553f9ae2
Successfully built ee5c436b2b0a

Check that the image was created.

(kubic) ucim@ucim-manager:~/docker$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
+ kubicshell          latest              ee5c436b2b0a        2 minutes ago       229 MB
golang              latest              a0c61f0b0796        8 days ago          699 MB
debian              jessie              3e83c23dba6a        3 weeks ago         124 MB

Installing shellinabox

(kubic) ucim@ucim-manager:~$ sudo apt install libssl-dev libpam0g-dev zlib1g-dev dh-autoreconf
(kubic) ucim@ucim-manager:~$ git clone https://github.com/shellinabox/shellinabox.git
Cloning into 'shellinabox'...
remote: Counting objects: 3061, done.
remote: Total 3061 (delta 0), reused 0 (delta 0), pack-reused 3061
Receiving objects: 100% (3061/3061), 4.30 MiB | 1.57 MiB/s, done.
Resolving deltas: 100% (2411/2411), done.
Checking connectivity... done.
(kubic) ucim@ucim-manager:~$ cd shellinabox/
(kubic) ucim@ucim-manager:~/shellinabox$ autoreconf -i
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
configure.ac:19: installing './compile'
configure.ac:24: installing './config.guess'
configure.ac:24: installing './config.sub'
configure.ac:17: installing './install-sh'
configure.ac:17: installing './missing'
Makefile.am: installing './INSTALL'
Makefile.am: installing './depcomp'
(kubic) ucim@ucim-manager:~/shellinabox$ ./configure && make
(..snip..)
sed -e "`sed -e 's/^#define *\([^ ]*\) *\(.*\)/\/^[^#]\/s\/\1\/\2 \\\\\/* \1 *\\\\\/\/g/' \
             -e t                                                     \
             -e d "demo/demo.jspp"`"                                              \
     -e "s/^#/\/\/ #/"                                                \
     -e "s/VERSION/\"2.20  (revision 98e6eeb)\"/g"                    \
     "demo/demo.jspp" >"demo/demo.js"
make[1]: Leaving directory '/home/ucim/shellinabox'
(kubic) ucim@ucim-manager:~/shellinabox$ sudo cp shellinaboxd /usr/local/bin/

Remove the shellinabox directory.

(kubic) ucim@ucim-manager:~/shellinabox$ cd ..
(kubic) ucim@ucim-manager:~/$ rm -rf shellinabox/

Creating Required Directories

(kubic) ucim@ucim-manager:~$ cd kubic/kubic
(kubic) ucim@ucim-manager:~/kubic/kubic$ mkdir -p log/{cmd,build} manifests

uwsgi Configuration

(kubic) ucim@ucim-manager:~/kubic/kubic$ sudo apt install -y uwsgi uwsgi-plugin-python3
(kubic) ucim@ucim-manager:~/kubic/kubic$ cd ../conf
(kubic) ucim@ucim-manager:~/kubic/conf$ sudo cp uwsgi_kubic /etc/uwsgi/apps-available/kubic.ini
(kubic) ucim@ucim-manager:~/kubic/conf$ sudo ln -s /etc/uwsgi/apps-available/kubic.ini \
> /etc/uwsgi/apps-enabled/kubic.ini
(kubic) ucim@ucim-manager:~/kubic/conf$ sudo cp /usr/share/uwsgi/conf/default.ini /etc/default/uwsgi_default.ini
(kubic) ucim@ucim-manager:~/kubic/conf$ sudo vi /etc/default/uwsgi_default.ini
(..snip..)

# set mode of created UNIX socket
- chmod-socket = 660
+ chmod-socket = 666    # nginx runs as www-data, so 666 is required
(..snip..)

# user identifier of uWSGI processes
- uid = www-data
+ uid = ucim

# group identifier of uWSGI processes
- gid = www-data
+ gid = ucim
(kubic) ucim@ucim-manager:~$ sudo vi /etc/default/uwsgi
# Defaults for uWSGI initscript
# sourced by /etc/init.d/uwsgi

# Run automatically at system startup?
RUN_AT_STARTUP=yes

# At startup VERBOSE value is setted in 'no'. So when user invokes
# uWSGI init.d script, no output is showed.
# It could be unexpected behaviour, because it is common practice for
# init.d script to ignore VERBOSE value.
# Here VERBOSE is overriden to conform such the practice.
VERBOSE=yes

# Should init.d script print configuration file names while marking progress of
# it's execution?
#
# If 'no', then init.d script prints one-character symbols instead file names.
#
# Printing confnames is quite informative, but could mess terminal output or
# be somewhat dangerous (as filename could contain arbitary characters).
# ASCII control characters in file names are replaced with '?' in init.d script
# output, nevertheless you were warned.
PRINT_CONFNAMES_IN_INITD_SCRIPT_OUTPUT=no

# init.d script starts instance of uWSGI daemon for each found user-created
# configuration file.
#
# Options from inherited configuration file are passed to each instance by
# default. They could be overrided (or extended) by user configuration file.
- INHERITED_CONFIG=/usr/share/uwsgi/conf/default.ini
+ INHERITED_CONFIG=/etc/default/uwsgi_default.ini
(kubic) ucim@ucim-manager:~/kubic/conf$ sudo systemctl restart uwsgi

Check that uwsgi's pid and socket files were created.

(kubic) ucim@ucim-manager:~/kubic/conf$ ls /run/uwsgi/app/kubic
pid socket

nginx Configuration

(kubic) ucim@ucim-manager:~/kubic/conf$ sudo apt install -y nginx
(kubic) ucim@ucim-manager:~/kubic/conf$ sudo cp nginx_kubic /etc/nginx/sites-available/kubic
(kubic) ucim@ucim-manager:/kubic/conf$ sudo vi /etc/nginx/sites-available/kubic
server {
    listen 80;
    server_name _;

-     root /home/<kubic_user>/kubic/web;
+     root /home/ucim/kubic/web;

    location /api { try_files $uri @kubic; }
    location @kubic {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/app/kubic/socket;
    }
}
(kubic) ucim@ucim-manager:~/kubic/conf$ sudo ln -sf /etc/nginx/sites-available/kubic /etc/nginx/sites-enabled/default
(kubic) ucim@ucim-manager:~/kubic/conf$ sudo systemctl restart nginx
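
If nginx fails to restart, the configuration can be validated first:

(kubic) ucim@ucim-manager:~/kubic/conf$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful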

Installing the Web Frontend

(kubic) ucim@ucim-manager:~/kubic/conf$ cd
(kubic) ucim@ucim-manager:~$ pip install nodeenv
Collecting nodeenv
  Downloading nodeenv-1.1.2.tar.gz
Installing collected packages: nodeenv
  Running setup.py install for nodeenv ... done
Successfully installed nodeenv-1.1.2
(kubic) ucim@ucim-manager:~$ nodeenv -p
 * Install prebuilt node (8.0.0) ..... done.
 * Appending data to /home/ucim/.envs/kubic/bin/activate
(kubic) ucim@ucim-manager:~$ cd kubic/quasar
(kubic) ucim@ucim-manager:~/kubic/quasar$ npm install -g quasar-cli
(kubic) ucim@ucim-manager:~/kubic/quasar$ quasar init kubic
(kubic) ucim@ucim-manager:~/kubic/quasar$ cp -a src kubic/
(kubic) ucim@ucim-manager:~/kubic/quasar$ cp kubic/src/config.js.dev kubic/src/config.js
(kubic) ucim@ucim-manager:~/kubic/quasar$ cd kubic
(kubic) ucim@ucim-manager:~/kubic/quasar/kubic$ npm install

> fsevents@… install ~/kubic/quasar/kubic/node_modules/fsevents
> node install

npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN kubic@… No repository field.
npm WARN kubic@… No license field.

added 848 packages in 135.719s
(kubic) ucim@ucim-manager:~/kubic/quasar/kubic$ npm install vue-cookie vue-resource vuelidate vuex --save
npm WARN kubic@… No repository field.
npm WARN kubic@… No license field.

added 16 packages in 18.223s

Change the IP to the ucim server's service IP, and change the NAME to UCIM. (The interval constants are in milliseconds: POLLING_INTERVAL is 10 s and REFRESH_INTERVAL is 25 min.)

(kubic) ucim@ucim-manager:~/kubic/quasar/kubic$ vi src/config.js
- export const API_URL = 'http://121.254.203.198:5000/api'
+ export const API_URL = 'http://192.168.0.111/api'
export const POLLING_INTERVAL = 10 * 1000
export const REFRESH_INTERVAL = 25 * 60 * 1000
- export const KUBIC_NAME = 'KUBIC'
+ export const KUBIC_NAME = 'UCIM'
(kubic) ucim@ucim-manager:~/kubic/quasar/kubic$ quasar build


> kubic@… build ~/kubic/quasar/kubic
> node build/script.build.js

 WARNING!
 Do NOT use VueRouter's "history" mode if
 building for Cordova or Electron.

 Cleaned build artifacts.

 Building Quasar App with "mat" theme...

 [==================  ] 91% (additional asset processing)
Starting to optimize CSS...
Processing app.acc2569a7047c89ef77b22e0e89413a8.css...
Processed app.acc2569a7047c89ef77b22e0e89413a8.css, before: 250420, after: 250420, ratio: 100%
Build completed in 36.013s

Hash: 104447a0527934c43022
Version: webpack 2.6.1
Time: 36021ms
                                   Asset       Size  Chunks                    Chunk Names
            js/0.dd79c89ffff38718f45a.js    3.47 kB       0  [emitted]
fonts/MaterialIcons-Regular.012cf6a.woff    57.6 kB          [emitted]
         fonts/Roboto-Light.37fbbba.woff    89.2 kB          [emitted]
        fonts/Roboto-Medium.303ded6.woff    89.7 kB          [emitted]
       fonts/Roboto-Regular.081b11e.woff    89.4 kB          [emitted]
          fonts/Roboto-Thin.90d3804.woff    87.8 kB          [emitted]
             img/quasar-logo.2f3aed5.png    50.4 kB          [emitted]
          fonts/Roboto-Bold.ad140ff.woff    89.2 kB          [emitted]
            js/1.d12cfaf0a8ad8e250adf.js    1.76 kB       1  [emitted]
                            js/vendor.js     519 kB       2  [emitted]  [big]  vendor
                               js/app.js    4.26 kB       3  [emitted]         app
                          js/manifest.js     1.5 kB       4  [emitted]         manifest
app.acc2569a7047c89ef77b22e0e89413a8.css     250 kB       3  [emitted]  [big]  app
                              index.html  634 bytes          [emitted]

 Purifying app.acc2569a7047c89ef77b22e0e89413a8.css...
 * Reduced size by 49.21%, from 244.55kb to 124.20kb.

 Build complete with "mat" theme in "/dist" folder.

 Built files are meant to be served over an HTTP server.
 Opening index.html over file:// won't work.
(kubic) ucim@ucim-manager:~/kubic/quasar/kubic$ cp -r dist ~/kubic/web

Now open a web browser and connect to ucim-manager; it is reachable at the service IP, http://192.168.0.111/. You should be able to log in as the admin user with the password configured earlier. At this point only login works, though; nothing else will function yet, as there is still more to install and configure.
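
A browser-free way to confirm the frontend is being served is a quick header check; a 200 response means nginx is serving the built index.html.

(kubic) ucim@ucim-manager:~$ curl -sI http://192.168.0.111/ | head -n 1
HTTP/1.1 200 OK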

Installing the kubernetes add-ons

Now it is time to install the kubernetes add-ons.

  • kubedns: provides the DNS service inside the kubernetes cluster
  • prometheus: monitors resource usage across the kubernetes cluster
  • grafana: visualizes the metrics collected by prometheus

These three will run as pods on kubernetes, so copy the files needed to launch them over to kube-master.

(kubic) ucim@ucim-manager:~/kubic$ scp -pr conf/ ucim@kube-master:~

On the kube-master server, change kubic to ucim (your kubic_user) in /home/ucim/conf/kubedns/dep_kubedns.yml; a substitution one-liner follows the listing below. From here on, work on the kube-master server, not on ucim-manager.

[ucim@kube-master ~]$ cd conf/kubedns
[ucim@kube-master kubedns]$ cat dep_kubedns.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.9
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=ucim.local.
        - --dns-port=10053
        - --config-map=kube-dns
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        - --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.10.0
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.ucim.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.ucim.local,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
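
If your copy of the file still contains the kubic defaults, a blanket substitution such as the one below applies the rename. This is a convenience sketch; review the result afterward in case "kubic" appears in strings that should not change.

[ucim@kube-master kubedns]$ sed -i 's/kubic/ucim/g' dep_kubedns.yml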

Now create the kubedns service and deployment.

[ucim@kube-master kubedns]$ kubectl create -f svc_kubedns.yml
service "kube-dns" created
[ucim@kube-master kubedns]$ kubectl create -f dep_kubedns.yml
deployment "kube-dns" created

If everything works, the three kubedns containers (kubedns, dnsmasq, sidecar) should come up as shown below. kubedns runs in the kube-system namespace; since that is not the default namespace, specify it with the -n option.

[ucim@kube-master kubedns]$ kubectl get po -n kube-system
NAME                        READY     STATUS             RESTARTS   AGE
kube-dns-3303503772-509dp   0/3       ContainerCreating  0          1m

The STATUS column shows the pod is still being created. Once creation finishes, it will be running as shown below.

[ucim@kube-master kubedns]$ kubectl get po -n kube-system
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-3303503772-509dp   3/3       Running   0          2m
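
To confirm that DNS resolution actually works inside the cluster, a throwaway busybox pod can query the kubedns service. This is a sketch that assumes the minions can pull the busybox image; the pod name dnstest is arbitrary.

[ucim@kube-master kubedns]$ kubectl run dnstest -i --rm --image=busybox --restart=Never \
> -- nslookup kubernetes.default.svc.ucim.local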

Now let's start prometheus.

[ucim@kube-master kubedns]$ cd ~/conf/prometheus
[ucim@kube-master prometheus]$ vi dep_prom.yml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  #namespace: prometheus
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:master
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "-config.file=/etc/prometheus/prometheus.yml"
        - "-storage.local.path=/prometheus"
        - "-storage.local.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      volumes:
      - name: data
        hostPath:
          path: /data/prometheus
      - name: config-volume
        configMap:
          name: prometheus
      nodeSelector:
        kubernetes.io/hostname: kube-minion1
---

Set the node to run on in the nodeSelector's kubernetes.io/hostname field, using one of the node names reported by kubectl get no.

[ucim@kube-master prometheus]$ kubectl get no
NAME                     STATUS    AGE       VERSION
kube-minion1   Ready     7h        v1.6.4
kube-minion2   Ready     7h        v1.6.4

Now register the service, configmap, and deployment files to create the pod.

[ucim@kube-master prometheus]$ kubectl create -f svc_prom.yml
service "prometheus" created
[ucim@kube-master prometheus]$ kubectl create -f cm_prom.yml
configmap "prometheus" created
[ucim@kube-master prometheus]$ kubectl create -f dep_prom.yml
deployment "prometheus" created

Finally, grafana; the procedure is the same as above. In /home/ucim/conf/grafana/dep_grafana.yml, set the minion in nodeSelector, again using a node NAME from kubectl get no (kube-minion1 here, matching the prometheus setup). In the same file, also change the part set to value: KUBIC to value: UCIM.

[ucim@kube-master prometheus]$ cd ~/conf/grafana
[ucim@kube-master grafana]$ vi dep_grafana.yml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      nodeSelector:
        kubernetes.io/hostname: kube-minion1
      volumes:
      - name: grafana-storage
        hostPath:
          path: /data/grafana
      containers:
      - name: grafana
        image: grafana/grafana
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
        env:
        - name: GF_SECURITY_ADMIN_USER
          value: admin
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: secret
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_NAME
          value: UCIM
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Viewer
        - name: GF_DASHBOARDS_JSON_ENABLED
          value: "true"
        - name: GF_DASHBOARDS_JSON_PATH
          value: /var/lib/grafana/db_grafana.json
---

In /home/ucim/conf/grafana/db_grafana.json, change every occurrence of KUBIC to UCIM.

[ucim@kube-master grafana]$ vi db_grafana.json
{
  "id": 1,
-  "title": "KUBIC Monitoring Center",
+  "title": "UCIM Monitoring Center",
-  "description": "Monitors KUBIC cluster using Prometheus. Shows overall cluster CPU / Memory / Filesystem usage as well as individual pod, containers, systemd services statistics. Uses cAdvisor metrics only.",
+  "description": "Monitors UCIM cluster using Prometheus. Shows overall cluster CPU / Memory / Filesystem usage as well as individual pod, containers, systemd services statistics. Uses cAdvisor metrics only.",
(..snip..)
}

Finally, in /home/ucim/conf/grafana/svc_grafana.yml, set the externalIPs field to the external IP of the minion (kube-minion1, 192.168.0.113).

[ucim@kube-master grafana]$ vi svc_grafana.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: grafana
  #namespace: kube-system
spec:
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    k8s-app: grafana
  externalIPs: ["192.168.0.113"]

Next, copy db_grafana.json over to minion1. Create the target directory (as root on kube-minion1) before copying.

[root@kube-minion1 ~]# mkdir /data/grafana
[ucim@kube-master grafana]$ sudo scp db_grafana.json root@kube-minion1:/data/grafana
root@kube-minion1's password:
db_grafana.json                                                            100%   56KB  56.3KB/s   00:00

Next, create the pods.

[ucim@kube-master grafana]$ kubectl create -f svc_grafana.yml
service "grafana" created
[ucim@kube-master grafana]$ kubectl create -f dep_grafana.yml
deployment "grafana" created

All three kubernetes add-ons are now running. Let's check.

[ucim@kube-master ~]$ kubectl get po --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
default       grafana-3502499752-tr2v1                1/1       Running   0          1m
default       prometheus-3803387545-66bd9             1/1       Running   0          8m
kube-system   kube-dns-911308988-7kq2p                3/3       Running   0          15m

Connect to the grafana web console to continue the configuration. Open a browser and go to port 3000 on the node where grafana is running: http://<minion_external_ip>:3000. Since it was scheduled on kube-minion1 above, that is http://192.168.0.113:3000. The default id/pw is admin/secret.
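
If the console does not load, check reachability before digging into grafana itself; the externalIPs setting above should make port 3000 answer on kube-minion1.

(kubic) ucim@ucim-manager:~$ curl -sI http://192.168.0.113:3000 | head -n 1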

Go to the Data Sources page and add a data source.

  • Name: prometheus
  • Type: prometheus

In the Http Settings, set the Url to the address of the prometheus service created above.

Click Add. Then go to Admin's Preferences and set the following:

  • Name : UCIM
  • UI Theme : Light

After making these changes, click Update.

Registering the ucim cluster

The setup is nearly complete. Install cliff, which the ucim cli is built on.

(kubic) ucim@ucim-manager:~$ pip install -U setuptools
Collecting setuptools
  Downloading setuptools-36.0.1-py2.py3-none-any.whl (476kB)
    100% |████████████████████████████████| 481kB 47kB/s
Installing collected packages: setuptools
  Found existing installation: setuptools 5.5.1
    Uninstalling setuptools-5.5.1:
      Successfully uninstalled setuptools-5.5.1
Successfully installed setuptools-36.0.1
(kubic) ucim@ucim-manager:~$ pip install cliff
Collecting cliff
  Downloading cliff-2.7.0.tar.gz (66kB)
    100% |████████████████████████████████| 71kB 200kB/s
Collecting pbr!=2.1.0,>=2.0.0 (from cliff)
  Downloading pbr-3.0.1-py2.py3-none-any.whl (99kB)
    100% |████████████████████████████████| 102kB 165kB/s
Collecting cmd2>=0.6.7 (from cliff)
  Downloading cmd2-0.7.2.tar.gz (55kB)
    100% |████████████████████████████████| 61kB 83kB/s
Collecting PrettyTable<0.8,>=0.7.1 (from cliff)
  Downloading prettytable-0.7.2.zip
Collecting pyparsing>=2.1.0 (from cliff)
  Downloading pyparsing-2.2.0-py2.py3-none-any.whl (56kB)
    100% |████████████████████████████████| 61kB 30kB/s
Requirement already satisfied: six>=1.9.0 in ./.envs/kubic/lib/python3.4/site-packages (from cliff)
Collecting stevedore>=1.20.0 (from cliff)
  Downloading stevedore-1.22.0-py2.py3-none-any.whl
Requirement already satisfied: PyYAML>=3.10.0 in ./.envs/kubic/lib/python3.4/site-packages (from cliff)
Installing collected packages: pbr, pyparsing, cmd2, PrettyTable, stevedore, cliff
  Running setup.py install for cmd2 ... done
  Running setup.py install for PrettyTable ... done
  Running setup.py install for cliff ... done
Successfully installed PrettyTable-0.7.2 cliff-2.7.0 cmd2-0.7.2 pbr-3.0.1 pyparsing-2.2.0 stevedore-1.22.0
(kubic) ucim@ucim-manager:~$ cd ~/kubic/cliff
(kubic) ucim@ucim-manager:~/kubic/cliff$ python setup.py build
running build
running build_py
creating build
creating build/lib
creating build/lib/kb
copying kb/main.py -> build/lib/kb
copying kb/__init__.py -> build/lib/kb
copying kb/config.py -> build/lib/kb
creating build/lib/kb/account
copying kb/account/__init__.py -> build/lib/kb/account
copying kb/account/client.py -> build/lib/kb/account
running egg_info
creating kubic_cli.egg-info
writing requirements to kubic_cli.egg-info/requires.txt
writing kubic_cli.egg-info/PKG-INFO
writing namespace_packages to kubic_cli.egg-info/namespace_packages.txt
writing entry points to kubic_cli.egg-info/entry_points.txt
writing top-level names to kubic_cli.egg-info/top_level.txt
writing dependency_links to kubic_cli.egg-info/dependency_links.txt
writing manifest file 'kubic_cli.egg-info/SOURCES.txt'
reading manifest file 'kubic_cli.egg-info/SOURCES.txt'
writing manifest file 'kubic_cli.egg-info/SOURCES.txt'

Set the api server endpoint; the api is now served through nginx on port 80.

(kubic) ucim@ucim-manager:~/kubic/cliff$ vi kb/config.py
- API_ENDPOINT = 'http://localhost:5000/api'
+ API_ENDPOINT = 'http://localhost:80/api'
TOKEN_FILE = '.token'
ADMIN_TOKEN_FILE = '.admin_token'

Finally, fetch the kubernetes cluster's certificate and register the cluster. Registration takes the following four arguments.

  • endpoint : https://10.0.0.112:6443 (the kube-master apiserver address and port)
  • token : the token value recorded in /etc/ssl/k8s/tokens.csv on kube-master
  • cacert : the /etc/ssl/k8s/ca.pem file from kube-master
  • desc : a description of the cluster (optional)

So check the token value and copy the ca.pem file over from kube-master.

(kubic) ucim@ucim-manager:~/kubic/cliff$ scp root@kube-master:/etc/ssl/k8s/ca.pem .
(kubic) ucim@ucim-manager:~/kubic/cliff$ ssh root@kube-master cat /etc/ssl/k8s/tokens.csv | awk -F, '{print $1}'
yUXKc068mZTh2Tcj2vB10gHJ94xPBNo5
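
For reference, tokens.csv follows the kubernetes static token file format, token,user,uid[,"group1,group2"], which is why the awk above prints only the first field. A line looks like the following (the user and uid here are hypothetical):

yUXKc068mZTh2Tcj2vB10gHJ94xPBNo5,admin,1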

Then run the following commands.

(kubic) ucim@ucim-manager:~/kubic/cliff$ ./kb.sh
(main) admin auth admin
Password:
(main) cluster create --endpoint https://10.0.0.112:6443 --token <token_string> --cacert ca.pem --desc ucim-cluster ucim-cluster

If cluster registration fails with an error like the one below, restart uwsgi, then open a web browser and connect to the ucim manager again.

+-------+---------------------------------------+
| Field | Value                                 |
+-------+---------------------------------------+
| error | cannot create a cluster kubic-cluster |
+-------+---------------------------------------+
(kubic) ucim@ucim-manager:~$ sudo systemctl restart uwsgi

If that still does not fix it, delete the cluster from etcd as shown below and retry the registration.

(kubic) ucim@ucim-manager:~$ vi e.sh
#!/bin/bash
# Wrapper that passes the etcd TLS options to etcdctl along with any arguments.
etcdctl --endpoints https://10.0.0.112:2379 \
    --cert-file /etc/ssl/etcd/client.pem \
    --key-file /etc/ssl/etcd/client-key.pem \
    --ca-file /etc/ssl/etcd/ca.pem "$@"
(kubic) ucim@ucim-manager:~$ chmod +x e.sh
(kubic) ucim@ucim-manager:~$ ./e.sh rm -r /kubic
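
To double-check which keys exist before (or after) the removal, the same wrapper can list them; ls is part of the etcdctl v2 command set already used above.

(kubic) ucim@ucim-manager:~$ ./e.sh ls /kubic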

ucim is now fully installed. Open a web browser and connect to the ucim manager.