
An introduction to kubespray


How to use kubespray – 12 Steps for Installing a Production Ready Kubernetes Cluster

Before we jump into the installation steps: if you are already familiar with configuration-management tools such as Puppet, Chef and Ansible, then kubespray (https://github.com/kubernetes-incubator/kubespray) is going to be a great choice for setting up a Kubernetes cluster, since it is built on Ansible.

In this article, we will go through 12 steps, starting from setting up the vagrant VMs through running the final ansible-playbook.

Disclaimer – If you are a beginner with kubernetes, then I would highly recommend going through the manual installation of kubernetes on Ubuntu or CentOS first; for that you can refer to –


Okay, now let's try out kubespray and kubernetes –

Note – This article, kubespray – 12 Steps for Installing a Production Ready Kubernetes Cluster, has been tested and verified on the following release versions –

  1. Kubespray – v2.16.2
  2. Ansible – v2.10.x
  3. Jinja – v2.11.2

If you want to upgrade your kubernetes cluster using Kubespray, click here – Upgrade kubernetes using kubespray

With the latest version of kubespray (v2.16.0), CentOS 8 support is considered more stable, and kubespray now also supports Exoscale, vSphere and UpCloud.

Step 1: Provision the VMs using Vagrant

First we need to provision the VMs using vagrant.

We will be setting up a total of 3 VMs (virtual machines), each with its own unique IP –

  1. Ansible Node (amaster) – 100.0.0.1 – 2 CPU – 2 GB Memory
  2. Kubernetes Master Node (kmaster) – 100.0.0.2 – 2 CPU – 2 GB Memory
  3. Kubernetes Worker Node (kworker) – 100.0.0.3 – 2 CPU – 2 GB Memory

Here is the Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.define "amaster" do |amaster|
    amaster.vm.box_download_insecure = true
    amaster.vm.box = "hashicorp/bionic64"
    amaster.vm.network "private_network", ip: "100.0.0.1"
    amaster.vm.hostname = "amaster"
    amaster.vm.provider "virtualbox" do |v|
      v.name = "amaster"
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "kmaster" do |kmaster|
    kmaster.vm.box_download_insecure = true
    kmaster.vm.box = "hashicorp/bionic64"
    kmaster.vm.network "private_network", ip: "100.0.0.2"
    kmaster.vm.hostname = "kmaster"
    kmaster.vm.provider "virtualbox" do |v|
      v.name = "kmaster"
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "kworker" do |kworker|
    kworker.vm.box_download_insecure = true
    kworker.vm.box = "hashicorp/bionic64"
    kworker.vm.network "private_network", ip: "100.0.0.3"
    kworker.vm.hostname = "kworker"
    kworker.vm.provider "virtualbox" do |v|
      v.name = "kworker"
      v.memory = 2048
      v.cpus = 2
    end
  end
end

Start the vagrant boxes:

vagrant up
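Bringing up all three boxes can take a few minutes. Once vagrant up finishes, an optional sanity check is to list the machine states (vagrant status is a standard vagrant subcommand; all three VMs should be reported as running):

vagrant status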

Step 2: Update the /etc/hosts file on each node (amaster, kmaster, kworker)

After starting the vagrant boxes, you need to update the /etc/hosts file on each node, i.e. amaster, kmaster and kworker.

Run the following command on all three nodes:

sudo vi /etc/hosts

Add the following entries to the hosts file of each node (amaster, kmaster, kworker):

100.0.0.1 amaster.jhooq.com amaster
100.0.0.2 kmaster.jhooq.com kmaster
100.0.0.3 kworker.jhooq.com kworker

Your /etc/hosts file should look like this on all three nodes, i.e. amaster, kmaster, kworker:

cat /etc/hosts

127.0.0.1 localhost
127.0.1.1 amaster amaster

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
100.0.0.1 amaster.jhooq.com amaster
100.0.0.2 kmaster.jhooq.com kmaster
100.0.0.3 kworker.jhooq.com kworker
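With the hosts entries in place, you can optionally confirm that the names resolve and that the private network is reachable, for example from amaster (a quick check, assuming the IPs shown above):

ping -c 2 kmaster.jhooq.com
ping -c 2 kworker.jhooq.com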

Step 3: Generate an SSH key for ansible (only needs to be run on the ansible node, i.e. amaster)

To set up kubespray smoothly, we need to generate an SSH key on the ansible master node (amaster) and copy it to the other nodes, so that you do not have to provide a username and password every time you log in/SSH into the other nodes, i.e. kmaster and kworker.

Generate the SSH key (during key generation it will ask for a passphrase; either set a new passphrase or leave it empty):

ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/home/vagrant/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/vagrant/.ssh/id_rsa.
Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LWGasiSDAqf8eY3pz5swa/nUl2rWc1IFgiPuqFTYsKs vagrant@amaster
The key's randomart image is:
+---[RSA 2048]----+
|        .        |
|   . . o . .     |
|  . . = . + . . .|
|   o+ o o = o .  |
|  +.o = = S . .  |
|  . .*.++... ..  |
|   ooo*.o ..o.   |
|  E .oo* .oo+ .  |
|     .oo*+. +    |
+----[SHA256]-----+
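If you would rather avoid the interactive prompts, ssh-keygen can also be run non-interactively. Here is a minimal sketch using standard OpenSSH flags (-f sets the key file, -N "" sets an empty passphrase); it is not part of the original walkthrough:

# non-interactive variant: same key pair, no prompts
ssh-keygen -t rsa -f /home/vagrant/.ssh/id_rsa -N ""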


Step 4: Copy the SSH key to the other nodes, i.e. kmaster and kworker

In Step 3 we generated the SSH key; now we need to copy it to the other nodes, i.e. kmaster and kworker.

Copy it to the kmaster node (ssh-copy-id will ask for the other node's password; if you have not set one, you can supply the default password, i.e. vagrant):

ssh-copy-id 100.0.0.2

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/vagrant/.ssh/id_rsa.pub"
The authenticity of host '100.0.0.2 (100.0.0.2)' can't be established.
ECDSA key fingerprint is SHA256:uY6GIjFdI9qTC4QYb980QRk+WblJF9cd5glr3SmmL+w.

Type "yes" when it asks whether you want to continue connecting:

Are you sure you want to continue connecting (yes/no)? yes

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
vagrant@100.0.0.2's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '100.0.0.2'"
and check to make sure that only the key(s) you wanted were added.

Copy it to the kworker node (again, ssh-copy-id will ask for the node's password; if you have not set one, you can supply the default password, i.e. vagrant):

ssh-copy-id 100.0.0.3

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/vagrant/.ssh/id_rsa.pub"
The authenticity of host '100.0.0.3 (100.0.0.3)' can't be established.
ECDSA key fingerprint is SHA256:uY6GIjFdI9qTC4QYb980QRk+WblJF9cd5glr3SmmL+w.

Type "yes" when it asks whether you want to continue connecting:

Are you sure you want to continue connecting (yes/no)? yes

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
vagrant@100.0.0.3's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '100.0.0.3'"
and check to make sure that only the key(s) you wanted were added.
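Before moving on, it is worth confirming that passwordless SSH really works from amaster. Each command below should print the remote hostname without asking for a password (an optional check, assuming the keys were copied as above):

ssh 100.0.0.2 hostname    # should print: kmaster
ssh 100.0.0.3 hostname    # should print: kworker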

Step 5: Install python3-pip (only needs to be run on the ansible node, i.e. amaster)

Before installing python3-pip, you need to download and update the package list from the repository.

Run the following command (on all the nodes):

sudo apt-get update

Now install python3-pip using the following command (only needs to be run on the ansible node, i.e. amaster):

sudo apt install python3-pip

To proceed with the installation, press "y":

Do you want to continue? [Y/n] y

After the installation, verify the python and pip versions:

python -V
Python 2.7.15+

pip3 -V
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
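Note that python -V above reports the system Python 2 interpreter; kubespray's tooling (pip3 and the inventory builder used in Step 9) runs on Python 3, so you may also want to confirm the Python 3 version:

python3 -V    # e.g. Python 3.6.x on the hashicorp/bionic64 box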

Step 6: Clone the kubespray git repo (only needs to be run on the ansible node, i.e. amaster)

Next we are going to clone kubespray. Use the following git command:

git clone https://github.com/kubernetes-sigs/kubespray.git

Cloning into 'kubespray'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 43626 (delta 0), reused 1 (delta 0), pack-reused 43623
Receiving objects: 100% (43626/43626), 12.72 MiB | 5.18 MiB/s, done.
Resolving deltas: 100% (24242/24242), done.
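The clone above tracks kubespray's default branch. Since this article was verified against Kubespray v2.16.2, you may prefer to pin the checkout to that release rather than building from the tip of the repo (the tag name below is assumed from the version list at the top of the article; run git tag inside the repo to list the tags actually available):

# pin to the release this article was tested with (assumed tag name)
git -C kubespray checkout v2.16.2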


Step 7: Install the kubespray packages from "requirements.txt" (only needs to be run on the ansible node, i.e. amaster)

Go to the "kubespray" directory:

cd kubespray

Install the kubespray packages:

sudo pip3 install -r requirements.txt

The directory '/home/vagrant/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/vagrant/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting ansible==2.9.6 (from -r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/ae/b7/c717363f767f7af33d90af9458d5f1e0960db9c2393a6c221c2ce97ad1aa/ansible-2.9.6.tar.gz (14.2MB)
    100% |████████████████████████████████| 14.2MB 123kB/s
Collecting jinja2==2.11.1 (from -r requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/27/24/4f35961e5c669e96f6559760042a55b9bcfcdb82b9bdb3c8753dbe042e35/Jinja2-2.11.1-py2.py3-none-any.whl (126kB)
    100% |████████████████████████████████| 133kB 4.1MB/s
Collecting netaddr==0.7.19 (from -r requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/ba/97/ce14451a9fd7bdb5a397abf99b24a1a6bb7a1a440b019bebd2e9a0dbec74/netaddr-0.7.19-py2.py3-none-any.whl (1.6MB)
    100% |████████████████████████████████| 1.6MB 954kB/s
Collecting pbr==5.4.4 (from -r requirements.txt (line 4))
  Downloading https://files.pythonhosted.org/packages/7a/db/a968fd7beb9fe06901c1841cb25c9ccb666ca1b9a19b114d1bbedf1126fc/pbr-5.4.4-py2.py3-none-any.whl (110kB)
    100% |████████████████████████████████| 112kB 7.0MB/s
Collecting hvac==0.10.0 (from -r requirements.txt (line 5))
  Downloading https://files.pythonhosted.org/packages/8d/d7/63e63936792a4c85bea3884003b6d502a040242da2d72db01b0ada4bdb28/hvac-0.10.0-py2.py3-none-any.whl (116kB)
    100% |████████████████████████████████| 122kB 6.0MB/s
Collecting jmespath==0.9.5 (from -r requirements.txt (line 6))
  Downloading https://files.pythonhosted.org/packages/a3/43/1e939e1fcd87b827fe192d0c9fc25b48c5b3368902bfb913de7754b0dc03/jmespath-0.9.5-py2.py3-none-any.whl
Collecting ruamel.yaml==0.16.10 (from -r requirements.txt (line 7))
  Downloading https://files.pythonhosted.org/packages/a6/92/59af3e38227b9cc14520bf1e59516d99ceca53e3b8448094248171e9432b/ruamel.yaml-0.16.10-py2.py3-none-any.whl (111kB)
    100% |████████████████████████████████| 112kB 5.6MB/s
Requirement already satisfied: PyYAML in /usr/lib/python3/dist-packages (from ansible==2.9.6->-r requirements.txt (line 1))
Requirement already satisfied: cryptography in /usr/lib/python3/dist-packages (from ansible==2.9.6->-r requirements.txt (line 1))
Collecting MarkupSafe>=0.23 (from jinja2==2.11.1->-r requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/b2/5f/23e0023be6bb885d00ffbefad2942bc51a620328ee910f64abe5a8d18dd1/MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl
Requirement already satisfied: six>=1.5.0 in /usr/lib/python3/dist-packages (from hvac==0.10.0->-r requirements.txt (line 5))
Collecting requests>=2.21.0 (from hvac==0.10.0->-r requirements.txt (line 5))
  Downloading https://files.pythonhosted.org/packages/1a/70/1935c770cb3be6e3a8b78ced23d7e0f3b187f5cbfab4749523ed65d7c9b1/requests-2.23.0-py2.py3-none-any.whl (58kB)
    100% |████████████████████████████████| 61kB 6.9MB/s
Collecting ruamel.yaml.clib>=0.1.2; platform_python_implementation == "CPython" and python_version < "3.9" (from ruamel.yaml==0.16.10->-r requirements.txt (line 7))
  Downloading https://files.pythonhosted.org/packages/53/77/4bcd63f362bcb6c8f4f06253c11f9772f64189bf08cf3f40c5ccbda9e561/ruamel.yaml.clib-0.2.0-cp36-cp36m-manylinux1_x86_64.whl (548kB)
    100% |████████████████████████████████| 552kB 2.5MB/s
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests>=2.21.0->hvac==0.10.0->-r requirements.txt (line 5))
Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests>=2.21.0->hvac==0.10.0->-r requirements.txt (line 5))
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3/dist-packages (from requests>=2.21.0->hvac==0.10.0->-r requirements.txt (line 5))
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/lib/python3/dist-packages (from requests>=2.21.0->hvac==0.10.0->-r requirements.txt (line 5))
Installing collected packages: MarkupSafe, jinja2, ansible, netaddr, pbr, requests, hvac, jmespath, ruamel.yaml.clib, ruamel.yaml
  Running setup.py install for ansible ... done
Found existing installation: requests 2.18.4
  Not uninstalling requests at /usr/lib/python3/dist-packages, outside environment /usr
Successfully installed MarkupSafe-1.1.1 ansible-2.9.6 hvac-0.10.0 jinja2-2.11.1 jmespath-0.9.5 netaddr-0.7.19 pbr-5.4.4 requests-2.23.0 ruamel.yaml-0.16.10 ruamel.yaml.clib-0.2.0

Step 8: Copy the sample inventory (only needs to be run on the ansible node, i.e. amaster)

Now we need to copy the sample inventory to our own cluster inventory using the following command:

cp -rfp inventory/sample inventory/mycluster
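As an optional check, the copied directory should mirror the sample inventory layout (typically a group_vars directory plus a sample inventory file):

ls inventory/mycluster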

Step 9: Prepare hosts.yml for kubespray (only needs to be run on the ansible node, i.e. amaster)

This step needs a little care, because we need to generate hosts.yml with the node IPs.

First declare a variable "IPS" holding the IP addresses of the other nodes, i.e. kmaster (100.0.0.2) and kworker (100.0.0.3), then run the inventory builder:

declare -a IPS=(100.0.0.2 100.0.0.3)

CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

DEBUG: Adding group all
DEBUG: Adding group kube-master
DEBUG: Adding group kube-node
DEBUG: Adding group etcd
DEBUG: Adding group k8s-cluster
DEBUG: Adding group calico-rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node1 to group kube-master
DEBUG: adding host node2 to group kube-master
DEBUG: adding host node1 to group kube-node
DEBUG: adding host node2 to group kube-node

After running the above commands, verify hosts.yml; it should look like this:

vi inventory/mycluster/hosts.yml

all:
  hosts:
    node1:
      ansible_host: 100.0.0.2
      ip: 100.0.0.2
      access_ip: 100.0.0.2
    node2:
      ansible_host: 100.0.0.3
      ip: 100.0.0.3
      access_ip: 100.0.0.3
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
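Before launching the full playbook, you can optionally verify that ansible can reach every host in the generated inventory. This sketch uses Ansible's built-in ping module, run from the kubespray directory (the vagrant remote user is an assumption based on the boxes used in this article):

ansible -i inventory/mycluster/hosts.yml all -m ping -u vagrant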

Step 10: Run the ansible-playbook (only needs to be run on the ansible node, i.e. amaster)

We have now completed all the prerequisites for running the ansible-playbook.

Use the following ansible-playbook command to begin the installation:

ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

Running the ansible playbook takes some time, depending in part on your network bandwidth.

During the playbook run, if you face the error "ansible_memtotal_mb >= minimal_master_memory_mb", please refer to – How to fix – ansible_memtotal_mb >= minimal_master_memory_mb

Step 11: Install kubectl on the kubernetes master, i.e. kmaster (only needs to be run on the kubernetes master node, i.e. kmaster)

Now log into the kubernetes master, i.e. kmaster, and download kubectl onto it:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.9M  100 41.9M    0     0  5893k      0  0:00:07  0:00:07 --:--:-- 5962k
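The curl command above only downloads the kubectl binary; it is not yet executable or on the PATH. A conventional follow-up (this step is not shown in the original capture, so treat it as an assumed but standard install step) is:

chmod +x ./kubectl                         # make the downloaded binary executable
sudo mv ./kubectl /usr/local/bin/kubectl   # put it on the PATH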

Now we need to copy the admin.conf file to .kube:

sudo cp /etc/kubernetes/admin.conf /home/vagrant/config

mkdir .kube

mv config .kube/

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the kubectl version after installation:

kubectl version

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Step 12: Verify the kubernetes nodes

We have now completed all the steps required to install kubernetes using kubespray.

Let's check the node status in our final step:

kubectl get nodes

NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   13m   v1.18.2
node2   Ready    master   13m   v1.18.2
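As a final optional smoke test (not part of the original 12 steps), you can schedule a throwaway pod and confirm that the cluster can actually run workloads:

kubectl run nginx --image=nginx --restart=Never    # create a single nginx pod
kubectl get pod nginx                              # STATUS should reach Running
kubectl delete pod nginx                           # clean up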


