====== 18 Kubernetes GlusterFS ======
^ hostname ^ IP ^
|g-master|172.16.0.103|
|g-work01|172.16.0.93|
|g-work02|172.16.0.153|
|g-work03|172.16.0.166|
===== 1. GlusterFS Install =====
GlusterFS itself is set up as described in the page below:
[[01_linux:13_storage:03_glusterfs]]
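For reference, a minimal sketch of how the volume used in this page could be created, assuming a replica 3 volume across the three workers, glusterd already running on each of them, and a brick directory at /gluster/volume (the path that appears in section 4). Add force to the create command if the brick directories sit on the root filesystem.
# gluster peer probe g-work02
# gluster peer probe g-work03
# gluster volume create k8s-volume replica 3 g-work01:/gluster/volume g-work02:/gluster/volume g-work03:/gluster/volume
# gluster volume start k8s-volume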
===== 2. Endpoints configuration =====
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  labels:
    storage.k8s.io/name: glusterfs
    storage.k8s.io/part-of: kubernetes-complete-reference
    storage.k8s.io/created-by: ssbostan
subsets:
  - addresses:
      - ip: 172.16.0.93
        hostname: g-work01
      - ip: 172.16.0.153
        hostname: g-work02
      - ip: 172.16.0.166
        hostname: g-work03
    ports:
      - port: 1
==== Create ====
kubectl create -f glusterfs-endpoint.yaml
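Confirm that the three brick addresses were registered:
kubectl get endpoints glusterfs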
===== 3. Service configuration =====
kind: Service
apiVersion: v1
metadata:
  name: glusterfs
spec:
  ports:
    - port: 1
==== Create ====
kubectl create -f glusterfs-service.yaml
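The Service has no selector; it only has to share the Endpoints object's name so that the manually created endpoints persist and can be referenced from the PV. Check both:
kubectl get service glusterfs
kubectl get endpoints glusterfs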
===== 4. Prepare a volume on GlusterFS =====
The following assumes that a GlusterFS volume named k8s-volume has already been created.
# gluster volume status
Status of volume: k8s-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick g-work01:/gluster/volume 49152 0 Y 33456
Brick g-work02:/gluster/volume 49152 0 Y 37407
Brick g-work03:/gluster/volume 49152 0 Y 31930
Self-heal Daemon on localhost N/A N/A Y 33477
Self-heal Daemon on g-work03 N/A N/A Y 31951
Self-heal Daemon on g-work02 N/A N/A Y 37428
Task Status of Volume k8s-volume
------------------------------------------------------------------------------
There are no active volume tasks
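The replica layout (presumably a 1 x 3 replica set, given the three bricks and the running self-heal daemons) can be confirmed with:
# gluster volume info k8s-volume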
==== Prepare a directory for the PV ====
# mount.glusterfs localhost:k8s-volume /mnt/
# df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
localhost:k8s-volume 20961280 1968712 18992568 10% /mnt
# mkdir /mnt/pv01
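The temporary mount was only needed to create the per-PV subdirectory, so it can be unmounted again:
# umount /mnt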
===== 5. PV =====
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-volume-pv01
  labels:
    name: k8s-volume-pv01
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  glusterfs:
    endpoints: glusterfs
    path: k8s-volume/pv01
    readOnly: false
==== Create ====
kubectl create -f pv.yaml
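Until a claim binds it, the new PV is listed with STATUS Available:
kubectl get pv k8s-volume-pv01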
===== 6. PVC =====
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: k8s-volume-pvc01
  name: k8s-volume-pvc01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      name: k8s-volume-pv01
status: {}
==== Create ====
kubectl create -f pvc.yaml
===== 7. Verification =====
==== Confirm that the PV and PVC have been created ====
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
k8s-volume-pv01 1Gi RWX Retain Bound default/k8s-volume-pvc01 83s
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
k8s-volume-pvc01 Bound k8s-volume-pv01 1Gi RWX 61s
==== Mount it in a Pod ====
apiVersion: v1
kind: Pod
metadata:
  name: alpine-test
spec:
  containers:
    - image: alpine
      name: alpine
      command: ["tail", "-f", "/dev/null"]
      volumeMounts:
        - name: data-disk
          mountPath: /data
  volumes:
    - name: data-disk
      persistentVolumeClaim:
        claimName: k8s-volume-pvc01
  terminationGracePeriodSeconds: 0
=== Create the Pod ===
kubectl create -f alpine-test.yaml
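Optionally wait for the Pod to become Ready before exec-ing into it:
kubectl wait --for=condition=Ready pod/alpine-test --timeout=120s
kubectl get pod alpine-test -o wide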
=== Verify inside the Pod ===
# kubectl exec -it alpine-test -- sh
/ # df /data/
Filesystem 1K-blocks Used Available Use% Mounted on
172.16.0.93:k8s-volume/pv01
20961280 1968712 18992568 9% /data
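A quick write test from inside the Pod confirms the mount is writable (write-test.txt is just an example name); the file should then appear under /gluster/volume/pv01/ on each brick node:
/ # echo hello > /data/write-test.txt
Then on any of the GlusterFS nodes:
# ls /gluster/volume/pv01/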
===== 8. Failure test =====
==== Take one node down ====
Shut down g-work01 (172.16.0.93).
# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP
g-master Ready control-plane,master 5h41m v1.23.5 172.16.0.103
g-work01 NotReady <none> 5h39m v1.23.5 172.16.0.93
g-work02 Ready <none> 5h39m v1.23.5 172.16.0.153
g-work03 Ready <none> 4h5m v1.23.5 172.16.0.166
==== The node can no longer be pinged ====
Since the node is shut down, ping naturally gets no reply.
# ping 172.16.0.93 -c 3
PING 172.16.0.93 (172.16.0.93) 56(84) bytes of data.
From 172.16.0.103 icmp_seq=1 Destination Host Unreachable
From 172.16.0.103 icmp_seq=2 Destination Host Unreachable
From 172.16.0.103 icmp_seq=3 Destination Host Unreachable
--- 172.16.0.93 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2025ms
It has also dropped out of the GlusterFS cluster.
# gluster vol status
Status of volume: k8s-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick g-work02:/gluster/volume 49152 0 Y 37407
Brick g-work03:/gluster/volume 49152 0 Y 31930
Self-heal Daemon on localhost N/A N/A Y 37428
Self-heal Daemon on g-work03 N/A N/A Y 31951
Task Status of Volume k8s-volume
------------------------------------------------------------------------------
There are no active volume tasks
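The surviving nodes also see the peer as down; gluster peer status (output omitted here) should list g-work01 as Disconnected:
# gluster peer status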
==== The volume still works ====
Even though the node's IP cannot be pinged, the disk remains fully usable.
/ # df /data
Filesystem 1K-blocks Used Available Use% Mounted on
172.16.0.93:k8s-volume/pv01
20961280 3537200 17424080 17% /data
/data # dd if=/dev/zero of=TEST bs=1M count=1024
1024+0 records in
1024+0 records out
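Writes made while a brick is offline are tracked for self-heal. The pending entries can be listed from a surviving node (output omitted here):
# gluster volume heal k8s-volume info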
==== After the downed node is brought back up ====
Files that were updated while the node was down are properly synchronized back.
# ll /gluster/volume/pv01/
total 3847048
drwxr-xr-x 2 root root 88 Apr 19 08:05 ./
drwxr-xr-x 8 root root 90 Apr 19 07:31 ../
-rw-r--r-- 2 root root 1073741824 Apr 19 08:12 TEST
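If synchronization does not start on its own, a heal can be triggered and checked manually; once it finishes, the info output should report zero entries per brick:
# gluster volume heal k8s-volume
# gluster volume heal k8s-volume info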
{{tag>KuberNetes GlusterFS}}