| hostname | IP |
|---|---|
| g-master | 172.16.0.103 |
| g-work01 | 172.16.0.93 |
| g-work02 | 172.16.0.153 |
| g-work03 | 172.16.0.166 |
The GlusterFS cluster itself was created with the simple steps in the page below; a minimal command sketch follows the link.
03 Ubuntu GlusterFS
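As a rough outline, the volume used here could have been created along these lines. This is a minimal sketch inferred from the brick path (`/gluster/volume`) and the three-way replication visible in the status output further down; the linked page has the actual procedure.

```bash
# On g-work01: form the trusted storage pool from the three workers.
gluster peer probe g-work02
gluster peer probe g-work03

# Create a 3-way replicated volume, one brick per worker
# (brick path taken from the "gluster volume status" output below).
gluster volume create k8s-volume replica 3 \
  g-work01:/gluster/volume \
  g-work02:/gluster/volume \
  g-work03:/gluster/volume

gluster volume start k8s-volume
```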
glusterfs-endpoint.yaml
```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  labels:
    storage.k8s.io/name: glusterfs
    storage.k8s.io/part-of: kubernetes-complete-reference
    storage.k8s.io/created-by: ssbostan
subsets:
  - addresses:
      - ip: 172.16.0.93
        hostname: g-work01
      - ip: 172.16.0.153
        hostname: g-work02
      - ip: 172.16.0.166
        hostname: g-work03
    ports:
      # The Endpoints API requires a port, but the GlusterFS volume
      # plugin does not use the value, so a dummy port is fine.
      - port: 1
```
```bash
kubectl create -f glusterfs-endpoint.yaml
```
glusterfs-service.yaml
```yaml
kind: Service
apiVersion: v1
metadata:
  name: glusterfs
spec:
  ports:
    # Dummy port matching the Endpoints definition above;
    # a Service must declare at least one port.
    - port: 1
```
```bash
kubectl create -f glusterfs-service.yaml
```
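The Service exists mainly so the hand-made Endpoints object persists alongside it. Since both share the name glusterfs, one query is enough to confirm the pairing (a quick check, not part of the original log):

```bash
# Service and Endpoints are matched by name.
kubectl get svc,ep glusterfs
```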
The rest of this walkthrough assumes a GlusterFS volume named k8s-volume has already been created.
```
# gluster volume status
Status of volume: k8s-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick g-work01:/gluster/volume              49152     0          Y       33456
Brick g-work02:/gluster/volume              49152     0          Y       37407
Brick g-work03:/gluster/volume              49152     0          Y       31930
Self-heal Daemon on localhost               N/A       N/A        Y       33477
Self-heal Daemon on g-work03                N/A       N/A        Y       31951
Self-heal Daemon on g-work02                N/A       N/A        Y       37428

Task Status of Volume k8s-volume
```
Temporarily mount the volume and create a subdirectory to back the PersistentVolume:

```
# mount.glusterfs localhost:k8s-volume /mnt/
# df /mnt
Filesystem           1K-blocks    Used Available Use% Mounted on
localhost:k8s-volume  20961280 1968712  18992568  10% /mnt
# mkdir /mnt/pv01
```
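The temporary mount is only needed to create the directory, so presumably it can be released again afterwards (this step is not shown in the original):

```bash
umount /mnt
```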
pv.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-volume-pv01
  labels:
    name: k8s-volume-pv01
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  glusterfs:
    # Name of the Endpoints object created above.
    endpoints: glusterfs
    # GlusterFS volume name plus the subdirectory created above.
    path: k8s-volume/pv01
    readOnly: false
```
```bash
kubectl create -f pv.yaml
```
pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: k8s-volume-pvc01
  name: k8s-volume-pvc01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    # Bind this claim to the PV labeled above.
    matchLabels:
      name: k8s-volume-pv01
status: {}
```
```bash
kubectl create -f pvc.yaml
```
```
# kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
k8s-volume-pv01   1Gi        RWX            Retain           Bound    default/k8s-volume-pvc01                           83s
# kubectl get pvc
NAME               STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
k8s-volume-pvc01   Bound    k8s-volume-pv01   1Gi        RWX                           61s
```
alpine-test.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine-test
spec:
  containers:
    - image: alpine
      name: alpine
      # Keep the container running so we can exec into it.
      command: ["tail", "-f", "/dev/null"]
      volumeMounts:
        - name: data-disk
          mountPath: /data
  volumes:
    - name: data-disk
      persistentVolumeClaim:
        claimName: k8s-volume-pvc01
  terminationGracePeriodSeconds: 0
```
```bash
kubectl create -f alpine-test.yaml
```
```
# kubectl exec -it alpine-test -- sh
/ # df /data/
Filesystem                   1K-blocks    Used Available Use% Mounted on
172.16.0.93:k8s-volume/pv01   20961280 1968712  18992568   9% /data
```
Now shut down g-work01 (172.16.0.93) to see how the system behaves.
```
# kubectl get node -o wide
NAME       STATUS     ROLES                  AGE     VERSION   INTERNAL-IP
g-master   Ready      control-plane,master   5h41m   v1.23.5   172.16.0.103
g-work01   NotReady   <none>                 5h39m   v1.23.5   172.16.0.93
g-work02   Ready      <none>                 5h39m   v1.23.5   172.16.0.153
g-work03   Ready      <none>                 4h5m    v1.23.5   172.16.0.166
```
Since the node is down, pings naturally fail:
```
# ping 172.16.0.93 -c 3
PING 172.16.0.93 (172.16.0.93) 56(84) bytes of data.
From 172.16.0.103 icmp_seq=1 Destination Host Unreachable
From 172.16.0.103 icmp_seq=2 Destination Host Unreachable
From 172.16.0.103 icmp_seq=3 Destination Host Unreachable

--- 172.16.0.93 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2025ms
```
The node has also dropped out of the GlusterFS cluster:
```
# gluster vol status
Status of volume: k8s-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick g-work02:/gluster/volume              49152     0          Y       37407
Brick g-work03:/gluster/volume              49152     0          Y       31930
Self-heal Daemon on localhost               N/A       N/A        Y       37428
Self-heal Daemon on g-work03                N/A       N/A        Y       31951

Task Status of Volume k8s-volume
------------------------------------------------------------------------------
There are no active volume tasks
```
Even though the IP shown as the mount source no longer answers ping, the disk remains fully usable: the GlusterFS client learns all bricks from the volume definition and keeps working against the surviving replicas.
```
/ # df /data
Filesystem                   1K-blocks    Used Available Use% Mounted on
172.16.0.93:k8s-volume/pv01   20961280 3537200  17424080  17% /data
/data # dd if=/dev/zero of=TEST bs=1M count=1024
1024+0 records in
1024+0 records out
```
After g-work01 comes back up, the file written while it was down has been properly synchronized to its brick:
```
# ll /gluster/volume/pv01/
total 3847048
drwxr-xr-x 2 root root         88 Apr 19 08:05 ./
drwxr-xr-x 8 root root         90 Apr 19 07:31 ../
-rw-r--r-- 2 root root 1073741824 Apr 19 08:12 TEST
```
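To double-check that self-heal has finished, the entries still pending heal can be listed; an empty list on every brick means the replicas are consistent again (this check is not part of the original log):

```bash
gluster volume heal k8s-volume info
```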