| hostname | IP |
|----------|----|
| g-master | 172.16.0.103 |
| g-work01 | 172.16.0.93 |
| g-work02 | 172.16.0.153 |
| g-work03 | 172.16.0.166 |
The GlusterFS cluster itself was set up with the simple steps described in:
03 Ubuntu GlusterFS
glusterfs-endpoint.yaml
```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  labels:
    storage.k8s.io/name: glusterfs
    storage.k8s.io/part-of: kubernetes-complete-reference
    storage.k8s.io/created-by: ssbostan
subsets:
  - addresses:
      - ip: 172.16.0.93
        hostname: g-work01
      - ip: 172.16.0.153
        hostname: g-work02
      - ip: 172.16.0.166
        hostname: g-work03
    ports:
      - port: 1
```
kubectl create -f glusterfs-endpoint.yaml
glusterfs-service.yaml
```yaml
kind: Service
apiVersion: v1
metadata:
  name: glusterfs
spec:
  ports:
    - port: 1
```
kubectl create -f glusterfs-service.yaml
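The Service has no selector, so Kubernetes leaves the hand-written Endpoints object alone; it exists only to keep those endpoints persistent, and its name must match the Endpoints name (`glusterfs`). The port value `1` is a dummy, present only because the schema requires a legal port. As a sketch, the two manifests above can equivalently be combined into one file:

```yaml
# glusterfs.yaml - combined form of the two manifests above (sketch).
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs        # must match the Service name
subsets:
  - addresses:
      - ip: 172.16.0.93
      - ip: 172.16.0.153
      - ip: 172.16.0.166
    ports:
      - port: 1          # dummy port; any legal value works
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs        # selector-less Service pinning the Endpoints
spec:
  ports:
    - port: 1
```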
The following assumes a GlusterFS volume named k8s-volume has already been created.
```
# gluster volume status
Status of volume: k8s-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick g-work01:/gluster/volume              49152     0          Y       33456
Brick g-work02:/gluster/volume              49152     0          Y       37407
Brick g-work03:/gluster/volume              49152     0          Y       31930
Self-heal Daemon on localhost               N/A       N/A        Y       33477
Self-heal Daemon on g-work03                N/A       N/A        Y       31951
Self-heal Daemon on g-work02                N/A       N/A        Y       37428

Task Status of Volume k8s-volume
```
Mount the volume temporarily and create the subdirectory that the PersistentVolume will point at:

```
# mount.glusterfs localhost:k8s-volume /mnt/
# df /mnt
Filesystem           1K-blocks    Used Available Use% Mounted on
localhost:k8s-volume  20961280 1968712  18992568  10% /mnt
# mkdir /mnt/pv01
```
pv.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-volume-pv01
  labels:
    name: k8s-volume-pv01
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  glusterfs:
    endpoints: glusterfs
    path: k8s-volume/pv01
    readOnly: false
```
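Here `endpoints` names the Endpoints object created earlier, and `path` is `<gluster volume name>/<subdirectory>` — the pv01 subdirectory made via the temporary mount. Manually created PVs default to a `Retain` reclaim policy (the later `kubectl get pv` output shows this); as a sketch, it can also be spelled out explicitly:

```yaml
# Optional addition to pv.yaml (not in the original manifest):
# Retain is already the default for statically created PVs, so this
# only makes the behavior explicit.
spec:
  persistentVolumeReclaimPolicy: Retain
```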
kubectl create -f pv.yaml
pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: k8s-volume-pvc01
  name: k8s-volume-pvc01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      name: k8s-volume-pv01
status: {}
```
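The claim is steered to the intended PV by the `selector`: its `matchLabels` entry must match the labels set in pv.yaml's `metadata`. On a cluster that has a default StorageClass (an assumption — not the case in the original setup), it can additionally help to pin `storageClassName: ""` so the claim is never handed to a dynamic provisioner:

```yaml
# Optional hardening of pvc.yaml (assumed, not in the original):
# an empty storageClassName disables dynamic provisioning, so the
# claim can only bind to a pre-created PV such as k8s-volume-pv01.
spec:
  storageClassName: ""
  selector:
    matchLabels:
      name: k8s-volume-pv01   # must match the PV's metadata.labels
```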
kubectl create -f pvc.yaml
```
# kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
k8s-volume-pv01   1Gi        RWX            Retain           Bound    default/k8s-volume-pvc01                           83s
# kubectl get pvc
NAME               STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
k8s-volume-pvc01   Bound    k8s-volume-pv01   1Gi        RWX                           61s
```
alpine-test.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine-test
spec:
  containers:
    - image: alpine
      name: alpine
      command: ["tail", "-f", "/dev/null"]
      volumeMounts:
        - name: data-disk
          mountPath: /data
  volumes:
    - name: data-disk
      persistentVolumeClaim:
        claimName: k8s-volume-pvc01
  terminationGracePeriodSeconds: 0
```
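Because the claim is `ReadWriteMany`, more than one Pod can mount it read-write at the same time. A sketch of a second Pod (hypothetical name `alpine-test2`, not part of the original walkthrough) sharing the same claim:

```yaml
# alpine-test2.yaml - hypothetical second Pod illustrating RWX sharing.
apiVersion: v1
kind: Pod
metadata:
  name: alpine-test2
spec:
  containers:
    - image: alpine
      name: alpine
      command: ["tail", "-f", "/dev/null"]
      volumeMounts:
        - name: data-disk
          mountPath: /data
  volumes:
    - name: data-disk
      persistentVolumeClaim:
        claimName: k8s-volume-pvc01   # same RWX claim as alpine-test
  terminationGracePeriodSeconds: 0
```

A file written under /data in one Pod should then be visible under /data in the other.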
kubectl create -f alpine-test.yaml
```
# kubectl exec -it alpine-test -- sh
/ # df /data/
Filesystem                   1K-blocks    Used Available Use% Mounted on
172.16.0.93:k8s-volume/pv01   20961280 1968712  18992568   9% /data
```