
Deploying the rook-ceph storage system on kubernetes


Contents

1. A few words on why rook
2. Deploying rook-ceph
  2.1 Environment
  2.2 Deploying the Rook Operator
  2.3 Creating the Ceph cluster
    2.3.1 Labeling the osd nodes
    2.3.2 Creating the Ceph cluster from yaml
  2.4 Verifying ceph with the Rook toolbox
  2.5 Exposing Ceph
    2.5.1 Exposing the ceph dashboard
    2.5.2 Exposing the ceph monitors
3. Configuring rook-ceph
4. Verifying ceph with kubernetes dynamic volumes
5. Solving the problem of rook-ceph's csi-cephfs not mounting on flex-based Alibaba Cloud kubernetes
  5.1 Creating the cephfs-provisioner
  5.2 Verifying cephfs
6. Summary

1. A few words on why rook

I won't introduce rook in detail here; see the official site for specifics.

Let me explain why I deploy a ceph cluster on kubernetes with rook.

As is well known, kubernetes is currently the leading cloud-native container platform, but when a pod is released on a kubernetes node its container data is cleared with it; in other words, there is no built-in persistent storage. Ceph, one of the best open-source storage systems, is also one of the storage systems that integrates best with kubernetes. Kubernetes' scheduling and rook's self-scaling and self-healing capabilities complement each other closely.

2. Deploying rook-ceph

2.1 Environment

Note:

Use at least 3 OSD nodes, and use raw disks directly rather than partitions or filesystems; that gives the best performance.
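For example, a raw data disk can be checked with something like the following (a sketch; /dev/vdb matches the device name used in the cluster yaml later in this article):

# the disk should show no partitions and no filesystem signature
lsblk -f /dev/vdb
# if the disk was used before, wipe it first -- this is destructive
# sgdisk --zap-all /dev/vdb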

2.2 Deploying the Rook Operator

Here we use helm; its advantages need no elaboration.

Reference documentation:

https://rook.io/docs/rook/v1.1/helm-operator.html

helm repo add rook-release https://charts.rook.io/release
helm fetch --untar rook-release/rook-ceph
cd rook-ceph
vim values.yaml   # the default image is blocked by the firewall; repository: ygqygq2/hyperkube is recommended
helm install --name rook-ceph --namespace rook-ceph ./

Note:

Depending on what your kubernetes version supports, you can set enableFlexDriver: true in values.yaml.
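If you need it, the relevant line in values.yaml looks roughly like this (an excerpt only; verify the key against the values file of your chart version):

# values.yaml (excerpt)
enableFlexDriver: true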

Deployment result:

[root@linuxba-node1 rook-ceph]# kubectl get pod -n rook-ceph
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-5bd7d67784-k9bq9   1/1     Running   0          2d15h
rook-discover-2f84s                   1/1     Running   0          2d14h
rook-discover-j9xjk                   1/1     Running   0          2d14h
rook-discover-nvnwn                   1/1     Running   0          2d14h
rook-discover-nx4qf                   1/1     Running   0          2d14h
rook-discover-wm6wp                   1/1     Running   0          2d14h

2.3 Creating the Ceph cluster

2.3.1 Labeling the osd nodes

To better manage and control the osds, label the designated nodes so that the osd pods are scheduled only on those nodes.

kubectl label node node1 ceph-role=osd
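To confirm the label landed on every intended osd node, a quick check such as the following can be used:

# list only the nodes carrying the osd label
kubectl get nodes -l ceph-role=osd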

2.3.2 Creating the Ceph cluster from yaml

vim rook-ceph-cluster.yaml

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4-0917
  # ceph directory on each node, containing config and logs
  dataDirHostPath: /var/lib/rook
  mon:
    # Set the number of mons to be started. The number should be odd and between 1 and 9.
    # If not specified the default is set to 3 and allowMultiplePerNode is also set to true.
    count: 3
    # Enable (true) or disable (false) the placement of multiple mons on one node. Default is false.
    allowMultiplePerNode: false
  mgr:
    modules:
    - name: pg_autoscaler
      enabled: true
  network:
    # osd and mgr would use the host network, but mon still uses the k8s network,
    # so this does not solve the problem of connecting from outside k8s
    # hostNetwork: true
  dashboard:
    enabled: true
  # cluster level storage configuration and selection
  storage:
    useAllNodes: false
    useAllDevices: false
    deviceFilter:
    location:
    config:
      metadataDevice:
      #databaseSizeMB: "1024" # this value can be removed for environments with normal sized disks (100 GB or larger)
      #journalSizeMB: "1024" # this value can be removed for environments with normal sized disks (20 GB or larger)
    # node list, using the node names known to k8s
    nodes:
    - name: k8s1138026node
      devices: # specific devices to use for storage can be specified for each node
      - name: "vdb"
      config: # configuration can be specified at the node level which overrides the cluster level config
        storeType: bluestore
    - name: k8s1138027node
      devices: # specific devices to use for storage can be specified for each node
      - name: "vdb"
      config: # configuration can be specified at the node level which overrides the cluster level config
        storeType: bluestore
    - name: k8s1138031node
      devices: # specific devices to use for storage can be specified for each node
      - name: "vdb"
      config: # configuration can be specified at the node level which overrides the cluster level config
        storeType: bluestore
    - name: k8s1138032node
      devices: # specific devices to use for storage can be specified for each node
      - name: "vdb"
      config: # configuration can be specified at the node level which overrides the cluster level config
        storeType: bluestore
  placement:
    all:
      nodeAffinity:
      tolerations:
    mgr:
      nodeAffinity:
      tolerations:
    mon:
      nodeAffinity:
      tolerations:
    # it is recommended to set node affinity for the osds
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-role
              operator: In
              values:
              - osd
      tolerations:

kubectl apply -f rook-ceph-cluster.yaml

Check the result:

[root@linuxba-node1 ceph]# kubectl get pod -n rook-ceph -owide
NAME                                            READY   STATUS      RESTARTS   AGE   IP              NODE             NOMINATED NODE   READINESS GATES
csi-cephfsplugin-5dthf                          3/3     Running     0          20h   172.16.138.33   k8s1138033node   <none>           <none>
csi-cephfsplugin-f2hwm                          3/3     Running     3          20h   172.16.138.27   k8s1138027node   <none>           <none>
csi-cephfsplugin-hggkk                          3/3     Running     0          20h   172.16.138.26   k8s1138026node   <none>           <none>
csi-cephfsplugin-pjh66                          3/3     Running     0          20h   172.16.138.32   k8s1138032node   <none>           <none>
csi-cephfsplugin-provisioner-78d9994b5d-9n4n7   4/4     Running     0          20h   10.244.2.80     k8s1138031node   <none>           <none>
csi-cephfsplugin-provisioner-78d9994b5d-tc898   4/4     Running     0          20h   10.244.3.81     k8s1138032node   <none>           <none>
csi-cephfsplugin-tgxsk                          3/3     Running     0          20h   172.16.138.31   k8s1138031node   <none>           <none>
csi-rbdplugin-22bp9                             3/3     Running     0          20h   172.16.138.26   k8s1138026node   <none>           <none>
csi-rbdplugin-hf44c                             3/3     Running     0          20h   172.16.138.32   k8s1138032node   <none>           <none>
csi-rbdplugin-hpx7f                             3/3     Running     0          20h   172.16.138.33   k8s1138033node   <none>           <none>
csi-rbdplugin-kvx7x                             3/3     Running     3          20h   172.16.138.27   k8s1138027node   <none>           <none>
csi-rbdplugin-provisioner-74d6966958-srvqs      5/5     Running     5          20h   10.244.1.111    k8s1138027node   <none>           <none>
csi-rbdplugin-provisioner-74d6966958-vwmms      5/5     Running     0          20h   10.244.3.80     k8s1138032node   <none>           <none>
csi-rbdplugin-tqt7b                             3/3     Running     0          20h   172.16.138.31   k8s1138031node   <none>           <none>
rook-ceph-mgr-a-855bf6985b-57vwp                1/1     Running     1          19h   10.244.1.108    k8s1138027node   <none>           <none>
rook-ceph-mon-a-7894d78d65-2zqwq                1/1     Running     1          19h   10.244.1.110    k8s1138027node   <none>           <none>
rook-ceph-mon-b-5bfc85976c-q5gdk                1/1     Running     0          19h   10.244.4.178    k8s1138033node   <none>           <none>
rook-ceph-mon-c-7576dc5fbb-kj8rv                1/1     Running     0          19h   10.244.2.104    k8s1138031node   <none>           <none>
rook-ceph-operator-5bd7d67784-5l5ss             1/1     Running     0          24h   10.244.2.13     k8s1138031node   <none>           <none>
rook-ceph-osd-0-d9c5686c7-tfjh9                 1/1     Running     0          19h   10.244.0.35     k8s1138026node   <none>           <none>
rook-ceph-osd-1-9987ddd44-9hwvg                 1/1     Running     0          19h   10.244.2.114    k8s1138031node   <none>           <none>
rook-ceph-osd-2-f5df47f59-4zd8j                 1/1     Running     1          19h   10.244.1.109    k8s1138027node   <none>           <none>
rook-ceph-osd-3-5b7579d7dd-nfvgl                1/1     Running     0          19h   10.244.3.90     k8s1138032node   <none>           <none>
rook-ceph-osd-prepare-k8s1138026node-cmk5j      0/1     Completed   0          19h   10.244.0.36     k8s1138026node   <none>           <none>
rook-ceph-osd-prepare-k8s1138027node-nbm82      0/1     Completed   0          19h   10.244.1.103    k8s1138027node   <none>           <none>
rook-ceph-osd-prepare-k8s1138031node-9gh87      0/1     Completed   0          19h   10.244.2.115    k8s1138031node   <none>           <none>
rook-ceph-osd-prepare-k8s1138032node-nj7vm      0/1     Completed   0          19h   10.244.3.87     k8s1138032node   <none>           <none>
rook-discover-4n25t                             1/1     Running     0          25h   10.244.2.5      k8s1138031node   <none>           <none>
rook-discover-76h87                             1/1     Running     0          25h   10.244.0.25     k8s1138026node   <none>           <none>
rook-discover-ghgnk                             1/1     Running     0          25h   10.244.4.5      k8s1138033node   <none>           <none>
rook-discover-slvx8                             1/1     Running     0          25h   10.244.3.5      k8s1138032node   <none>           <none>
rook-discover-tgb8v                             0/1     Error       0          25h   <none>          k8s1138027node   <none>           <none>
[root@linuxba-node1 ceph]# kubectl get svc,ep -n rook-ceph
NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/csi-cephfsplugin-metrics   ClusterIP   10.96.36.5      <none>        8080/TCP,8081/TCP   20h
service/csi-rbdplugin-metrics      ClusterIP   10.96.252.208   <none>        8080/TCP,8081/TCP   20h
service/rook-ceph-mgr              ClusterIP   10.96.167.186   <none>        9283/TCP            19h
service/rook-ceph-mgr-dashboard    ClusterIP   10.96.148.18    <none>        7000/TCP            19h
service/rook-ceph-mon-a            ClusterIP   10.96.183.92    <none>        6789/TCP,3300/TCP   19h
service/rook-ceph-mon-b            ClusterIP   10.96.201.107   <none>        6789/TCP,3300/TCP   19h
service/rook-ceph-mon-c            ClusterIP   10.96.105.92    <none>        6789/TCP,3300/TCP   19h

NAME                                 ENDPOINTS                                                             AGE
endpoints/ceph.rook.io-block         <none>                                                                25h
endpoints/csi-cephfsplugin-metrics   10.244.2.80:9081,10.244.3.81:9081,172.16.138.26:9081 + 11 more...     20h
endpoints/csi-rbdplugin-metrics      10.244.1.111:9090,10.244.3.80:9090,172.16.138.26:9090 + 11 more...    20h
endpoints/rook-ceph-mgr              10.244.1.108:9283                                                     19h
endpoints/rook-ceph-mgr-dashboard    10.244.1.108:7000                                                     19h
endpoints/rook-ceph-mon-a            10.244.1.110:3300,10.244.1.110:6789                                   19h
endpoints/rook-ceph-mon-b            10.244.4.178:3300,10.244.4.178:6789                                   19h
endpoints/rook-ceph-mon-c            10.244.2.104:3300,10.244.2.104:6789                                   19h
endpoints/rook.io-block              <none>                                                                25h

2.4 Verifying ceph with the Rook toolbox

Deploy the Rook toolbox into kubernetes; the deployment yaml is as follows:

vim rook-ceph-toolbox.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph
  labels:
    app: rook-ceph-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: rook-ceph-tools
        image: rook/ceph:v1.1.0
        command: ["/tini"]
        args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
        imagePullPolicy: IfNotPresent
        env:
        - name: ROOK_ADMIN_SECRET
          valueFrom:
            secretKeyRef:
              name: rook-ceph-mon
              key: admin-secret
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /dev
          name: dev
        - mountPath: /sys/bus
          name: sysbus
        - mountPath: /lib/modules
          name: libmodules
        - name: mon-endpoint-volume
          mountPath: /etc/rook
      # if hostNetwork: false, the "rbd map" command hangs, see /rook/rook/issues/
      hostNetwork: true
      volumes:
      - name: dev
        hostPath:
          path: /dev
      - name: sysbus
        hostPath:
          path: /sys/bus
      - name: libmodules
        hostPath:
          path: /lib/modules
      - name: mon-endpoint-volume
        configMap:
          name: rook-ceph-mon-endpoints
          items:
          - key: data
            path: mon-endpoints

# start the rook-ceph-tools pod
kubectl create -f rook-ceph-toolbox.yaml
# wait for the toolbox pod to finish starting
kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
# once the toolbox is running, you can enter it
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash

After entering the toolbox, check ceph's status:

# check the status with the ceph command
[root@linuxba-node5 /]# ceph -s
  cluster:
    id:     f3457013-139d-4dae-b380-fe86dc05dfaa
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 21h)
    mgr: a(active, since 21h)
    osd: 4 osds: 4 up (since 21h), 4 in (since 22h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   4.0 GiB used, 792 GiB / 796 GiB avail
    pgs:

[root@linuxba-node5 /]# ceph osd status
+----+----------------+-------+-------+--------+---------+--------+---------+-----------+
| id |      host      |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+----------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | k8s1138026node | 1026M |  197G |    0   |    0    |    0   |    0    | exists,up |
| 1  | k8s1138031node | 1026M |  197G |    0   |    0    |    0   |    0    | exists,up |
| 2  | k8s1138027node | 1026M |  197G |    0   |    0    |    0   |    0    | exists,up |
| 3  | k8s1138032node | 1026M |  197G |    0   |    0    |    0   |    0    | exists,up |
+----+----------------+-------+-------+--------+---------+--------+---------+-----------+
[root@linuxba-node5 /]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED       RAW USED     %RAW USED
    hdd       796 GiB     792 GiB     10 MiB      4.0 GiB          0.50
    TOTAL     796 GiB     792 GiB     10 MiB      4.0 GiB          0.50

POOLS:
    POOL     ID     STORED     OBJECTS     USED     %USED     MAX AVAIL
[root@linuxba-node5 /]# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR

total_objects    0
total_used       4.0 GiB
total_avail      792 GiB
total_space      796 GiB
[root@linuxba-node5 /]#

Note:

The config in a custom configmap named rook-config-override is automatically mounted into the ceph pods as /etc/ceph/ceph.conf, which lets you customize the configuration. (Managing settings with the Ceph CLI is recommended over this approach.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    osd crush update on start = false
    osd pool default size = 2

2.5 Exposing Ceph

With ceph deployed inside kubernetes, the relevant services, such as the dashboard and the ceph monitors, need to be exposed if they are to be accessed from outside.

2.5.1 Exposing the ceph dashboard

Exposing the dashboard through an ingress is recommended; for other approaches, refer to the relevant kubernetes documentation.

vim rook-ceph-dashboard-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # cert-manager.io/cluster-issuer: letsencrypt-prod
    # kubernetes.io/tls-acme: "true"
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph
spec:
  rules:
  - host: ceph-
    http:
      paths:
      - backend:
          serviceName: rook-ceph-mgr-dashboard
          servicePort: 7000
        path: /
  tls:
  - hosts:
    - ceph-
    secretName: tls-ceph-dashboard-linuxba-com

Retrieve the dashboard password:

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo

The username is admin. After logging in:

2.5.2 Exposing the ceph monitors

This step is only meant to verify whether the ceph monitors can be reached from outside kubernetes; the result shows that they indeed cannot.

Create a new service for the monitors with type LoadBalancer so that it can be used from outside k8s. Since I am running Alibaba Cloud kubernetes and only want an intranet load balancer, the following service is added:

vim rook-ceph-mon-svc.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
  labels:
    app: rook-ceph-mon
    mon_cluster: rook-ceph
    rook_cluster: rook-ceph
  name: rook-ceph-mon
  namespace: rook-ceph
spec:
  ports:
  - name: msgr1
    port: 6789
    protocol: TCP
    targetPort: 6789
  - name: msgr2
    port: 3300
    protocol: TCP
    targetPort: 3300
  selector:
    app: rook-ceph-mon
    mon_cluster: rook-ceph
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: LoadBalancer

Note:

For self-hosted kubernetes, MetalLB is recommended to provide LoadBalancer-type load balancing. At present rook does not support connecting to the ceph monitors from outside kubernetes.
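For reference, a minimal layer2 address pool for the ConfigMap-based MetalLB releases looks roughly like this (a sketch only; the namespace and name are MetalLB's defaults, and the address range is a placeholder to replace with addresses from your own network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250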

3. Configuring rook-ceph

Configure ceph so that kubernetes can use dynamic volume provisioning.

vim rook-ceph-block-pool.yaml

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 2
  # Sets up the CRUSH rule for the pool to distribute data only on the specified device class.
  # If left empty or unspecified, the pool will use the cluster's default CRUSH root, which usually distributes data over all OSDs, regardless of their class.
  # deviceClass: hdd

vim rook-ceph-filesystem.yaml

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: cephfs-k8s
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
  - replicated:
      size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true

vim rook-ceph-storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exist
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Retain
# Optional, if you want to add dynamic resize for PVC. Works for Kubernetes 1.14+
# For now only ext3, ext4, xfs resize support provided, like in Kubernetes itself.
allowVolumeExpansion: true
---
# apiVersion: storage.k8s.io/v1
# kind: StorageClass
# metadata:
#   name: cephfs
# # Change "rook-ceph" provisioner prefix to match the operator namespace if needed
# provisioner: rook-ceph.cephfs.
# parameters:
#   # clusterID is the namespace where operator is deployed.
#   clusterID: rook-ceph
#
#   # CephFS filesystem name into which the volume shall be created
#   fsName: cephfs-k8s
#
#   # Ceph pool into which the volume shall be created
#   # Required for provisionVolume: "true"
#   pool: cephfs-k8s-data0
#
#   # Root path of an existing CephFS volume
#   # Required for provisionVolume: "false"
#   # rootPath: /absolute/path
#
#   # The secrets contain Ceph admin credentials. These are generated automatically by the operator
#   # in the same namespace as the cluster.
#   csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
#   csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
#   csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
#   csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
#
# reclaimPolicy: Retain

Enter the toolbox and check the result:

[root@linuxba-node5 /]# ceph osd pool ls
replicapool
cephfs-k8s-metadata
cephfs-k8s-data0
[root@linuxba-node5 /]# ceph fs ls
name: cephfs-k8s, metadata pool: cephfs-k8s-metadata, data pools: [cephfs-k8s-data0 ]
[root@linuxba-node5 /]#

4. Verifying ceph with kubernetes dynamic volumes

The flex-driver ceph rbd was verified successfully.
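The exact manifests are in the repository linked at the end of the article; a minimal sketch of the kind of PVC and Deployment used for this test (names such as nginx-rbd-dy and the 1Gi request are assumptions inferred from the output below) is:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-rbd-dy
spec:
  storageClassName: ceph-rbd        # the flex rbd storageclass created above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rbd-dy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-rbd-dy
  template:
    metadata:
      labels:
        app: nginx-rbd-dy
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # the rbd volume shows up here in the df output below
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-rbd-dy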

[root@linuxba-node1 ceph]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
curl-66bdcf564-9hhrt            1/1     Running   0          23h
curl-66bdcf564-ghq5s            1/1     Running   0          23h
curl-66bdcf564-sbv8b            1/1     Running   1          23h
curl-66bdcf564-t9gnc            1/1     Running   0          23h
curl-66bdcf564-v5kfx            1/1     Running   0          23h
nginx-rbd-dy-67d8bbfcb6-vnctl   1/1     Running   0          21s
[root@linuxba-node1 ceph]# kubectl exec -it nginx-rbd-dy-67d8bbfcb6-vnctl /bin/bash
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/# ps -ef
bash: ps: command not found
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         197G  9.7G  179G   6% /
tmpfs            64M     0   64M   0% /dev
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/vda1       197G  9.7G  179G   6% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/rbd0      1014M   33M  982M   4% /usr/share/nginx/html
tmpfs            32G   12K   32G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            32G     0   32G   0% /proc/acpi
tmpfs            32G     0   32G   0% /proc/scsi
tmpfs            32G     0   32G   0% /sys/firmware
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/# cd /usr/share/nginx/html/
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html# ls
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html# ls -la
total 4
drwxr-xr-x 2 root root    6 Nov  5 08:47 .
drwxr-xr-x 3 root root 4096 Oct 23 00:25 ..
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html# echo a > test.html
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html# ls -l
total 4
-rw-r--r-- 1 root root 2 Nov  5 08:47 test.html
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html#

The cephfs verification failed, however: the pod stayed stuck waiting for the mount. This is explained in detail below.

5. Solving the problem of rook-ceph's csi-cephfs not mounting on flex-based Alibaba Cloud kubernetes

I checked the /var/log/messages log on every node running a pod that uses the cephfs pvc.

Going by the hints in the log, I first thought the permissions were insufficient:

kubectl get clusterrole system:node -oyaml

After adding permissions to that clusterrole, the error stayed exactly the same.

Only then did I remember that the cephfs storageclass had been created using the csi plugin approach.

Alibaba Cloud kubernetes supports either flex or csi, and my cluster was created with the flex plugin option.

In the flex plugin mode, the kubelet on the cluster nodes runs with enable-controller-attach-detach set to false.

To switch to the csi mode, you have to change this parameter to true yourself.

So, no sooner said than done: I went to the node hosting the pod stuck in ContainerCreating,

edited /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with vim, changed enable-controller-attach-detach to true, then ran systemctl daemon-reload && systemctl restart kubelet to restart kubelet, and found the pod had mounted its volume normally.
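Roughly, the per-node change described above looks like this (a sketch; the exact location of the flag in the kubelet drop-in can differ between node images, so check before editing):

# check the current value of the flag
grep -n "enable-controller-attach-detach" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# flip it from false to true, then restart kubelet
sed -i 's/enable-controller-attach-detach=false/enable-controller-attach-detach=true/' \
  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload && systemctl restart kubelet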

The conclusion: it really is the Alibaba Cloud kubernetes kubelet parameter enable-controller-attach-detach being false that prevents csi from working.

Changing this parameter everywhere is clearly not practical: the flex plugin option was chosen when purchasing the managed Alibaba Cloud kubernetes precisely so that kubelet would not need to be maintained, and now this one parameter would mean maintaining kubelet on every node. So, without touching the kubelet parameter, is there another way to solve this?

Previously I used the provisioner provided by kubernetes-incubator/external-storage/ceph; see my earlier article:

/ygqygq2/2163656

5.1 Creating the cephfs-provisioner

First, take the string after key in /etc/ceph/keyring inside the toolbox and write it to the file /tmp/ceph.client.admin.secret, create a secret from it, and start the cephfs-provisioner.
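One way to pull the key out of the toolbox into that file (a sketch; it assumes the admin keyring sits at /etc/ceph/keyring inside the toolbox, as mentioned above):

TOOLBOX=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}')
# grab only the key value from the keyring and save it locally
kubectl -n rook-ceph exec "$TOOLBOX" -- grep 'key = ' /etc/ceph/keyring | awk '{print $3}' > /tmp/ceph.client.admin.secret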

kubectl create secret generic ceph-admin-secret --from-file=/tmp/ceph.client.admin.secret --namespace=rook-ceph
kubectl apply -f cephfs/rbac/

Wait for it to start successfully:

[root@linuxba-node1 ceph]# kubectl get pod -n rook-ceph|grep cephfs-provisioner
cephfs-provisioner-5f64bb484b-24bqf   1/1   Running   0   2m

Then create the cephfs storageclass.

vim cephfs-storageclass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: /cephfs
reclaimPolicy: Retain
parameters:
  # ceph monitor svc IPs and port
  monitors: 10.96.201.107:6789,10.96.105.92:6789,10.96.183.92:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: "rook-ceph"
  claimRoot: /volumes/kubernetes

The kubernetes nodes still need ceph-common and ceph-fuse installed.

Use the Alibaba Cloud ceph yum repo; cat /etc/yum.repos.d/ceph.repo:

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors./ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors./ceph/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors./ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors./ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors./ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors./ceph/keys/release.asc
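With the repo in place, the packages can then be installed on each node (assuming CentOS 7 nodes, matching the el7 repo above):

yum install -y ceph-common ceph-fuse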

5.2 Verifying cephfs

Continuing the earlier test, it can now be seen working normally.
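The two manifests referenced in the commands below are in the repository linked at the end of the article; a minimal sketch of them (names and the 1Gi request are assumptions inferred from the output) looks like this:

# rook-ceph-cephfs-pvc.yaml (sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  storageClassName: cephfs          # the cephfs storageclass created in 5.1
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
# rook-ceph-cephfs-nginx.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-cephfs-dy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-cephfs-dy
  template:
    metadata:
      labels:
        app: nginx-cephfs-dy
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # the ceph-fuse mount shows up here in the df output below
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: cephfs-pvc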

kubectl delete -f rook-ceph-cephfs-nginx.yaml -f rook-ceph-cephfs-pvc.yaml
kubectl apply -f rook-ceph-cephfs-pvc.yaml
kubectl apply -f rook-ceph-cephfs-nginx.yaml

[root@linuxba-node1 ceph]# kubectl get pod|grep cephfs
nginx-cephfs-dy-5f47b4cbcf-txtf9   1/1     Running   0          3m50s
[root@linuxba-node1 ceph]# kubectl exec -it nginx-cephfs-dy-5f47b4cbcf-txtf9 /bin/bash
root@nginx-cephfs-dy-5f47b4cbcf-txtf9:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         197G  9.9G  179G   6% /
tmpfs            64M     0   64M   0% /dev
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/vda1       197G  9.9G  179G   6% /etc/hosts
shm              64M     0   64M   0% /dev/shm
ceph-fuse       251G     0  251G   0% /usr/share/nginx/html
tmpfs            32G   12K   32G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            32G     0   32G   0% /proc/acpi
tmpfs            32G     0   32G   0% /proc/scsi
tmpfs            32G     0   32G   0% /sys/firmware
root@nginx-cephfs-dy-5f47b4cbcf-txtf9:/# echo test > /usr/share/nginx/html/test.html

6. Summary

The ceph monitors cannot be reached from outside Kubernetes; because of this limitation, if external clients need the cluster it is still much better to deploy ceph directly on machines.

rook-ceph can provide rbd storageclasses through both the flex and csi drivers, while cephfs currently only supports a csi-driver storageclass; for cephfs volumes based on the flex driver, refer to the example kube-registry.yaml.

Finally, the yaml files used in this article:

/ygqygq2/kubernetes/tree/master/kubernetes-yaml/rook-ceph

References:

[1] https://rook.io/docs/rook/v1.1/ceph-quickstart.html

[2] https://rook.io/docs/rook/v1.1/helm-operator.html

[3] https://rook.io/docs/rook/v1.1/ceph-toolbox.html

[4] https://rook.io/docs/rook/v1.1/ceph-advanced-configuration.html#custom-cephconf-settings

[5] https://rook.io/docs/rook/v1.1/ceph-pool-crd.html

[6] https://rook.io/docs/rook/v1.1/ceph-block.html

[7] https://rook.io/docs/rook/v1.1/ceph-filesystem.html

[8] /kubernetes-incubator/external-storage/tree/master/ceph
