
Ceph Distributed Storage System: Performance Testing and Tuning


Test Environment

Deployment: the entire Ceph cluster runs on 4 ECS instances, all in the same VPC. The layout is shown in the figure below:

The Ceph test environment is as follows:

Ceph version 10.2.10 is installed on CentOS 7.4; the system is a fresh install with no tuning applied. Each OSD storage server has 4 cores and 8 GB of RAM, with one 300 GB Efficiency Cloud Disk (non-SSD) attached; the operating system and the OSD store share the same disk.
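
If you want to double-check these specs on each node, a few standard CentOS tools are enough. A quick sketch, not part of the original test run:

nproc                     # CPU cores
free -h                   # memory
lsblk                     # attached disks and their sizes
cat /etc/redhat-release   # OS release
ceph --version            # installed Ceph release (should report 10.2.10, jewel)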

[root@node1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6       0 rack test-bucket
-5       0 rack demo
-1 0.86458 root default
-2 0.28819     host node2
 0 0.28819         osd.0             up  1.00000          1.00000
-3 0.28819     host node3
 1 0.28819         osd.1             up  1.00000          1.00000
-4 0.28819     host node4
 2 0.28819         osd.2             up  1.00000          1.00000

A pool named test is used, with 64 PGs and 3 replicas:

[root@node1 ~]# ceph osd pool create test 64 64
pool 'test' created
[root@node1 ~]# ceph osd pool get test size
size: 3
[root@node1 ~]# ceph osd pool get test pg_num
pg_num: 64

The Ceph OSDs use the xfs filesystem (btrfs roughly doubles read/write performance, but btrfs is not recommended for production). The Ceph block size is the default 64K. The benchmark client runs on node1 and accesses the Ceph storage system for reads and writes over the same subnet within the same VPC.
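
To confirm which filesystem an OSD is actually running on, you can inspect its data directory. A minimal sketch, assuming the default OSD data path /var/lib/ceph/osd/ceph-<id>:

# Show the filesystem type backing osd.0 (run on the node hosting it, here node2)
df -T /var/lib/ceph/osd/ceph-0
# Or check the mount options in use
mount | grep ceph-0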

In this test the client generating the traffic sits inside the Ceph cluster, so network latency is small; in a real production environment the network must also be considered as a potential bottleneck. The production network access diagram is shown below:

Disk Performance Testing

Disk Write Throughput

Use dd to run a standard write test against the disk. Write a file with the following command; remember to add the oflag parameter to bypass the disk page cache.

node1:

[root@node1 ~]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 15.466 s, 69.4 MB/s

node2:

[root@node2 ~]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 13.6518 s, 78.7 MB/s

node3:

[root@node3 ~]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 13.6466 s, 78.7 MB/s

node4:

[root@node4 ~]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 13.6585 s, 78.6 MB/s

Apart from node1, disk write throughput is around 78 MB/s. node1 does not host an OSD, so it is not used as a reference when judging Ceph read/write performance.
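
dd gives a quick single-threaded number; if fio is available, it reports bandwidth, IOPS and latency in one run. A minimal sketch, not part of the original test (the file name, size and queue depth are arbitrary choices):

# 60-second sequential write with direct I/O (bypasses the page cache),
# 1 MiB blocks, queue depth 16 via libaio
fio --name=write_bw --filename=/tmp/fio-test --size=1G --bs=1M --rw=write \
    --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based --group_reporting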

Disk Write Latency

Use dd to write 512 bytes at a time, 10,000 times in a row.

node1:

[root@node1 test]# dd if=/dev/zero of=512 bs=512 count=10000 oflag=direct
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB) copied, 6.06715 s, 844 kB/s

node2:

[root@node2 test]# dd if=/dev/zero of=512 bs=512 count=10000 oflag=direct
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB) copied, 4.12061 s, 1.2 MB/s

node3:

[root@node3 test]# dd if=/dev/zero of=512 bs=512 count=10000 oflag=direct
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB) copied, 3.88562 s, 1.3 MB/s

node4:

[root@node4 test]# dd if=/dev/zero of=512 bs=512 count=10000 oflag=direct
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB) copied, 3.60598 s, 1.4 MB/s

Averaged across the OSD nodes, each run takes about 4 seconds, i.e. roughly 1.3 MB/s.
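
The same check can be done with fio, whose completion-latency (clat) percentiles give a more detailed picture than dd's single average. A minimal sketch mirroring the 512-byte direct writes above (the file name is arbitrary):

# 512-byte synchronous direct writes; --size=5M at --bs=512 is roughly 10,000 writes
fio --name=write_lat --filename=/tmp/fio-lat --size=5M --bs=512 --rw=write \
    --direct=1 --ioengine=sync --numjobs=1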

Cluster Network I/O Testing

Since client access goes through the RGW to the individual OSDs (except for the file storage service), the test focuses on network I/O from the RGW node to each OSD node.

RGW to osd.0

On the osd.0 node, use nc to listen on port 17480 for network I/O:

[root@node2 ~]# nc -v -l -n 17480 > /dev/null
Ncat: Version 6.40 ( /ncat )
Ncat: Listening on :::17480
Ncat: Listening on 0.0.0.0:17480
Ncat: Connection from 192.168.0.97.
Ncat: Connection from 192.168.0.97:33644.

From the RGW node, generate network I/O:

[root@node2 ~]# time dd if=/dev/zero | nc -v -n 192.168.0.97 17480
Ncat: Version 6.40 ( /ncat )
Ncat: Connected to 192.168.0.97:17480.
^C
121182456+0 records in
121182455+0 records out
62045416960 bytes (62 GB) copied, 413.154 s, 150 MB/s

real    6m53.156s
user    5m54.626s
sys     7m51.485s

Total network I/O: 62 GB in 413.154 seconds, an average of 150 MB/s.

RGW to osd.1

On the osd.1 node, use nc to listen on port 17480 for network I/O:

[root@node3 ~]# nc -v -l -n 17480 > /dev/null
Ncat: Version 6.40 ( /ncat )
Ncat: Listening on :::17480
Ncat: Listening on 0.0.0.0:17480
Ncat: Connection from 192.168.0.97.
Ncat: Connection from 192.168.0.97:35418.

From the RGW node, generate network I/O:

[root@node2 ~]# time dd if=/dev/zero | nc -v -n 192.168.0.98 17480
Ncat: Version 6.40 ( /ncat )
Ncat: Connected to 192.168.0.98:17480.
^C
30140790+0 records in
30140789+0 records out
15432083968 bytes (15 GB) copied, 111.024 s, 139 MB/s

real    1m51.026s
user    1m21.996s
sys     2m20.039s

Total network I/O: 15 GB in 111.024 seconds, an average of 139 MB/s.

RGW to osd.2

On the osd.2 node, use nc to listen on port 17480 for network I/O:

[root@node4 ~]# nc -v -l -n 17480 > /dev/null
Ncat: Version 6.40 ( /ncat )
Ncat: Listening on :::17480
Ncat: Listening on 0.0.0.0:17480
Ncat: Connection from 192.168.0.97.
Ncat: Connection from 192.168.0.97:39156.

From the RGW node, generate network I/O:

[root@node2 ~]# time dd if=/dev/zero | nc -v -n 192.168.0.99 17480
Ncat: Version 6.40 ( /ncat )
Ncat: Connected to 192.168.0.99:17480.
^C
34434250+0 records in
34434249+0 records out
17630335488 bytes (18 GB) copied, 112.903 s, 156 MB/s

real    1m52.906s
user    1m23.308s
sys     2m22.487s

Total network I/O: 18 GB in 112.903 seconds, an average of 156 MB/s.

Summary: between nodes within the cluster, network I/O averages around 150 MB/s, which is in line with expectations given the cluster's gigabit NICs.
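
The dd-over-nc approach gives a rough figure; if iperf3 is installed, it is a purpose-built alternative. A minimal sketch reusing the node IPs from above:

# On the receiving OSD node (e.g. node3), start an iperf3 server
iperf3 -s
# On the RGW node, run a 30-second test with 4 parallel streams
iperf3 -c 192.168.0.98 -t 30 -P 4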

RADOS Cluster Performance Testing

Preparation

Check the OSD layout of the Ceph cluster:

[root@node1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6       0 rack test-bucket
-5       0 rack demo
-1 0.86458 root default
-2 0.28819     host node2
 0 0.28819         osd.0             up  1.00000          1.00000
-3 0.28819     host node3
 1 0.28819         osd.1             up  1.00000          1.00000
-4 0.28819     host node4
 2 0.28819         osd.2             up  1.00000          1.00000

The cluster has 3 OSD nodes deployed, and all 3 are up (working normally).

Create a test pool for the RADOS benchmark, with 64 PGs and 3 replicas:

[root@node1 ~]# ceph osd pool create test 64 64
pool 'test' created
[root@node1 ~]# ceph osd pool get test size
size: 3
[root@node1 ~]# ceph osd pool get test pg_num
pg_num: 64

Check the default configuration of the test pool:

[root@node1 test]# ceph osd dump | grep test
pool 12 'test' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 37 flags hashpspool stripe_width 0

Check the resource usage of the test pool:

[root@node1 test]# rados -p test df
pool name  KB  objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
test        0        0       0         0        0   0      0   0      0
  total used        27044652      192
  total avail      854232624
  total space      928512000
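
Before running the benchmarks it is also worth confirming that the cluster is healthy and has free capacity. A quick sketch using standard ceph commands:

ceph -s              # overall status, should show HEALTH_OK and all OSDs up/in
ceph df              # per-pool and global capacity usage
ceph health detail   # details if the status is not HEALTH_OK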

Write Performance Test

Run the write benchmark:

[root@node1 ~]# rados bench -p test 60 write --no-cleanup
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0 objects
Object prefix: benchmark_data_node1_26604
sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
  0       0         0         0         0         0            -           0
  1      16        31        15   59.9966        60     0.953952    0.614647
  2      16        38        22   43.9954        28      1.38736    0.781039
...
 60      16       624       608   40.5288        48      1.65518     1.56738
Total time run:         60.821654
Total writes made:      625
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     41.1038
Stddev Bandwidth:       23.0404
Max bandwidth (MB/sec): 72
Min bandwidth (MB/sec): 0
Average IOPS:           10
Stddev IOPS:            5
Max IOPS:               18
Min IOPS:               0
Average Latency(s):     1.55581
Stddev Latency(s):      0.981606
Max latency(s):         4.29984
Min latency(s):         0.229762

With the optional --no-cleanup parameter, the data written to the pool is not deleted when the test finishes, so it can be reused to test the cluster's read performance.
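
To verify that the benchmark objects were indeed kept, the pool contents can be listed. A small sketch (the object names follow the benchmark_data_* prefix shown in the output above):

# List the objects left behind by "rados bench ... write --no-cleanup"
rados -p test ls | head
# Check how much space they consume
rados -p test df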

From the data above: the average write bandwidth is 41 MB/s, the maximum bandwidth is 72 MB/s, and the bandwidth standard deviation is 23 (reflecting how stable the transfer rate is).

Read Performance Test

Run the random-read benchmark:

[root@node1 ~]# rados bench -p test 60 rand
sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
  0       0         0         0         0         0            -           0
  1      16       101        85   339.935       340     0.270579    0.147057
  2      16       145       129   257.955       176     0.246583    0.220784
...
 60      16      2765      2749   183.247       192     0.213941    0.344692
Total time run:       60.297422
Total reads made:     2765
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   183.424
Average IOPS:         45
Stddev IOPS:          5
Max IOPS:             85
Min IOPS:             41
Average Latency(s):   0.346804
Max latency(s):       1.2
Min latency(s):       0.0080755

From the data above: the average read bandwidth is 183 MB/s, the average latency is about 0.3 s, and the average IOPS is 45.
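
rados bench also has a sequential-read mode; for comparison with the random-read numbers above, it can be pointed at the same benchmark objects left by the write run (same pool and duration; this was not part of the original test):

# Sequential reads of the objects written earlier with --no-cleanup
rados bench -p test 60 seq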

Cleaning Up Test Data

rados -p test cleanup

Delete the test pool:

[root@node1 ~]# ceph osd pool delete test test --yes-i-really-really-mean-it
pool 'test' removed

Conclusions

Read and write performance tests were run against RADOS and RBD with blocks of different sizes; the overall findings are as follows:

Ceph's read/write performance on large files is excellent, reaching up to 2 GB/s.
RADOS reads are roughly 10x faster than writes, making Ceph a good fit for read-heavy, high-concurrency workloads.
Pool configuration: 2 replicas perform noticeably better than 3, but the official recommendation is 3 replicas, since 2 is not safe enough.
With reasonably specced machines (4 cores / 8 GB or better), Ceph easily hits the 1 Gb network bandwidth ceiling; to push performance further, the available bandwidth must be increased.
More PGs give better load balancing, but in these tests a larger PG count did not improve performance.
Disabling the FileStore flusher (setting it to false) gives a noticeable performance improvement.
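
A minimal sketch of the tuning knobs mentioned above, assuming the test pool still exists; the values are illustrative, not recommendations:

# Drop to 2 replicas (faster, but 3 replicas remain the official advice for safety)
ceph osd pool set test size 2
# Raise the PG count (pgp_num must be raised to match; pg_num can only grow)
ceph osd pool set test pg_num 128
ceph osd pool set test pgp_num 128
# Disable the FileStore flusher at runtime (FileStore backends only);
# persist it with "filestore flusher = false" under [osd] in ceph.conf
ceph tell osd.* injectargs '--filestore_flusher=false'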
