Translated by Leon and proofread by luokexue for the Ceph China Community. English original: Introducing ceph-lazy.
This post was co-written with Gregory Charot, the tool's author. Ever found yourself typing a long chain of pipes to get a specific value the Ceph CLI does not expose directly, or struggling to strip away surrounding context to isolate just the number you need? It usually ends with a quick-and-dirty sed/awk one-liner that, in the best case, becomes an alias, or is forgotten in your shell history until the next time you need it. Enter ceph-lazy, a shell tool that bundles these queries which would otherwise require several piped commands or text manipulation.
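To illustrate the kind of throwaway pipe the tool replaces, here is a hedged sketch: the heredoc stands in for real `ceph osd tree` output (the exact column layout varies between releases), and an awk state machine pulls out one host's OSD IDs.

```shell
#!/bin/sh
# Sketch of the quick-and-dirty pipe ceph-lazy replaces: pull the OSD IDs
# hosted on one node out of `ceph osd tree` output. The heredoc stands in
# for the real command; its layout mimics a typical listing and is an
# assumption of this example.
ceph_osd_tree() {
cat <<'EOF'
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT
-1 0.05846 root default
-2 0.01949     host ceph01
 0 0.00975         osd.0        up  1.00000
 1 0.00975         osd.1        up  1.00000
-3 0.01949     host ceph02
 2 0.00975         osd.2        up  1.00000
 3 0.00975         osd.3        up  1.00000
EOF
}

# awk state machine: remember when we enter the wanted host's block,
# then print osd.* entries until the next host line.
ceph_osd_tree | awk -v host=ceph01 '
  /host / { in_host = ($NF == host); next }
  in_host && $3 ~ /^osd\./ { print $3 }'
```

This prints `osd.0` and `osd.1` for ceph01 - exactly the sort of one-liner that ends up buried in shell history.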
It goes from the most basic queries, such as:
listing the OSD nodes
listing the OSDs running on a node
to more complex ones, such as:
finding the hosts that store a particular PG
querying the effective size of an RBD image (on pre-Jewel releases)
getting a node's total usage, or listing the hosts/OSDs backing an RBD image
It can also produce some basic status reports on PG and OSD usage.
The current list of commands is as follows:
COMMANDS
=========

Host
-----
host-get-osd hostname                  List all OSD IDs attached to a particular node.
host-get-nodes                         List all storage nodes.
host-osd-usage hostname                Show total OSD space usage of a particular node (-d for details).
host-all-usage                         Show total OSD space usage of each nodes (-d for details)

Placement groups
-----------------
pg-get-host pgid                       Find PG storage hosts (first is primary)
pg-most-write                          Find most written PG (nb operations)
pg-less-write                          Find less written PG (nb operations)
pg-most-write-kb                       Find most written PG (data written)
pg-less-write-kb                       Find less written PG (data written)
pg-most-read                           Find most read PG (nb operations)
pg-less-read                           Find less read PG (nb operations)
pg-most-read-kb                        Find most read PG (data read)
pg-less-read-kb                        Find less read PG (data read)
pg-empty                               Find empty PGs (no stored object)

RBD
----
rbd-prefix pool_name image_name        Return RBD image prefix
rbd-count pool_name image_name         Count number of objects in a RBD image
rbd-host pool_name image_name          Find RBD primary storage hosts
rbd-osd pool_name image_name           Find RBD primary OSDs
rbd-size pool_name image_name          Print RBD image real size
rbd-all-size pool_name                 Print all RBD images size (Top first)

OSD
----
osd-most-used                          Show the most used OSD (capacity)
osd-less-used                          Show the less used OSD (capacity)
osd-get-ppg osd_id                     Show all primaries PGS hosted on a OSD
osd-get-pg osd_id                      Show all PGS hosted on a OSD

Objects
--------
object-get-host pool_name object_id    Find object storage hosts (first is primary)
Some interesting commands:
$ ceph-lazy host-all-usage
Host:ceph01 | OSDs:2 | Total_Size:39.0GB | Total_Used:2.8GB | Total_Available:36.1GB
Host:ceph02 | OSDs:2 | Total_Size:39.0GB | Total_Used:2.8GB | Total_Available:36.1GB
Host:ceph03 | OSDs:2 | Total_Size:39.0GB | Total_Used:2.8GB | Total_Available:36.1GB
Useful information to check whether data is evenly spread across the cluster:
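A sketch of the aggregation behind such a per-host report. The inline data (host, OSD, size and used, in GB) is invented for this example; the real tool derives these figures from Ceph's JSON output via jq rather than from a pre-built table.

```shell
#!/bin/sh
# Roll per-OSD capacity figures up into a per-host summary, in the spirit of
# `ceph-lazy host-all-usage`. Input format (an assumption of this sketch):
#   host  osd_id  size_gb  used_gb   -- sorted by host.
host_usage() {
  awk '
    function emit() {
      printf "Host:%s | OSDs:%d | Total_Size:%.1fGB | Total_Used:%.1fGB | Total_Available:%.1fGB\n",
             h, n, s, u, s - u
    }
    $1 != h { if (h) emit(); h = $1; n = s = u = 0 }   # a new host block starts
    { n++; s += $3; u += $4 }                          # accumulate OSD stats
    END { if (h) emit() }                              # flush the last host
  '
}

host_usage <<'EOF'
ceph01 osd.0 19.5 1.4
ceph01 osd.1 19.5 1.4
ceph02 osd.2 19.5 1.4
ceph02 osd.3 19.5 1.4
EOF
```

With this made-up data the sketch prints one summary line per host, 2 OSDs and 39.0 GB of raw space each.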
$ ceph-lazy rbd-host rbd myrbd
ceph01
ceph02
ceph03
Not very interesting on a three-node cluster, but it gets much more interesting on larger clusters, especially those with a customized CRUSH map.
For those who are not running Jewel or do not have the rbd du command:
$ ceph-lazy rbd-all-size rbd
2614.32 MB - myrbd
500 MB - rbd01
150 MB - rbd03
50 MB - rbd02
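The idea behind this pre-Jewel size estimate is simple: count how many of the image's backing RADOS objects actually exist and multiply by the object size (4 MB by default). A minimal sketch with made-up object counts; ceph-lazy itself obtains the counts by listing the pool's objects per image prefix.

```shell
#!/bin/sh
# Estimate RBD "real" sizes the way rbd-all-size can on pre-Jewel clusters:
# existing backing objects x object size (4 MB default), sorted top first.
# Input format (host of this sketch's assumptions): image_name object_count.
awk '{ printf "%d MB - %s\n", $2 * 4, $1 }' <<'EOF' | sort -rn
rbd01 125
rbd02 12
rbd03 37
EOF
```

Note this is an upper-bound estimate: sparse objects smaller than 4 MB are counted at full size.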
Finding the hosts that store a PG (the first one is the primary):
$ ceph-lazy pg-get-host 0.30
OSD:osd.1 | Host:osd02
OSD:osd.4 | Host:osd01
OSD:osd.3 | Host:osd03
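A sketch of the lookup behind this: extract the PG's acting set (as printed by `ceph pg map`) and join each OSD ID against an OSD-to-host table. The table here is hard-coded for the example, and the exact plumbing is an assumption; ceph-lazy performs this join on Ceph's JSON output with jq.

```shell
#!/bin/sh
# Map a PG to its storage hosts: parse the acting set out of a
# `ceph pg map <pgid>`-style line, then resolve each OSD ID to a host.
pg_map_line='osdmap e14 pg 0.30 (0.30) -> up [1,4,3] acting [1,4,3]'

# Extract "1,4,3" from the acting set and split it into one OSD ID per line.
acting=$(printf '%s\n' "$pg_map_line" | sed 's/.*acting \[\(.*\)\]/\1/' | tr ',' '\n')

# OSD -> host lookup; the placement below is made up for the sketch.
for id in $acting; do
  awk -v id="$id" '$1 == id { printf "OSD:osd.%s | Host:%s\n", id, $2 }' <<'EOF'
1 osd02
4 osd01
3 osd03
EOF
done
```

The first ID in the acting set is the primary, which is why the first host printed is the primary's.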
You can find the tool on GitHub as Ceph lazy. It has a couple of dependencies, such as jq to parse JSON and the bc calculator used by some commands.
Tired of pipes? Get lazy!
