1. Problem
My demo ceph cluster outputs the error message as below:
ceph1:~ # ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 1.32 is active+clean+inconsistent, acting [0,2,1]
1 scrub errors
So, the PG in question is 1.32, and its acting set is [0,2,1], i.e. OSDs 0, 2 and 1, with osd.0 as the primary.
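Rather than reading the acting set off by eye, the same information can be parsed out of `ceph health detail` mechanically. This is a sketch I am adding, not part of the original post; it assumes the "pg &lt;id&gt; is ...inconsistent, acting [a,b,c]" line format shown above, where the first OSD in the acting set is the primary:

```shell
# Sketch: extract each inconsistent PG's id and its primary OSD from
# `ceph health detail`. Assumes lines of the form:
#   pg 1.32 is active+clean+inconsistent, acting [0,2,1]
ceph health detail | awk '/^pg .*inconsistent/ {
    gsub(/[\[\]]/, "", $NF)        # strip brackets from the acting set
    split($NF, osds, ",")          # osds[1] is the primary OSD id
    printf "pg=%s primary=osd.%s\n", $2, osds[1]
}'
```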
2. Let's fix it
On one of the OSD nodes, e.g. ceph1, instruct the PG to repair itself:
ceph1:~ # ceph pg repair 1.32
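Here only one PG is inconsistent, so a single `ceph pg repair` suffices. If several PGs were flagged at once, a small loop could repair each of them; this is my addition, not from the original post, and it assumes the same "pg &lt;id&gt; is ...inconsistent" line format of `ceph health detail` shown above:

```shell
# Sketch: repair every PG that `ceph health detail` reports as inconsistent.
# Assumes lines of the form "pg 1.32 is active+clean+inconsistent, acting [...]".
for pg in $(ceph health detail | awk '/^pg [0-9]+\.[0-9a-f]+ is .*inconsistent/ {print $2}'); do
    echo "repairing pg $pg"
    ceph pg repair "$pg"
done
```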
Check whether the repair succeeded:
ceph1:~ # ceph -w
    cluster 388fe838-f746-48cb-a067-2cf73859994d
     health HEALTH_OK
     monmap e1: 3 mons at {ceph1=147.2.208.109:6789/0,ceph2=147.2.208.44:6789/0,ceph3=147.2.208.73:6789/0}
            election epoch 138, quorum 0,1,2 ceph2,ceph3,ceph1
     osdmap e88: 3 osds: 3 up, 3 in
      pgmap v994: 128 pgs, 2 pools, 135 bytes data, 2 objects
            109 MB used, 45937 MB / 46046 MB avail
                 128 active+clean
  recovery io 0 B/s, 0 objects/s
2016-08-26 13:12:13.997970 mon.0 [INF] pgmap v994: 128 pgs: 128 active+clean; 135 bytes data, 109 MB used, 45937 MB / 46046 MB avail; 0 B/s, 0 objects/s recovering
2016-08-26 13:12:31.684651 mon.0 [INF] HEALTH_OK
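Instead of watching the `ceph -w` stream by hand until HEALTH_OK scrolls past, a small polling loop can block until the cluster is healthy again. This is a convenience sketch of mine, not from the original post:

```shell
# Sketch: poll `ceph health` until the cluster reports HEALTH_OK.
until ceph health | grep -q HEALTH_OK; do
    echo "waiting for repair to finish..."
    sleep 5
done
echo "cluster is healthy again"
```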
Thanks to this blog post for the approach:
[1] http://ceph.com/planet/ceph-manually-repair-object/
Source: oschina
Link: https://my.oschina.net/u/2475751/blog/738231