...
on GGUS:
Site reports
Lancaster: Day 2 of our mini-DC run was a bit spoilt by Ceph having a wobbly, presumably because of some sick OSDs. As Matt understands it, we hit the last little bit of the backfilling backlog we had to do, but rather than being a weight lifted off our cluster, things "cramped up", focussing operations on a small number of PGs. (Gerard can correct me if I have the wrong end of the stick.) To top it off, one of our OSDs keeled over physically.
We also had a bunch of WNs whose cephfs mounts entered a horrid state (unrelated to the mini-DC, but possibly triggered by a large burst of LSST jobs). This is the third time this has happened; it was discussed in the storage meeting yesterday and tracked to a likely kernel bug. From dmesg:
[Tue Nov 26 16:02:51 2024] libceph: wrong peer, want (1)10.41.12.56:6929/2269030683, got (1)10.41.12.56:6929/324345577
[Tue Nov 26 16:02:51 2024] libceph: osd435 (1)10.41.12.56:6929 wrong peer at address
Our bindfs tests were disappointing: our bonnie tests show read rates through the bind mount roughly 1/8th of what we see through the regular one. Write seems less affected, and latency doesn't appear to be impacted. Using the bindfs "multithreaded" option didn't help much. The probable culprit is the tiny (and, as far as we can see, unchangeable) default bindfs blocksize. But these were noddy bonnie tests, and maybe this is the wrong tool? Would a noddy dd test be better? Wailing and gnashing of teeth.
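For comparison, a noddy dd test along these lines would give a simple sequential-throughput number through each mount. This is only a sketch: the mount-point paths are hypothetical (override `CEPH`/`BIND` to point at the real cephfs and bindfs mounts), the file size here is deliberately small, and a real run should use something like `count=256` with `bs=4M` for ~1 GiB, dropping the page cache between write and read so the reads aren't served from memory.

```shell
# Hypothetical mount points -- override these to the real cephfs mount
# and the bindfs mount over it.
CEPH=${CEPH:-/tmp/ceph-test}
BIND=${BIND:-/tmp/bind-test}
mkdir -p "$CEPH" "$BIND"

# Write test: large block size so the bindfs default blocksize, not dd,
# is the limiting factor. conv=fsync forces data to storage before
# dd reports its rate. (Small count here; use count=256 for ~1 GiB.)
dd if=/dev/zero of="$CEPH/ddtest.bin" bs=4M count=4 conv=fsync 2>&1 | tail -n1
dd if=/dev/zero of="$BIND/ddtest.bin" bs=4M count=4 conv=fsync 2>&1 | tail -n1

# Read test: on a real run, drop the page cache first (as root):
#   sync; echo 3 > /proc/sys/vm/drop_caches
# so reads actually go through the filesystem.
dd if="$CEPH/ddtest.bin" of=/dev/null bs=4M 2>&1 | tail -n1
dd if="$BIND/ddtest.bin" of=/dev/null bs=4M 2>&1 | tail -n1
```

dd only measures single-stream sequential I/O, so it would isolate the blocksize effect but say nothing about the metadata and seek behaviour that bonnie also exercises.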
Glasgow
✅ Action items
How to replace the original functionality of fstream monitoring, now that OpenSearch has replaced the existing solutions.
...