🗓 Date
👥 Participants
Lancs:
Glasgow:
Apologies:
CC:
🥅 Goals
List of Epics
New tickets
Consider new functionality / items
Detailed discussion of important topics
Site report activity
🗣 Discussion topics
Current status of Echo Gateways / WNs testing
Recent sandboxes for review / deployments:
| Item | Presenter | Notes |
|---|---|---|
| **Operational Issues** | | |
| Checksums issue with an ATLAS file | | https://github.com/xrootd/xrootd/issues/2388 https://ggus.eu/index.php?mode=ticket_info&ticket_id=169360 Checksum was requested before the whole file had been uploaded. There is no way to do a stale-checksum check in Ceph, so the original checksum "sticks" to the file. Fix in place on the RAL side by clearing checksums after a write completes (see the first sketch after this table). |
| cms-aaa naming convention | | cms-aaa is the only remaining personality to use proxy/ceph as the XRootD service names. A separate main/supporting naming convention would be more appropriate (not so urgent). CC created, and a sandbox is prepared. |
| XRootD Managers De-VMWareification | | Option 2 preferred for efficiency, but Option 1 decided on: it would be simpler to implement as a temporary fix, since the move would later be reversed. Antares TPC nodes to be moved to an Echo leaf switch; IPv4 real estate to be confirmed with James; hosts moved to rack. |
| Compilation and rollout status with XrdCeph and Rocky 8: 5.7.x | | 5.7.2 published, but skipped on the farm due to a pfc bug; a possible RAL release equivalent to 5.7.3 with a fix for that and 5.6.0 client compatibility. |
| Shoveler | | |
| On-the-fly Checksums | | Simple PoC calculating Adler32 in the XrdCeph plugin is mostly working. Negligible reduction in write rate compared to not calculating Adler32 on-the-fly (see the second sketch after this table). |
| Deletions | | NTR |
| XRootD Writable Workernode Gateway Hackathon | | XRootD Writable Workernode Gateway Hackathon (XWWGH) sandbox with fixes present, ready for testing. |
| XRootD testing framework | | Discussion in the Storage Meeting on how to integrate the various testing structures within the UK. |
| 100 GbE Gateway testing | | UKSRC: XRootD used for SRCNet testing. Tier-1 cabled, but awaiting some work to progress on the switch. |
| UKSRC Storage Architecture | | |
| Tokens Status | | |
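The checksum fix noted above (clear the cached checksum once a write has completed, so a fresh one is computed from the final data) can be illustrated with a short librados sketch. This is not the actual RAL implementation: the pool name, object name, Ceph config path, and the xattr key assumed to hold the cached checksum are all placeholders for illustration only.

```python
#!/usr/bin/env python3
"""Sketch: drop a cached checksum attribute after a write completes,
so the next checksum query recomputes it instead of returning a stale value."""
import rados

POOL = "atlas"                         # hypothetical data pool
OBJECT = "datafile.0000000000000000"   # hypothetical object name
CKS_XATTR = "XrdCks.adler32"           # assumed xattr key for the cached checksum

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        # Remove the cached checksum; a later checksum request will
        # have to recompute it from the freshly written data.
        ioctx.rm_xattr(OBJECT, CKS_XATTR)
    except (rados.ObjectNotFound, rados.NoData):
        pass  # nothing cached for this object
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```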
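For the on-the-fly checksums PoC, the underlying idea is simply to fold each written chunk into a running Adler32 so the file never needs a second read pass once the upload finishes. A minimal Python sketch of that arithmetic (not the XrdCeph plugin code itself) is below.

```python
import zlib

def adler32_streaming(chunks):
    """Compute Adler32 incrementally as data chunks arrive (on-the-fly)."""
    cksum = zlib.adler32(b"")                 # initial Adler32 value
    for chunk in chunks:
        cksum = zlib.adler32(chunk, cksum)    # fold each written chunk into the running value
    return cksum & 0xFFFFFFFF

if __name__ == "__main__":
    data = [b"first chunk of an upload", b"second chunk"]
    print(f"{adler32_streaming(data):08x}")
    # Sanity check: matches a single-pass computation over the whole file.
    assert adler32_streaming(data) == zlib.adler32(b"".join(data)) & 0xFFFFFFFF
```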
on GGUS:
Site reports
Lancaster: On this week’s Lancaster Rant: We had a period of storage sadness last night. Atlas deleted ~20k files in the space of about 30 minutes, and whilst Ceph was recovering, LSST jobs came from behind and gave the storage a wedgie with high IOPs. CephFS got slow, xrootd servers got sad, some fell over, CephFS got more unhappy. It was a whole thing, and Gerard spent the morning restarting xroot servers with his new scripts.
The point of my ranting is that it seems half our problems could be solved if we could get xrootd to rate/connection limit, so things didn’t get into so bad a state that we needed to reboot them. We don’t think xroot has this functionality in itself (James reminded me of the throttle plugin, but we don’t know if this works with http). The preliminary thought would be something on the redirector that, if it detects problems or high load, rather than redirecting to the least-worst-off xroot server just returns a polite “try again later” (503?).
In other news, as discussed on Wednesday we’ve been looking at ways we could remove TLS from internal transfers and how to xroot-plumb that together, but Jyothish may have crushed our hopes there by pointing out that scitokens require TLS to be enabled, so such a move wouldn’t be future-proof, or would have to have extra plumbing (these ponderings are accompanied by the idea of replacing internal auth for at least some users with something faster; this again is LSST-driven, with their teeny-tiny files causing hassle).
XRootD managers can send a wait response for requests if no server is available; this can be done by setting a lower maxload per server.
Scrub errors/bad sectors on some disks causing issues; recreating the OSDs and backfilling is what is usually done at RAL.
For slow ops, monitoring read IO time and the distribution of slow ops across PGs/OSDs can indicate whether a bad disk is the issue.
Dan has a script that periodically writes and monitors data.
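A minimal sketch of such a periodic write/read canary is below. This is only an illustration of the idea, not Dan’s actual script: the CephFS path, payload size, probe interval, and slow-probe threshold are made-up placeholders.

```python
#!/usr/bin/env python3
"""Canary probe: periodically write a test file, read it back,
and warn if either step is slow or the content does not verify."""
import hashlib
import os
import time

TEST_PATH = "/cephfs/canary/probe.dat"   # hypothetical CephFS path (directory must exist)
PAYLOAD = os.urandom(4 * 1024 * 1024)    # 4 MiB test payload
SLOW_SECONDS = 5.0                       # arbitrary alert threshold

def timed(fn):
    start = time.monotonic()
    result = fn()
    return result, time.monotonic() - start

def probe():
    # Write and flush the payload, timing the operation.
    def write():
        with open(TEST_PATH, "wb") as f:
            f.write(PAYLOAD)
            f.flush()
            os.fsync(f.fileno())
    _, write_t = timed(write)

    # Read it back and confirm the content is intact.
    data, read_t = timed(lambda: open(TEST_PATH, "rb").read())
    ok = hashlib.sha256(data).digest() == hashlib.sha256(PAYLOAD).digest()

    if not ok or write_t > SLOW_SECONDS or read_t > SLOW_SECONDS:
        print(f"WARNING: ok={ok} write={write_t:.1f}s read={read_t:.1f}s")
    else:
        print(f"OK: write={write_t:.1f}s read={read_t:.1f}s")

if __name__ == "__main__":
    while True:
        probe()
        time.sleep(300)   # probe every five minutes
```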
Glasgow
✅ Action items
How to replace the original functionality of fstream monitoring, now that OpenSearch has replaced the existing solutions.