
Item

Presenter

Notes

Operational Issues
Gateways and WNs:
- Current status and upcoming changes

(Gateway Auth failures)

Upgrades of GWs this week; Main GWs complete; Alice in progress

AAA gateways with large numbers of connections:

[Image attached: image-20241128-123546.png]

(gw10):
 3305 CLOSE-WAIT
37931 ESTAB

~3k ESTAB from remote hosts, 2.8k CLOSE_WAIT from remote hosts
Current config: xrd.timeout idle 60m read 10
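For reference, a minimal sketch of how per-state connection counts like the gw10 figures above could be reproduced, split into local vs remote peers. It assumes psutil is available on the gateway and uses a placeholder local network range; the quoted numbers were presumably gathered with ss directly.

```python
# Minimal sketch: count TCP states (e.g. ESTABLISHED, CLOSE_WAIT) on a
# gateway, split by whether the peer looks local or remote. Assumes psutil
# is installed and enough privilege to see all sockets; the local network
# range below is a hypothetical placeholder.
from collections import Counter
import ipaddress

import psutil

LOCAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]  # placeholder local range

def is_remote(raddr):
    """True if the peer address falls outside the assumed local networks."""
    if not raddr:  # unconnected sockets have no remote address
        return False
    ip = ipaddress.ip_address(raddr.ip)
    return not any(ip in net for net in LOCAL_NETS)

counts = Counter()
for conn in psutil.net_connections(kind="tcp"):
    scope = "remote" if is_remote(conn.raddr) else "local"
    counts[(conn.status, scope)] += 1

for (status, scope), n in sorted(counts.items()):
    print(f"{status:12s} {scope:6s} {n}")
```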

cms-aaa naming convention

cms-aaa is the only remaining personality to use proxy/ceph as the xrootd service names


A separate naming convention (main/supporting) would be more appropriate

(not so urgent).

CC created, but due to be reviewed in December

XRootD Managers De-VMWareification

Thomas, Jyothish (STFC,RAL,SC)

[Attached file: Redirector de-VMWareification.pptx]

Option 2 preferred for efficiency, but Option 1 decided on

Option 1 is simpler to implement as a temporary fix, since the move is expected to be reversed later

Antares TPC nodes to be moved to an Echo leaf switch; IPv4 address availability to be confirmed with James

Compilation and rollout status of XRootD 5.7.x with XrdCeph on Rocky 8

Thomas, Jyothish (STFC,RAL,SC)

Shoveler

Katy Ellis

Shoveler installation and monitoring

Deletion studies through RDR

Ian Johnson

Deletions

[Jira: XRD-83 (System Jira)]

XRootD Writable Workernode Gateway Hackathon

Thomas, Jyothish (STFC,RAL,SC)

XRootD Writable Workernode Gateway Hackathon (XWWGH)

Tues 12th Nov, 16:00
Writable-workernode hackathon

Outcomes

Xrootd testing framework

XRootD Site Testing Framework

100 GbE Gateway testing:
SKA / Tier-1

James Walder

/wiki/spaces/UK/pages/215941180

UKSRC Storage Architecture

Tokens Status

  • Operational

  • Technical

  • Accounting

...

on GGUS:

Site reports

Lancaster: Day 2 of our mini-DC run was a bit spoilt by Ceph having a wobbly, presumably because of some sick OSDs. As Matt understands it, we hit the last little bit of backfilling backlog we had to do, but rather than being a weight lifted off our cluster, things “cramped up”, concentrating operations on a small number of PGs. (Gerard can correct me if I have the wrong end of the stick.) To top it off, one of our OSDs keeled over physically.

Our bindfs tests were disappointing: our bonnie tests show read rates through the bind mount ~1/8th of what we see through the regular mount. Writes seem less affected, and latency doesn’t appear to be impacted. Using the bindfs “multithread” option didn’t help much. The probable culprit is the tiny (and, AFAICS, unchangeable) default bindfs blocksize. But these were noddy bonnie tests, and maybe this is the wrong tool? Would noddy dd be better?
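On the dd question, a rough dd-style sequential read test could be pointed at the same file through the bind mount and through the regular mount. A minimal Python sketch is below; the paths and block size are hypothetical, and the page cache would need dropping between runs for a fair comparison.

```python
# Rough dd-style sequential read test, to compare throughput through the
# bindfs mount against the underlying mount. Paths are hypothetical.
# Drop the page cache between runs (echo 3 > /proc/sys/vm/drop_caches)
# so the second read isn't served from memory.
import time

def read_rate(path, block_size=1024 * 1024):
    """Read a file sequentially and return the rate in MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    return total / (1024 * 1024) / (time.monotonic() - start)

# Hypothetical paths: the same test file seen directly and via bindfs.
for label, path in [("direct", "/mnt/storage/testfile"),
                    ("bindfs", "/mnt/bind/testfile")]:
    print(f"{label}: {read_rate(path):.1f} MB/s")
```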

Glasgow

✅ Action items

How to replace the original functionality of fstream monitoring, now that OpenSearch has replaced the existing solutions.
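Not a decided design, just a sketch of one possible direction: pushing per-transfer records into OpenSearch with opensearch-py so the kind of detail fstream monitoring provided stays queryable. The host, index name, and record fields below are illustrative assumptions, not an agreed schema.

```python
# Sketch only: index one per-transfer record into OpenSearch. The host,
# index name, and record fields are placeholders for illustration.
from datetime import datetime, timezone

from opensearchpy import OpenSearch  # pip install opensearch-py

client = OpenSearch(
    hosts=[{"host": "opensearch.example.org", "port": 9200}],  # placeholder host
    use_ssl=True,
)

record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "server": "gw10.example.org",            # hypothetical gateway
    "client_host": "worker01.example.org",   # hypothetical client
    "path": "/store/test/file.root",
    "operation": "read",
    "bytes": 123456789,
}

# Index a single document; in practice records would be bulk-indexed.
client.index(index="xrootd-transfers-test", body=record)
```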

...