On the evening of the 23rd, at approximately 5PM, LHCb started running a new job type called WGprod at RAL.

These jobs pulled down large amounts of data through the WN gateways. At 6PM the WN gateways started passing vector reads through to the cluster, probably because the XCaches had run out of resources to cache new requests.
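For context, a vector read bundles many (offset, length) requests into a single request to the storage, so once the XCaches stop absorbing them each one lands directly on Echo. The sketch below, using the XRootD Python bindings, shows roughly what such a request looks like; the host, file path and offsets are placeholders, not the actual LHCb workload.

```python
# Minimal sketch of a vector read via the XRootD Python bindings.
# The URL and chunk offsets are illustrative placeholders only.
from XRootD import client
from XRootD.client.flags import OpenFlags

f = client.File()
status, _ = f.open("root://xcache.example.org//store/lhcb/example.root", OpenFlags.READ)
assert status.ok, status.message

# One round trip carrying several (offset, length) chunks -- the pattern
# that was being passed straight through to Echo once the caches filled up.
status, result = f.vector_read(chunks=[(0, 65536), (1048576, 65536), (8388608, 32768)])
assert status.ok, status.message

for chunk in result.chunks:
    print(f"offset={chunk.offset} length={chunk.length}")

f.close()
```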

The number of client operations on Echo quickly spiked to >100k IOPS, which is the approximate limit of the cluster.

Client operation times slowed dramatically and transfers started failing. The cluster recovered by itself once the load subsided.

This was by far the highest sustained IO load ever requested from Echo by an LHC VO.

Alex R suggested that reducing prefetch on the WN XCaches would reduce the amount of passthrough. We could also potentially increase the amount of memory available to the XCaches.
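For reference, both of those knobs live in the XCache (XRootD proxy file cache) configuration. The fragment below is a sketch only: the directive names follow the XRootD pfc documentation as we understand it, and the values are illustrative rather than what is (or should be) deployed on the RAL worker nodes.

```
# Illustrative XCache (pfc) fragment -- not the deployed RAL configuration.
pfc.prefetch 0      # 0 disables prefetching blocks ahead of client requests
pfc.ram 64g         # RAM the cache may use to hold blocks in memory
```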

The prefetch-off change was deployed on the batch farm, but the large number of LHCb jobs caused the '21-generation worker nodes to stall.

image-20240424-122307.png

LHCb WGProduction Job failures. Most of the jobs failed due to vector read timeouts.

913f1f4e7a6e7a0eddaec02eb928474a.png

It looks like around 15k jobs were enough to overload Echo and cause roughly a third of requests to fail.

Additional screenshots: abf39eea3b979ad7adf947a913b169c3.png, image-20240424-131436.png, image-20240424-122446.png, image-20240424-122715.png, image-20240424-122921.png
