CHEP 2024:

Enhancing XRootD Load Balancing for High-Throughput Transfers

To address the need for high transfer throughput for projects such as the LHC experiments, including the upcoming HL-LHC, it is important to make optimal and sustainable use of our available capacity. Load balancing algorithms play a crucial role in distributing incoming network traffic across multiple servers, ensuring optimal resource utilization, preventing server overload, and enhancing performance and reliability. At the Rutherford Appleton Laboratory (RAL), the UK's Tier-1 centre for the Worldwide LHC Computing Grid (WLCG), we started with a DNS round robin before moving to XRootD's cluster management service component, which uses an active load balancing algorithm to distribute traffic across 26 servers, but we encountered its limitations when the system as a whole is under heavy load. We describe our attempted tuning of the configuration of the existing algorithm before proposing a new tuneable, dynamic load balancer based on a weighted random selection algorithm.
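
To illustrate the idea of weighted random selection (this is a minimal sketch of the general technique, not the configuration or weighting function actually deployed at RAL), one can pick a gateway with probability proportional to a weight derived from its reported load; the hostnames and load figures below are purely illustrative:

    import random

    def pick_server(servers):
        # `servers` is assumed to map hostname -> load figure in [0, 1].
        # The weight here is simply (1 - load) with a small floor so no
        # host is starved; a real balancer could use any tuneable function
        # of the reported metrics.
        hosts = list(servers)
        weights = [max(1.0 - servers[h], 0.01) for h in hosts]
        return random.choices(hosts, weights=weights, k=1)[0]

    # The most lightly loaded gateway is chosen most often, but heavily
    # loaded ones still receive some traffic.
    print(pick_server({"gw01": 0.20, "gw02": 0.55, "gw03": 0.90}))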

Achieving 100Gb/s data rates with XRootD - Preparing for HL-LHC and SKA

To address the needs of forthcoming projects such as the Square Kilometre Array (SKA) and the HL-LHC, there is a critical demand for data transfer nodes (DTNs) capable of achieving 100Gb/s of data movement. This high throughput can be attained through a combination of increased transfer concurrency and improvements in the speed of individual transfers. At the Rutherford Appleton Laboratory (RAL), the UK's Tier-1 centre for the Worldwide LHC Computing Grid (WLCG) and an initial site for the UK SKA Regional Centre (SRC), we have provisioned 100GbE XRootD servers in preparation for SKA operations. This presentation details the efforts undertaken to reach 100Gb/s data ingress and egress rates using the WebDAV protocol through XRootD endpoints, including the use of a novel XRootD plug-in designed to assess XRootD performance independently of the physical storage backend. Results are presented for transfer tests against a CephFS storage backend under different configuration settings (e.g. tunings to file layouts). We discuss the challenges encountered, bottlenecks identified, and insights gained, along with a description of the most effective solutions developed to date and areas of future activity.
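
For context on the file-layout tunings mentioned above: CephFS exposes file layouts (stripe count, object size, pool) as virtual extended attributes on directories, and new files inherit the layout of the directory they are created in. The sketch below shows the general mechanism only; the path and values are placeholders, not the settings used at RAL:

    import os

    # Hypothetical data directory on a CephFS mount; files created under it
    # inherit whatever layout the directory advertises.
    data_dir = "/cephfs/xrootd/data"

    # Read the current stripe count. CephFS only materialises the layout
    # xattr once a layout has been set explicitly, so this may not exist yet.
    try:
        current = os.getxattr(data_dir, "ceph.dir.layout.stripe_count")
        print("current stripe_count:", current.decode())
    except OSError:
        print("no explicit layout set on", data_dir)

    # Example tuning: wider striping and larger objects for big sequential
    # transfers. The numbers are illustrative, not recommendations.
    os.setxattr(data_dir, "ceph.dir.layout.stripe_count", b"4")
    os.setxattr(data_dir, "ceph.dir.layout.object_size", b"67108864")  # 64 MiB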

Sustainability at the RAL Tier-1

CHEP 2023:

Jira: XRD-51 (System Jira)

...