...

To address the need for high transfer throughput for projects such as the LHC experiments, including the upcoming HL-LHC, it is important to make optimal and sustainable use of our available capacity. Load balancing algorithms play a crucial role in distributing incoming network traffic across multiple servers, ensuring optimal resource utilization, preventing server overload, and enhancing performance and reliability. At the Rutherford Appleton Laboratory (RAL), the UK's Tier-1 centre for the Worldwide LHC Computing Grid (WLCG), we started with a DNS round robin and then moved to XRootD's cluster management service component, which uses an active load balancing algorithm to distribute traffic across 26 servers, but we encountered its limitations when the system as a whole is under heavy load. We describe our attempted tuning of the existing algorithm's configuration before proposing a new tuneable, dynamic load balancer based on a weighted random selection algorithm.
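
By way of illustration, the sketch below shows one way a weighted random selection over gateway servers could work. The server names, the weights, and the idea of deriving weights from free capacity are assumptions for this example and are not taken from the abstract.

    import random

    def weighted_random_select(servers):
        """Pick one server with probability proportional to its weight.

        `servers` maps server name -> weight, where a higher weight means the
        server should receive a larger share of new connections (for example,
        because it currently reports lower load).
        """
        names = list(servers)
        weights = [servers[name] for name in names]
        # random.choices draws with probability proportional to the weights.
        return random.choices(names, weights=weights, k=1)[0]

    # Hypothetical weights, e.g. derived from each server's spare capacity.
    servers = {"gw01": 0.9, "gw02": 0.5, "gw03": 0.1}
    print(weighted_random_select(servers))

Because the selection is probabilistic rather than strictly "least loaded wins", adjusting how weights are computed gives a tuneable trade-off between spreading load evenly and reacting quickly to overloaded servers.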

...

– High-throughput Gateways:

Achieving 100Gb/s Data Rates with XRootD - Preparing for SKA and HL-LHC

To address the needs of forthcoming projects such as the Square Kilometre Array (SKA) and the HL-LHC, there is a critical demand for data transfer nodes (DTNs) capable of achieving 100Gb/s of data movement. This high throughput can be attained through concurrency of transfers or by optimising the speed of individual transfers. At the Rutherford Appleton Laboratory (RAL), the UK's Tier-1 centre for the Worldwide LHC Computing Grid (WLCG) and an initial site for the UK SKA Regional Centre (SRC), we have provisioned 100GbE XRootD servers in preparation for SKA operations. This presentation details the efforts undertaken to reach 100Gb/s data ingress and egress rates using the WebDAV protocol through XRootD endpoints, including testing with a novel XRootD plug-in designed to assess XRootD performance independently of the physical storage backend. The effects of tuning inside XRootD, and of tuning against a CephFS storage backend (e.g. via file layout tunings), are also presented. We will discuss the challenges encountered, bottlenecks identified, and insights gained, along with a description of the most effective solutions developed to date and areas of future activity.
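
As an illustration of the kind of CephFS file layout tuning mentioned above, the sketch below sets layout parameters on a directory through CephFS's virtual extended attributes, so that new files created underneath inherit the layout. The path and the stripe/object sizes are hypothetical and are not the values used at RAL.

    import os

    # Hypothetical CephFS directory; the right values depend on the workload.
    target_dir = "/cephfs/xrootd/data"

    # CephFS exposes file layout parameters as virtual extended attributes.
    os.setxattr(target_dir, "ceph.dir.layout.stripe_unit", b"4194304")   # 4 MiB
    os.setxattr(target_dir, "ceph.dir.layout.stripe_count", b"4")
    os.setxattr(target_dir, "ceph.dir.layout.object_size", b"67108864")  # 64 MiB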

...