CHEP 2024:

To address the need for high transfer throughput for projects such as the LHC experiments, including the upcoming HL-LHC, it is important to make optimal and sustainable use of our available capacity. Load balancing algorithms play a crucial role in distributing incoming network traffic across multiple servers, ensuring optimal resource utilization, preventing server overload, and enhancing performance and reliability. At the Rutherford Appleton Laboratory (RAL), the UK's Tier-1 centre for the Worldwide LHC Computing Grid (WLCG), we used XRootD's cluster management service component, which has an active load balancing algorithm to distribute traffic across 26 servers, but encountered its limitations when the system as a whole was under heavy load. We describe our attempted tuning of the configuration before proposing a new tuneable, dynamic load-balancer based on a weighted random selection algorithm.
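
As an illustration of the weighted random selection idea (a minimal sketch, not the production load-balancer), the snippet below picks a gateway with probability proportional to a weight derived from a hypothetical load metric; the hostnames, load values, and weight floor are all illustrative.

```python
import random

def pick_server(servers, weights):
    """Choose one server, with probability proportional to its weight.

    `servers` is a list of hostnames and `weights` a parallel list of
    non-negative numbers derived from whatever load metric the balancer
    tracks (free capacity, inverse load, etc.).
    """
    # random.choices implements weighted random selection directly.
    return random.choices(servers, weights=weights, k=1)[0]

# Hypothetical example: three gateways; the least-loaded one is picked
# most often, but the others still receive a share of new connections.
servers = ["gw1.example.ac.uk", "gw2.example.ac.uk", "gw3.example.ac.uk"]
loads = [0.9, 0.5, 0.2]                          # fraction of capacity in use
weights = [max(1.0 - l, 0.05) for l in loads]    # tuneable floor keeps every server in play
print(pick_server(servers, weights))
```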

– High-throughput Gateways:

To address the needs of forthcoming projects such as the Square Kilometre Array (SKA) and the HL-LHC, there is a critical demand for data transfer nodes (DTNs) capable of achieving 100Gb/s of data movement. This high throughput can be attained through concurrency of transfers or by optimising the speed of individual transfers. At the Rutherford Appleton Laboratory (RAL), the UK's Tier-1 centre for the Worldwide LHC Computing Grid (WLCG), and a site for the UK SKA Regional Centre (SRC), we have implemented 100GbE XRootD servers in anticipation of the SKA's operational scale-up. This presentation details the efforts undertaken to reach 100Gb/s data ingress and egress rates using the WebDAV protocol through XRootD endpoints. This includes testing using a novel XRootD plug-in designed to assess XRootD performance independently of the physical storage backend. The effects of tuning within XRootD and of tuning the CephFS storage backend (e.g. via file layout) are presented. We will discuss the challenges encountered, bottlenecks identified, and insights gained, along with a description of the most effective solutions developed to date and areas of future activities.
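
For context on the file-layout tuning mentioned above: CephFS exposes its layout (stripe unit, stripe count, object size, pool) through virtual extended attributes. The sketch below shows how such a layout could be inspected or changed on a directory; the mount point and values are purely illustrative and are not the tuning actually deployed.

```python
import os

# CephFS exposes layout through the "ceph.dir.layout" / "ceph.file.layout"
# virtual extended attributes (Linux only). Hypothetical mount point:
path = "/cephfs/xrootd/data"

# Read the current directory layout (raises OSError if no explicit
# layout has been set on this directory yet).
try:
    layout = os.getxattr(path, "ceph.dir.layout").decode()
    print("current layout:", layout)
except OSError:
    print("no explicit layout set on", path)

# Set a wider stripe for new files created under this directory,
# e.g. a 4 MiB stripe unit across 8 objects (values illustrative only).
os.setxattr(path, "ceph.dir.layout.stripe_unit", b"4194304")
os.setxattr(path, "ceph.dir.layout.stripe_count", b"8")
```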

CHEP 2023:

XRD-51

DRAFT!

Data access for the LHC experiments, and for an increasing number of other HEP and astronomy communities, is provided at the UK Tier-1 facility at RAL through its ECHO storage service.
ECHO - currently in excess of 40PB of usable space - is a Ceph-backed, erasure-coded object store, with frontend access to data provided via XRootD (using the XrdCeph plugin) or GridFTP, both built on Ceph's libradosstriper library.

The storage must service the needs of: high-throughput compute, with staged and direct file access passing through an XCache on each worker node; data access to compute running at storageless sites; and managed inter-site data transfers using the recently adopted HTTPS protocol (WebDAV), including multihop data transfers to and from RAL’s newly commissioned CTA tape endpoint.
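
For reference, a managed WebDAV transfer between two endpoints is typically driven by an HTTP third-party-copy COPY request. The sketch below shows a pull-mode copy issued with the Python requests library, assuming the WLCG HTTP-TPC conventions; the URLs and token are placeholders, and in practice such requests are issued by a transfer service rather than by hand.

```python
import requests

# Hypothetical endpoints and credential.
source = "https://remote-site.example/path/file.root"
destination = "https://gateway.example:1094/store/file.root"
token = "PLACEHOLDER_TOKEN"

# Pull-mode HTTP third-party copy: the destination endpoint is asked, via
# the WebDAV COPY verb, to fetch the file directly from the source.
resp = requests.request(
    "COPY",
    destination,
    headers={
        "Source": source,
        "Authorization": f"Bearer {token}",
        # Credential the destination should present to the source endpoint
        # (assumed here to follow the WLCG "TransferHeader" convention).
        "TransferHeaderAuthorization": f"Bearer {token}",
    },
    verify=True,
)
print(resp.status_code)
print(resp.text[:500])  # progress/performance markers are streamed in the body
```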

A review of the experiences of running an object store within these data workflows is presented, including the details of the improvements necessary for the transition from GridFTP to WebDAV for most inter-site data movements, and enhancements for direct-IO access, where the development and optimisation of buffering and range coalescence strategies are explored.
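
To make the range-coalescence idea concrete, a simplified sketch is given below: nearby byte ranges from a vector read are merged into fewer, larger backend reads when the gap between them is below a tuneable threshold. The threshold and data structures are illustrative and do not reflect the actual XrdCeph implementation.

```python
def coalesce_ranges(ranges, max_gap=64 * 1024):
    """Merge (offset, length) read requests whose gaps are small.

    `ranges` is a list of (offset, length) tuples, e.g. from a vector read.
    Two ranges are merged when the hole between them is at most `max_gap`
    bytes, trading a little over-read for far fewer backend requests.
    """
    if not ranges:
        return []
    ordered = sorted(ranges)
    merged = [list(ordered[0])]
    for offset, length in ordered[1:]:
        prev = merged[-1]
        prev_end = prev[0] + prev[1]
        if offset - prev_end <= max_gap:
            # Extend the previous range to cover this one (and the gap).
            prev[1] = max(prev_end, offset + length) - prev[0]
        else:
            merged.append([offset, length])
    return [tuple(r) for r in merged]

# Example: three scattered reads collapse into two backend requests.
print(coalesce_ranges([(0, 4096), (8192, 4096), (10_000_000, 4096)]))
```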

In addition to serving the requirements of LHC Run-3, preparations for Run-4 and for large astronomy experiments are underway. One example concerns ROOT-based data formats: the evolution from the TTree to the RNTuple data structure provides an opportunity for storage providers to optimise and benchmark against this new format. A comparison of the current performance between data formats within ECHO is presented and the details of potential improvements are explored.

CHEP 2019:
XRootD and Object Store: A new paradigm

https://doi.org/10.1051/epjconf/202024504006

Abstract

The XRootD software framework is essential for data access at WLCG sites. The WLCG community is exploring and expanding XRootD functionality. This presents a particular challenge at the RAL Tier-1, as the Echo storage service is a Ceph-based, erasure-coded object store. External access to Echo uses gateway machines which run GridFTP and caching servers. Local jobs access Echo via caches on every worker node, but it is clear there are inefficiencies in the system. Remote jobs also access data via XRootD on Echo. For CMS jobs this is via the AAA service. ATLAS, who are consolidating their storage at fewer sites, are increasingly accessing job input data remotely. This paper describes the continuing work to optimise both local and remote data access by testing different caching methods.
