
CHEP 2023:

XRD-51

DRAFT!

Data access for the LHC experiments, and for an increasing number of other HEP and astronomy communities, is provided at the UK Tier-1 facility at RAL through its ECHO storage service.
ECHO, currently in excess of 40 PB of usable space, is a Ceph-backed erasure-coded object store. Frontend access to data is provided via XRootD, using the XrdCeph plugin, or via GridFTP, both built on Ceph's libradosstriper library.

The storage must service the needs of: high-throughput compute, with staged and direct file access passing through an XCache on each worker node; data access for compute running at storageless sites; and managed inter-site data transfers using the recently adopted HTTP protocol (via WebDAV), including multihop data transfers to and from RAL's newly commissioned CTA tape endpoint.

A review of the experiences of running an object store within these data workflows is presented, including details of the improvements necessary for the transition from GridFTP to WebDAV for most inter-site data movements, and enhancements for direct-I/O access, where the development and optimisation of buffering and range-coalescence strategies are explored.
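The range-coalescence idea mentioned above can be illustrated with a short sketch: nearby byte ranges requested by a client are merged before being issued to the object store, trading a bounded amount of over-read for fewer backend requests. The function name and the gap threshold below are illustrative assumptions, not the actual XrdCeph implementation.

```python
def coalesce_ranges(ranges, gap=65536):
    """Merge byte ranges whose separation is at most `gap` bytes.

    `ranges` is a list of (start, end) tuples with start <= end.
    Returns a sorted list of merged (start, end) tuples, so that a
    scatter of small reads becomes a few larger backend requests.
    """
    merged = []
    for start, end in sorted(ranges):
        # If this range begins within `gap` bytes of the previous
        # merged range, extend that range instead of starting a new one.
        if merged and start - merged[-1][1] <= gap:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Tuning the gap threshold is the key trade-off: a larger gap reads more unwanted bytes from the erasure-coded store but amortises per-request latency over fewer, larger operations.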

In addition to serving the requirements of LHC Run-3, preparations for Run-4 and for large astronomy experiments are underway. One example concerns ROOT-based data formats: the evolution from the TTree to the RNTuple data structure provides an opportunity for storage providers to optimise and benchmark against the new format. A comparison of the current performance of the two data formats within ECHO is presented and potential improvements are explored.
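A format comparison of this kind can be driven by a small timing harness like the sketch below. The harness itself is an assumption of how such a benchmark might be structured: each reader would, in practice, wrap a full read of the same events in TTree or RNTuple form (for example via ROOT or uproot); the dummy labels here are only placeholders.

```python
import time

def benchmark(readers, repeats=3):
    """Time each named reader, keeping the best of `repeats` runs.

    `readers` maps a label (e.g. 'TTree', 'RNTuple') to a
    zero-argument callable performing one full read of the test
    dataset. Returns a dict of label -> best wall-clock seconds.
    """
    best = {}
    for name, read in readers.items():
        times = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            read()
            times.append(time.perf_counter() - t0)
        # Best-of-N damps interference from caches and shared storage.
        best[name] = min(times)
    return best
```

Taking the best of several runs, rather than the mean, reduces sensitivity to transient load on a shared storage system such as ECHO.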

CHEP 2019:
XRootD and Object Store: A new paradigm

https://doi.org/10.1051/epjconf/202024504006

Abstract

The XRootD software framework is essential for data access at WLCG sites. The WLCG community is exploring and expanding XRootD functionality. This presents a particular challenge at the RAL Tier-1, as the Echo storage service is a Ceph-based erasure-coded object store. External access to Echo uses gateway machines which run GridFTP and caching servers. Local jobs access Echo via caches on every worker node, but it is clear there are inefficiencies in the system. Remote jobs also access data via XRootD on Echo; for CMS jobs this is via the AAA service. ATLAS, who are consolidating their storage at fewer sites, are increasingly accessing job input data remotely. This paper describes the continuing work to optimise both local and remote data access by testing different caching methods.
