XRootDDeploymentWorkflow

Code changes

dev machines (el7 unless specified otherwise):

172.16.105.133 - dev and server/client with dev echo access

172.16.105.206 - centos 8 dev machine

172.16.101.193 - dev machine with no server/client

ceph-dev-gw2 - alice test gw with dev echo access

ceph-gw8 - prod test gateway with prod echo access

Compiling core xrootd

Follow the instructions in CompilingXRootD.

Compiling xrdceph

Go to xrootd-ceph/packaging.

Run ~/compile.sh for normal xrdceph, or ~/compilebuff.sh for the buffered version. Replace the version numbers in the scripts if needed.

Compiling libradosstriper

1). Build a directory structure containing dirs 'etc/systemd/' and 'opt/lib':

2). Contents of lower levels of these directories:

cat etc/systemd/system/xrootd@ceph.service.d/*.conf:

[Service]

Environment="LD_LIBRARY_PATH=/opt/lib/ceph/"

ls opt/lib/ceph/

libceph-common.so librados.so librados.so.2.0.0 libradosstriper.so.1

libceph-common.so.0 librados.so.2 libradosstriper.so libradosstriper.so.1.0.0

(compiled from CompilingCephClientCode)

3). Run tar cvfz essential-striper.tgz etc opt from the root of that directory structure

4). fpm --verbose -s tar -t rpm --name newstriperanddeps --version 14.2.15 --iteration 1 --prefix / --description "New RADOS Striper library with lockless reads" essential-striper.tgz
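Steps 1)-3) can be sketched end to end. This uses only the paths shown above; the drop-in filename override.conf is an assumption (systemd reads any *.conf in the .d directory), and the library copies are represented by a placeholder comment:

```shell
# Sketch of steps 1)-3): build the payload tree in a scratch dir, then tar it.
mkdir -p striper-root/etc/systemd/system/xrootd@ceph.service.d
mkdir -p striper-root/opt/lib/ceph

# Drop-in that points the xrootd@ceph service at the new libraries.
# The filename override.conf is an assumption.
cat > striper-root/etc/systemd/system/xrootd@ceph.service.d/override.conf <<'EOF'
[Service]
Environment="LD_LIBRARY_PATH=/opt/lib/ceph/"
EOF

# Copy the libraries compiled per CompilingCephClientCode into
# striper-root/opt/lib/ceph/ here.

# Step 3): create the tarball from the root of the tree.
(cd striper-root && tar cvfz ../essential-striper.tgz etc opt)

# The fpm command in step 4) then wraps essential-striper.tgz into the RPM.
tar tzf essential-striper.tgz
```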

PR & code review

standard github workflow:

  1. fork the core STFC repo

  2. make the changes

  3. rebase as necessary

  4. push the changes to your remote

  5. create a PR

Generating RPMS

The compile scripts create RPMs automatically. RPMs for core XRootD can be generated the same way, by following the steps the scripts perform.

Local testing

This is done in VMs with dev echo access (e.g. 172.16.105.133)

Dev testing

Done in dev gws (e.g. dev-gw2/dev-gw3)

Pushing RPMS to repos server

run:

bash getrpms.sh <version_no>

where getrpms.sh is:

#!/bin/sh
mkdir -p RPMS
rm -f RPMS/*
# Quote the remote glob so it expands on the build host, not locally.
scp "root@172.16.105.133:~/rpmbuild/RPMS/x86_64/*${1}*" RPMS/
scp RPMS/* root@repos-1.gridpp.rl.ac.uk:/srv/yum/xrootd-ceph/nautilus/el7/x86_64

to send the RPMs to the STFC repo server. Then ssh into the repo server, cd into the repository directory, and run createrepo --update .
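The push-and-refresh sequence can also be wrapped in a single script; a sketch using the same hosts and paths as getrpms.sh above (it assumes createrepo is installed on the repo server, and the script name pushrpms.sh is made up here):

```shell
# Sketch: fetch the RPMs for one version, push them, and refresh the
# repo metadata in one run instead of a manual ssh afterwards.
cat > pushrpms.sh <<'EOF'
#!/bin/sh
set -e
ver="$1"
mkdir -p RPMS
rm -f RPMS/*
# Quote the remote glob so it expands on the build host, not locally.
scp "root@172.16.105.133:~/rpmbuild/RPMS/x86_64/*${ver}*" RPMS/
scp RPMS/* root@repos-1.gridpp.rl.ac.uk:/srv/yum/xrootd-ceph/nautilus/el7/x86_64
# Repo-side metadata refresh, run remotely.
ssh root@repos-1.gridpp.rl.ac.uk \
    'cd /srv/yum/xrootd-ceph/nautilus/el7/x86_64 && createrepo --update .'
EOF
chmod +x pushrpms.sh
# Syntax-check the generated script without running the transfers.
sh -n pushrpms.sh && echo "syntax OK"
```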

Sandboxing and pre-production testing

  1. ssh into aquilon

  2. run: aq add sandbox --sandbox <sandbox_name>

  3. edit the necessary files:

    1. vim ./ral-tier1/features/ceph/xrootd-gw/config.pan - xrootd gateways

    2. vim ./ral-tier1/features/ceph/xrootd-webdav-gw/config.pan - webdav gateways

    3. vim ./ral-tier1/features/ceph/xrootd-gw-alice/config.pan - alice gateways

  4. aq manage --hostname <hostname> --sandbox <fedid>/<sandbox_name>

  5. aq make --hostname <hostname> --personality <ceph-gw-echo/ceph-webdav-gw-echo>

  6. check the log with tail -f /var/log/ncm-cdispd.log

  7. run quattor-fetch && quattor-configure --all in case of pending-lock or "too many calls" errors

  8. git add and commit

  9. run sandbox_publish_with_rebase

Edit the Dockerfile

GitHub - stfc/grid-workernode

Build and test container on a single WN

Copy the grid-workernode repository with the changes to test onto the WN (you can scp a compressed copy of the folder).

Set up the exports as described in the following article:

Running tests on a WN

then run:

$ cd xrootd
$ docker build -t <testname> ./

This builds a Docker image for the container.

Contact the Ceph team to build and mount the container from that image. Then follow the instructions in that article for running the test.

Limited production rollout / test on WN tranche

Contact the Ceph team once the image is ready to roll out on an available tranche.

Full production rollout

If no abnormalities are observed in monitoring, contact the Ceph team for a full rollout.

Monitoring

Vande

Icinga

Mimic