Current Sandbox:
http://aquilon.gridpp.rl.ac.uk/sandboxes/diff.php?sandbox=jw-gateway-xrootd-cmsd
Operational items
Known issues / limitations
N/A
Manager hosts
Frontend:
https://rdr.echo.stfc.ac.uk:1094
root://rdr.echo.stfc.ac.uk:1094
Managers:
echo-manager01.gridpp.rl.ac.uk
echo-manager02.gridpp.rl.ac.uk
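As a quick liveness check of the redirector, the standard xrdfs client can be queried (a minimal sketch, assuming the xrootd client tools are installed; the namespace path is illustrative):
xrdfs root://rdr.echo.stfc.ac.uk:1094 query config sitename
xrdfs root://rdr.echo.stfc.ac.uk:1094 ls /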
Restarting services
systemctl restart xrootd@{unified,tpc}
systemctl restart cmsd@unified
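To confirm the services came back cleanly (a suggested sanity check, not part of the original procedure):
systemctl status xrootd@unified xrootd@tpc cmsd@unified
journalctl -u cmsd@unified --since "-5min"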
Blacklisting of server (gateway) hosts
On each of the manager hosts, the following file should be used, with the relevant gateway host included:
/etc/xrootd/cms.blacklist
Add the given host on a single line (wildcards are in principle also OK).
This file is re-read once per minute, so no service restart is required.
If a host in the blacklist does not exist, the blacklist will fail to parse and will be ignored after a service restart.
Ensure the file has xrootd:xrootd ownership (a sketch follows).
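A minimal sketch of the blacklist contents (the hostnames are illustrative, not real Echo gateways; one host per line, the second entry showing a wildcard):
ceph-gw14.gridpp.rl.ac.uk
ceph-gw2*.gridpp.rl.ac.uk
And the ownership fix:
chown xrootd:xrootd /etc/xrootd/cms.blacklist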
Adding a new Server (Gateway host) to the cluster
When a new Gateway needs to be added to a cluster, the following steps (in addition to the usual set of checks for ensuring a fully functional gateway) are required.
Ensure the host has the correct personality (i.e. ceph-unified-gw-echo)
In Aquilon the manager hosts must be recompiled, so that they pick up the new host and know that it is available.
As we have a pair of managers, it is preferable to remove one manager from service (using keepalived), compile it, check that it restarts services correctly, and then add it back (using keepalived). This may require some quattor commands on the host to force the compilation to be deployed immediately. Then repeat this step for the second manager (see the sketch after this list).
Finally, check the cms.blacklist files on each manager to ensure that the new Server (aka Gateway) is not explicitly excluded from the cluster there.
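A sketch of the per-manager drain/compile/restore sequence described above, assuming keepalived is simply stopped to move the floating IP away and that the stock quattor client tools (ccm-fetch, ncm-ncd) are what forces the deployment; the exact site commands may differ:
systemctl stop keepalived                       # fail the floating IP over to the other manager
aq compile --hostname echo-manager01.gridpp.rl.ac.uk
ccm-fetch                                       # on the manager: fetch the newly compiled profile
ncm-ncd --configure --all                       # apply it immediately
systemctl status xrootd@unified cmsd@unified    # confirm services restarted correctly
systemctl start keepalived                      # add the manager back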
Development items
Services
A new service has been created to hold the list of manager hosts for each ceph instance (e.g. echo)
xrootd-clustered
For Echo, the specific instance of this service is called xrootd-clustered-echo
These are added with
aq add_required_service --service xrootd-clustered --archetype ral-tier1 --personality ceph-unified-gw-echo
aq add_required_service --service xrootd-clustered --archetype ral-tier1 --personality ceph-unified-gw-echo-test
aq add_required_service --service xrootd-clustered --archetype ral-tier1 --personality ceph-xrootd-manager-echo-test
A host may need to be reconfigured in order to get the new service included in it, and a compile might fail unless this is done; e.g.
aq reconfigure --hostname ceph-gw14.gridpp.rl.ac.uk --personality ceph-unified-gw-echo --archetype ral-tier1
Xrootd and CMSD configuration
The configuration for xrootd and cmsd is stored in the xrootd-unified.cfg configuration file (and the additional xrootd-tpc.cfg, for root TPC transfers).
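For orientation, the cluster-related directives in such a configuration typically follow the standard xrootd/cmsd pattern below (a sketch, not the actual Echo file; the manager hostnames are taken from above, and the port is the one used in the upstream examples):
all.manager echo-manager01.gridpp.rl.ac.uk:3121
all.manager echo-manager02.gridpp.rl.ac.uk:3121
if echo-manager*.gridpp.rl.ac.uk
  all.role manager
else
  all.role server
fi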
keepalived
The keepalived configuration for the manager CMSD hosts is here:
features/keepalived/echo-managers
A summary of the main files (a configuration sketch follows the list):
vrrp-instance: the label of the xrootd check script that will determine the state of the hosts
global: the content of the xrootd check script and the notification email
config.pan: the floating IP addresses, and the specifics of the failover priorities
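A minimal keepalived sketch matching the description above (all names, paths, priorities, addresses and email settings are illustrative placeholders, not the Echo values):
global_defs {
    notification_email {
        admin@example.ac.uk
    }
    notification_email_from keepalived@example.ac.uk
    smtp_server 127.0.0.1
}

vrrp_script xrootd {
    script "/etc/keepalived/check_xrootd.sh"    # hypothetical check script location
    interval 5
    fall 2
}

vrrp_instance echo_managers {
    interface eth0
    state BACKUP
    virtual_router_id 51
    priority 100                                # set lower on the secondary manager
    smtp_alert
    virtual_ipaddress {
        192.0.2.10                              # the floating IP
    }
    track_script {
        xrootd
    }
}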
Manager cluster setup
aq add_personality --personality ceph-xrootd-manager-echo-test --eon_id 14 --copy_from ceph-unified-gw-echo-test --archetype ral-tier1
aq add_cluster --cluster xrootd_manager_echo --archetype ral-tier1-clusters --personality keepalived --down_hosts_threshold 1 --campus harwell --sandbox orl67423/jw-gateway-xrootd-cmsd
aq cluster --cluster xrootd_manager_echo --hostname echo-manager01.gridpp.rl.ac.uk --personality ceph-xrootd-manager-echo-test
aq cluster --cluster xrootd_manager_echo --hostname echo-manager02.gridpp.rl.ac.uk --personality ceph-xrootd-manager-echo-test
aq compile --cluster xrootd_manager_echo
aq make --hostname echo-manager02.gridpp.rl.ac.uk && aq make --hostname echo-manager01.gridpp.rl.ac.uk
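Afterwards, the cluster and its membership can be verified with the standard aq show commands (assuming they are available in this deployment):
aq show_cluster --cluster xrootd_manager_echo
aq show_host --hostname echo-manager01.gridpp.rl.ac.uk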