ThirdPartyCopy

Introduction

The third-party-copy sub-group of the DOMA working group is dedicated to improving bulk transfers between WLCG sites. The current focus is on finding viable replacements for the GridFTP protocol. This work is proceeding in three phases:

  • Phase 1 (deadline 31 December 2018): Survey available replacement protocols. Common storage implementations (EOS, DPM, dCache, standalone Xrootd, StoRM) aim to have at least one production site enable a non-GridFTP third-party-copy. Compatibility and performance tests are performed.
  • Phase 2 (deadline 30 June 2019): All sites providing more than 3PB of storage to WLCG experiments are required to have one non-GridFTP endpoint in production.
  • Phase 3 (deadline 31 December 2019): All sites providing storage to WLCG experiments must provide a non-GridFTP endpoint.

The one-page plan (mandate) is also available as a PDF.

To join the group, please subscribe to the wlcg-doma-tpc e-group. We are actively looking for additional sites, particularly production sites, to participate in the group.

Meetings can be found in the DOMA Indico category.

Participants

The DOMA sub-group has the following participants (please add your own name!):

  • Alessandra Forti (ATLAS, co-coordinator)
  • Brian Bockelman (CMS, co-coordinator)
  • Mario Lassnig (ATLAS, Rucio)
  • Thomas Beermann (ATLAS, Rucio)
  • Andrea Manzi (FTS, gfal2)
  • Edward Karavakis (FTS, gfal2)
  • Paul Millar (dCache)
  • Fabrizio Furano (DPM)
  • Andy Hanushevsky (xrootd)
  • Andrea Ceccanti (StoRM)
  • Xavier Espinal (CERN, LCG)
  • Wei Yang (ATLAS, xrootd)
  • Horst Severini (ATLAS, xrootd)
  • Dmitry Litvintsev (Fermilab, dCache)
  • Albert Rossi (Fermilab, dCache)
  • Shawn McKee (AGLT2, dCache)
  • Pepe Flix (PIC, CMS)
  • Saul Youssef (NET2, ATLAS)
  • James Walder (ATLAS, RAL)

Current Activities

The third-party-copy subgroup is working on two candidate TPC protocols for the WLCG: HTTP-TPC and XRootD-TPC.

The pages for each protocol contain information about progress in deployment and testing.
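For orientation, the following is a minimal sketch of driving such a third-party copy from Python with the gfal2 bindings (the transfer library used by FTS); both endpoint URLs are placeholders, not real test hosts:

```python
# Minimal sketch of a third-party copy with the gfal2 Python bindings;
# both endpoint URLs are placeholders.
import gfal2

ctx = gfal2.creat_context()

params = ctx.transfer_parameters()
params.overwrite = True        # replace any existing destination file
params.checksum_check = True   # compare source/destination checksums
params.timeout = 300           # seconds

src = "https://source.example.org:2880/dteam/testfile"
dst = "https://dest.example.org:2880/dteam/testfile"

# The storage endpoints move the data between themselves; the bytes
# never pass through the client running this script.
ctx.filecopy(params, src, dst)
```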

For automated testing of participating sites, we have set up a DOMA-specific Rucio instance.

Monitoring

The Rucio instance has an associated Kibana instance that can be used to create dashboards targeted at specific monitoring needs, for example monitoring checksums only, or only DPM-to-dCache transfers.

The GitHub repository above contains Python code and example .json configurations for running XRootD TPC tests using a locally installed xrdcp client. The tests can be run in two different modes: (a) using a reference server, where a file is uploaded to an endpoint, TPC is run to and then back from the reference server, and the resulting copy is downloaded; (b) bidirectional TPC between each pair of endpoints in the configuration (N^2), which is essentially like the functional tests. A sketch of mode (a) appears below. The GitHub README describes how to set up the tests. Feel free to contact arossi@fnal.gov with further questions.
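The following is a minimal sketch of mode (a)'s flow, driving a locally installed xrdcp from Python; the hostnames and paths are placeholders, and the actual harness in the repository adds configuration, checksum checks and reporting on top of this:

```python
# Sketch of test mode (a) using a locally installed xrdcp client.
# Hostnames and paths are placeholders.
import subprocess

SITE = "root://site.example.org:1094//dteam/tpc"       # endpoint under test
REF = "root://reference.example.org:1094//dteam/tpc"   # reference server

def xrdcp(*args):
    """Run xrdcp, overwriting existing files, and fail loudly on error."""
    subprocess.run(["xrdcp", "--force", *args], check=True)

# "--tpc only" insists on a genuine third-party copy: xrdcp fails rather
# than falling back to streaming the data through this host.
xrdcp("testfile.bin", f"{SITE}/probe")                     # 1. upload
xrdcp("--tpc", "only", f"{SITE}/probe", f"{REF}/probe")    # 2. TPC to reference
xrdcp("--tpc", "only", f"{REF}/probe", f"{SITE}/probe2")   # 3. TPC back
xrdcp(f"{SITE}/probe2", "result.bin")                      # 4. download
# Comparing testfile.bin with result.bin validates the round trip.
```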

Open issues in the FTS monitoring

  • Unlike the Kibana pages, the FTS monitoring page does not have an X.509 certificate issued by a trusted CA. The CERN CA certificates need to be installed in the browser; they are available from https://cafiles.cern.ch/cafiles/

Open issues in the Kibana monitoring

  • In the "Rucio DOMA - Transfer Overview" page it's unclear whether the results show PUSH or PULL transfers (or does FTS attempt both?).
    • FTS tries both, but recent gfal2 can be configured to use just one mode with "HTTP PLUGIN:DEFAULT_COPY_MODE=3rd pull" or "HTTP PLUGIN:DEFAULT_COPY_MODE=3rd push" (see the sketch after this list).
  • In "Rucio DOMA - Failed Transfer Details" it would be helpful if the payload.transfer-link URL was an active link that took the browser to that page.

Storages and Protocols Configurations

This section collects storage-system-specific information on third-party-copy support, including which version(s) to install and how to configure the service.

Sites and contacts

Each entry lists: Site | Protocol(s) | Storage (version) | Contact(s) | Experiments supported | test/prod, followed by the endpoint URLs (protocol://hostname:port/path/).

NET2 | http, xrootd | GPFS (XRootD 5.3.0) | Saul Youssef, Augustine Abaris, Wei Yang | ATLAS | test
  http://atlas-dtn4.bu.edu:1094/gpfs1/tpctest (dteam)

NET2 | http, xrootd | CephFS (XRootD 5.3.0) | Saul Youssef, Augustine Abaris, Wei Yang | ATLAS | test
  http://atlasgw01.nese.rc.fas.harvard:1094/atlas/ops/tpctest (dteam)

RAL-LCG2 | http, xrootd | Echo (XRootD 4.9.1) | Katy Ellis, James Walder | ATLAS, CMS, LHCb, DUNE, dteam | prod/test
  Echo production gateway: root://xrootd.echo.stfc.ac.uk/dteam:test/
  Echo production gateway (workaround for XRootD <4.11): root://echo.stfc.ac.uk/dteam:test/
  Test gateway for DOMA: https://ceph-gw8.gridpp.rl.ac.uk:1094/dteam:test/
  Old DOMA test gateway: https://ceph-test-gw683.gridpp.rl.ac.uk:1094/dteam:test/
  Production gateway to be set up: https://xrootd.echo.stfc.ac.uk/dteam:test/
  S3 is separate from the XRootD endpoint. To use S3 directly: https://s3.echo.stfc.ac.uk/{your-bucket} (credentials available by request)

PRAGUELCG2 | http, xrootd | DPM 1.14.1 | Petr Vokac | ATLAS, DUNE, dteam, wlcg | prod
  https://golias100.farm.particle.cz:443/dpm/farm.particle.cz/home/dteam/tpc-test
  root://golias100.farm.particle.cz:1094/dpm/farm.particle.cz/home/dteam/tpc-test
  gsiftp://golias100.farm.particle.cz:2811/dpm/farm.particle.cz/home/dteam/tpc-test
  https://golias100.farm.particle.cz:443/dpm/farm.particle.cz/home/wlcg
  root://golias100.farm.particle.cz:1094/dpm/farm.particle.cz/home/wlcg
  gsiftp://golias100.farm.particle.cz:2811/dpm/farm.particle.cz/home/wlcg

UKI-SCOTGRID-GLASGOW | http, xrootd | DPM 1.8.10 | Sam Skipsey | ATLAS, dteam | prod
  https://svr018.gla.scotgrid.ac.uk/dpm/gla.scotgrid.ac.uk/home/atlas/atlasscratchdisk/
  https://svr018.gla.scotgrid.ac.uk/dpm/gla.scotgrid.ac.uk/home/dteam/
  Note: this is the production DPM; a more bleeding-edge test DPM instance will be added in the near future.

UKI-LT2-Brunel | http, xrootd | DPM 1.13.0 | Duncan Rand, Raul Lopes | ATLAS, CMS, dteam | prod
  https://dc2-grid-64.brunel.ac.uk/dpm/brunel.ac.uk/home/dteam/wlcg-tpc
  root://dc2-grid-64.brunel.ac.uk:1094/dpm/brunel.ac.uk/home/dteam/wlcg-tpc

UKI-NORTHGRID-MAN-HEP | http, xrootd | DPM 1.13.0 | Alessandra Forti | ATLAS, LHCb, dteam, SKA | test
  https://vm33.in.tier2.hep.manchester.ac.uk:443/dpm/tier2.hep.manchester.ac.uk/home/dteam
  root://vm33.in.tier2.hep.manchester.ac.uk:1094/dpm/tier2.hep.manchester.ac.uk/home/dteam

UKI-NORTHGRID-MAN-HEP | http, xrootd | DPM 1.13.0 | Alessandra Forti | ATLAS, LHCb, dteam, SKA | prod
  https://bohr3226.tier2.hep.manchester.ac.uk:443/dpm/tier2.hep.manchester.ac.uk/home/dteam
  root://bohr3226.tier2.hep.manchester.ac.uk:1094/dpm/tier2.hep.manchester.ac.uk/home/dteam

UKI-NORTHGRID-LANCS-HEP | http, xrootd | DPM 1.13.0 | Matt Doidge | ATLAS, dteam | prod
  https://fal-pygrid-30.lancs.ac.uk:443/dpm/lancs.ac.uk/home/dteam
  root://fal-pygrid-30.lancs.ac.uk:1094/dpm/lancs.ac.uk/home/dteam

TOKYO-LCG2 | http, xrootd | DPM 1.12.1 | Tomoe Kishimoto | ATLAS, dteam | prod
  https://lcg-se01.icepp.jp:18443/dpm/icepp.jp/home/dteam
  root://lcg-se01.icepp.jp:1094/dpm/icepp.jp/home/dteam
  gsiftp://lcg-se01.icepp.jp:2811/dpm/icepp.jp/home/dteam

CERN DPM trunk testbed [*3][*2] | gsiftp, http, xrootd | DPM 1.13.1 | Fabrizio Furano | ATLAS, CMS, LHCb, dteam | test
  Base path:
    https://dpmhead-trunk.cern.ch/dpm/cern.ch/home/
    gsiftp://dpmhead-trunk.cern.ch/dpm/cern.ch/home/
    root://dpmhead-trunk.cern.ch//dpm/cern.ch/home/
  dteam:
    https://dpmhead-trunk.cern.ch/dpm/cern.ch/home/dteam
  ATLAS path:
    https://dpmhead-trunk.cern.ch/dpm/cern.ch/home/atlas/domatests
    gsiftp://dpmhead-trunk.cern.ch/dpm/cern.ch/home/atlas/domatests
    root://dpmhead-trunk.cern.ch//dpm/cern.ch/home/atlas/domatests

CERN DPM release candidate testbed [*2] | gsiftp, http, xrootd | DPM 1.13.2 | Fabrizio Furano | ATLAS, CMS, LHCb, dteam | test
  Base path:
    https://dpmhead-rc.cern.ch/dpm/cern.ch/home/
    gsiftp://dpmhead-rc.cern.ch/dpm/cern.ch/home/
    root://dpmhead-rc.cern.ch//dpm/cern.ch/home/
  dteam:
    https://dpmhead-rc.cern.ch/dpm/cern.ch/home/dteam
  ATLAS path:
    https://dpmhead-rc.cern.ch/dpm/cern.ch/home/atlas/domatests
    gsiftp://dpmhead-rc.cern.ch/dpm/cern.ch/home/atlas/domatests
    root://dpmhead-rc.cern.ch//dpm/cern.ch/home/atlas/domatests

DESY-prometheus [*] | gsiftp, http, xrootd | dCache 5.1.0-SNAPSHOT | Paul Millar | ATLAS, CMS, LHCb, ALICE, dteam | test
  https://prometheus.desy.de:2443/VOs/atlas
  https://prometheus.desy.de:2443/VOs/cms
  https://prometheus.desy.de:2443/VOs/dteam
  ...etc...
  root://prometheus.desy.de:1095/VOs/atlas
  root://prometheus.desy.de:1095/VOs/cms
  root://prometheus.desy.de:1095/VOs/dteam
  ...etc...

DESY-DOMA | gsiftp, http | dCache 5.2.1 | Christian Voss (& Paul Millar) | ATLAS, CMS, LHCb, ALICE, dteam | test
  https://dcache-se-doma.desy.de:2880/dteam
  https://dcache-se-doma.desy.de:2880/atlas
  https://dcache-se-doma.desy.de:2880/cms
  https://dcache-se-doma.desy.de:2880/lhcb

AGLT2 | http, xrootd | dCache 5.2.16 | Shawn McKee | ATLAS | prod
  https://head01.aglt2.org:2880/pnfs/aglt2.org/dteam/
  root://xrootd.aglt2.org:1094/pnfs/aglt2.org/dteam

BNL | xrootd | dCache (xrootd proxy) ?/5.2.x | Hironori Ito, Jane Liu | ATLAS, dteam | prod
  root://dcdoor16.usatlas.bnl.gov//pnfs/usatlas.bnl.gov/users/hiroito/testtpc
  root://dcachetest04.usatlas.bnl.gov:1096//pnfs/usatlas.bnl.gov/data/dteam/

UKI-LT2-IC-HEP | http | dCache 3.2.39 | Duncan Rand | CMS, dteam | prod
  cms: https://gfe02.grid.hep.ph.ic.ac.uk:2880/pnfs/hep.ph.ic.ac.uk/data/cms/store/test/davs
  dteam: https://gfe02.grid.hep.ph.ic.ac.uk:2880/pnfs/hep.ph.ic.ac.uk/data/dteam/wlcg-tpc

PIC | http, xrootd | dCache 5.2.16 | Pepe Flix | ATLAS, CMS, dteam, wlcg | prod
  [http]
  dteam: https://webdav-dteam.pic.es:8448/tpc-test/
  ATLAS: https://webdav-at1.pic.es:8446/tpc-test/
  CMS: https://door03.pic.es:8459/tpc-test
  wlcg: https://door02.pic.es:8452/
  [xrootd]
  dteam: root://xrootd.pic.es//pnfs/pic.es/data/dteam/tpc-test/
  ATLAS: root://xrootd-at1-door.pic.es
  CMS: root://xrootd-cms-door.pic.es//pnfs/pic.es/data/cms/tpc-test
  wlcg: root://xrootd.pic.es//pnfs/pic.es/data/wlcg

CERN EOS | xrootd, https, gsiftp | EOS 4.7.x (not yet released) | Elvin Sindrilaru, Andreas Peters, Xavier Espinal | ATLAS, ALICE, CMS, LHCb, dteam | test
  root://eospps.cern.ch//eos/opstest/tpc/
  https://eospps.cern.ch:9000//eos/opstest/tpc/

INFN-T1 | http | StoRM WebDAV 1.1.0-SNAPSHOT | Lucia Morganti, Andrea Ceccanti (INFN-CNAF) | ATLAS (VO dteam) | prod
  https://xfer.cr.cnaf.infn.it:8443/dteam

INFN-NAPOLI-ATLAS | http, xrootd, gsiftp | DPM 1.13.2 | Alessandra Doria | ATLAS, dteam, LHCb | prod
  https://t2-dpm-01.na.infn.it:443/dpm/na.infn.it/home/dteam
  root://t2-dpm-01.na.infn.it:1094/dpm/na.infn.it/home/dteam
  gsiftp://t2-dpm-01.na.infn.it:2811/dpm/na.infn.it/home/dteam

UKI-LT2-QMUL | http | StoRM | Duncan Rand, Dan Traynor | ATLAS, dteam | prod
  https://se03.esc.qmul.ac.uk:8443/atlasdatadisk
  https://se03.esc.qmul.ac.uk:8443/atlasscratchdisk
  https://se03.esc.qmul.ac.uk:8443/cms
  https://se03.esc.qmul.ac.uk:8443/lhcb
  https://se03.esc.qmul.ac.uk:8443/dteam

SLAC | https, xrootd | xrootd 5.3.0 | Wei Yang | ATLAS, dteam | prod/test
  ATLAS: root://osggridftp01.slac.stanford.edu:2094//xrootd/atlas/atlas{data,scratch,...}disk
         https://osggridftp01.slac.stanford.edu:2094//xrootd/atlas/atlas{data,scratch,...}disk
  dteam: root://osggridftp01.slac.stanford.edu:2094//xrootd/atlas/tpctest
         https://osggridftp01.slac.stanford.edu:2094//xrootd/atlas/tpctest

OU | https, xrootd, gsiftp | xrootd 5.3.1 (4.12.1 on backend storage) | Horst Severini | ATLAS, dteam, wlcg | prod
  ATLAS: https:// and root://se1.oscer.ou.edu//xrd/atlas{data, scratch}disk/
  dteam: https:// and root://se1.oscer.ou.edu//xrd/dteam/doma/
  wlcg: https:// and root://se1.oscer.ou.edu//xrd/wlcg/doma/

SWT2_CPB | https, xrootd | xrootd 5.3.1 | Patrick McGuigan | ATLAS, dteam, wlcg | prod
  ATLAS: {https, root}://gk06.atlas-swt2.org//xrd/[datadisk,atlasscratchdisk]
  dteam: {https, root}://gk06.atlas-swt2.org//xrd/dteam
  wlcg: {https, root}://gk06.atlas-swt2.org//xrd/wlcg

Nebraska | xrootd, HTTP, GridFTP | HDFS (4.9.0 pre-release) | Brian Bockelman | CMS, dteam | test
  CMS:
    root://red-gridftp12.unl.edu:1094/store/test/
    https://red-gridftp12.unl.edu:1094/store/test/
    gsiftp://red-gridftp12.unl.edu/user/uscms01/pnfs/unl.edu/data4/cms/store
  dteam:
    root://red-gridftp12.unl.edu:1094/user/dteam/
    https://red-gridftp12.unl.edu:1094/user/dteam/
    gsiftp://red-gridftp12.unl.edu/user/dteam/
  [*5]

UNI-BONN | xrootd and xrootd-HTTP | CephFS (5.6.2) | Oliver Freyermuth, Michael Hübner | ATLAS, dteam, wlcg | prod
  ATLAS Rucio endpoint in production (ATLAS-CRIC, WLCG-CRIC). Testing possible via:
    root://xrootd.physik.uni-bonn.de//cephfs/grid/atlas/user/scratch/ and
    https://xrootd.physik.uni-bonn.de:1094//cephfs/grid/atlas/user/scratch/ [*6]
  dteam endpoints:
    root://xrootd.physik.uni-bonn.de//cephfs/grid/dteam/
    https://xrootd.physik.uni-bonn.de:1094//cephfs/grid/dteam/ [*7]
  wlcg endpoints (can be used with tokens / OIDC and X.509):
    root://xrootd.physik.uni-bonn.de//cephfs/grid/wlcg/
    https://xrootd.physik.uni-bonn.de:1094//cephfs/grid/wlcg/ [*7]

FNAL | gsiftp, https, xrootd | dCache 5.0.0-SNAPSHOT | Albert L. Rossi, Dmitry Litvintsev | dteam | test
  Testbed endpoints:
    root://stkendca06a.fnal.gov:1094//pnfs/fnal.gov/VOs/dteam
    https://stkendca06a.fnal.gov:2880/pnfs/fnal.gov/VOs/dteam/
  For the stress tests, use these URLs (persistent on-disk storage, not garbage-collected):
    root://stkendca06a.fnal.gov:1094//pnfs/fnal.gov/VOs/dteam/stress
    https://stkendca06a.fnal.gov:2880/pnfs/fnal.gov/VOs/dteam/stress

BEIJING-LCG2 | http, xrootd, gridftp | DPM 1.10.5 / dCache 2.16.22 | Xiaofei Yan, Xiaomei Zhang | ATLAS, CMS, LHCb, dteam | test/prod
  dteam:
    https://dpmtest.ihep.ac.cn/dpm/ihep.ac.cn/home/dteam/
    root://dpmtest.ihep.ac.cn:1094/dpm/ihep.ac.cn/home/dteam/
    gsiftp://cmspn001.ihep.ac.cn:2811/pnfs/ihep.ac.cn/data/dteam
  CMS:
    root://seadmin.ihep.ac.cn:1094//pnfs/ihep.ac.cn/data/cms/store/test

DYNA-CERN-CLOUD | https | Dynafed 1.4.0 | Frank Berghaus | ATLAS, dteam | prod
  https://dynafed-atlas.cern.ch:443/data/cloud/tpc

DYNA-CERN-GRID | https | Dynafed 1.4.0 | Frank Berghaus | ATLAS, dteam | prod
  https://dynafed-atlas.cern.ch:443/data/grid/tpc

IN2P3 | https, xrootd | dCache 8.2.8 | Adrien Georget | ATLAS, dteam, LHCb | prod
  dteam: https://ccdcatli458.in2p3.fr:2880/dteam
  ATLAS: https://ccdavatlas.in2p3.fr:2880/
         root://ccxrdatlastpc.in2p3.fr:1094/pnfs/in2p3.fr/data/atlas/
  LHCb: https://ccdavlhcb.in2p3.fr:2880/lhcb

IN2P3-Test | https | dCache 8.2.2 | Adrien Georget | dteam, ATLAS | test
  dteam: https://ccdcalitest11:2880/
  ATLAS: root://ccdcalitest11.in2p3.fr:1094/pnfs/in2p3.fr/atlas/
  ATLAS: https://ccdcalitest11.in2p3.fr:2880/pnfs/in2p3.fr/data/atlas/

IN2P3-DOMA | xrootd | XRootD 4.11.0 | Yvan Calas, Eric Fede | dteam | test
  dteam: root://ccxrdli283.in2p3.fr:1094//xrootd/in2p3.fr/disk/dteam/doma

Brussels | https, xrootd, gsiftp | dCache 4.2.18 | Olivier Devroede | dteam | prod
  https://maite.iihe.ac.be:2880/pnfs/iihe/dteam
  root://maite.iihe.ac.be/pnfs/iihe/dteam
  gsiftp://maite.iihe.ac.be/pnfs/iihe/dteam

Florida | https | XRootD 4.9.0-rc2 | Bockjoo Kim | dteam | prod?
  https://cmsio3.rc.ufl.edu:1094/store/user/dteam

SURFsara production | gsiftp, https | dCache 5.2.4 | Onno Zweers, Alexander Verkooijen | ATLAS, LHCb, ALICE, dteam | prod
  https://webdav.grid.surfsara.nl:2882/pnfs/grid.sara.nl/data/dteam/DOMA-TPC
  (for webdav door properties, including alternative ports, see http://doc.grid.surfsara.nl/en/latest/Pages/Advanced/storage_clients/webdav.html#available-webdav-doors)

SURFsara test | gsiftp, https (xrootd untested) | dCache 6.0.0.bb39761 snapshot | Onno Zweers, Alexander Verkooijen | dteam | test
  https://dolphin12.grid.surfsara.nl:2882/groups/dteam/DOMA-TPC (webdav doors identical to the production instance above)

NDGF | https | dCache 5.0.5 | Jens Larsson (& Vincent Garonne) | dteam | prod
  https://dav.ndgf.org:443/dteam/tpc/

NDGF-PREPROD | https | dCache 5.0.5 | Jens Larsson (& Vincent Garonne) | dteam | test
  https://preprod-srm.ndgf.org:443/dteam/tpc/

LRZ-LMU | https | dCache 4.2.23 | Christoph Anton Mitterer, Günter Duckeck | dteam | prod
  https://lcg-lrz-http.grid.lrz.de/pnfs/lrz-muenchen.de/data/atlas/atlasdynafeddisk/tpc/domatest

TRIUMF | gsiftp, https, xrootd | dCache 5.2.34 | Xinli Liu | ATLAS, dteam | test
  dteam:
    https://webdav.lcg.triumf.ca:2880/dteam
    https://pps05.lcg.triumf.ca:2880/dteam
    root://pps05.lcg.triumf.ca:1094/dteam
    gsiftp://pps05.lcg.triumf.ca:2811/dteam
  ATLAS:
    https://webdav.lcg.triumf.ca:2880/atlas
    https://pps05.lcg.triumf.ca:2880/atlas
    root://pps05.lcg.triumf.ca:1094/atlas
    gsiftp://pps05.lcg.triumf.ca:2811/atlas

Caltech | xrootd, HTTP, GridFTP | HDFS (4.10.0-1.osg34.el7) | t2admin@hep.caltech.edu | CMS, dteam | prod?
  CMS:
    root://xrootd.ultralight.org:1094/store/test/
    https://xrootd.ultralight.org:1094/store/test/
    gsiftp://transfer.ultralight.org//mnt/hadoop/store/test
  dteam:
    root://xrootd.ultralight.org:1094/store/user/dteam/
    https://xrootd.ultralight.org:1094/store/user/dteam/
    gsiftp://transfer.ultralight.org//mnt/hadoop/store/user/dteam/

UVic | HTTP | Dynafed 1.6.0 | Marcus Ebert, Fernando Fernandez Galindo, Tristan Sullivan | dteam, Belle-II, ATLAS | prod
  dteam: https://dynafed02.heprc.uvic.ca:8443/dteam/
  Belle-II: https://dynafed02.heprc.uvic.ca:8443/belle/
  ATLAS: https://dynafed-atlas.heprc.uvic.ca/dynafed/atlas/
  dteam: https://dynafed-atlas.heprc.uvic.ca/dynafed/dteam/

KIT | HTTP | dCache 5.2.9 | Samuel Ambroj Pérez, Xavier Mol | dteam | test
  https://pps-single-webdav-kit.gridka.de:2880/pnfs/gridka.de/tpc/dteam

UCSD | xrootd, HTTP, GridFTP | HDFS (4.11.1-1.osg34.el7) | Diego Davila, Edgar Fajardo | CMS, dteam | test
  CMS:
    root://redirector.t2.ucsd.edu:1094//store/temp/user/
    https://redirector.t2.ucsd.edu:1094//store/temp/user
    gsiftp://gftp.t2.ucsd.edu/cms/store/user/
  dteam:
    root://redirector.t2.ucsd.edu:1094//store/user/dteam/
    https://redirector.t2.ucsd.edu:1094//store/user/dteam
    gsiftp://gftp.t2.ucsd.edu/cms/store/user/dteam

MWT2 | http | dCache 5.2.15 | Judith Stephen | ATLAS | prod
  https://webdav.mwt2.org:2881/dteam

TAIWAN-LCG2 | http, xrootd | DPM 1.13.2 | Chien-De Li | ATLAS, dteam | prod
  root://f-dpm000.grid.sinica.edu.tw//dpm/grid.sinica.edu.tw/home/dteam/
  root://f-dpm000.grid.sinica.edu.tw//dpm/grid.sinica.edu.tw/home/atlas/atlasscratchdisk/
  https://f-dpm000.grid.sinica.edu.tw//dpm/grid.sinica.edu.tw/home/dteam/
  https://f-dpm000.grid.sinica.edu.tw//dpm/grid.sinica.edu.tw/home/atlas/atlasscratchdisk/

Wisconsin | xrootd, HTTP, GridFTP | HDFS (4.10.1) | Ajit Mohapatra, Carl Vuosalo | CMS, dteam | test/prod
  CMS:
    root://pubxrootd.hep.wisc.edu//store/temp/user
    davs://pubxrootd.hep.wisc.edu:1094//store/temp/user
    gsiftp://cms-lvs-gridftp.hep.wisc.edu//hdfs//store/temp/user
  dteam:
    root://pubxrootd.hep.wisc.edu//osg/vo/dteam
    davs://pubxrootd.hep.wisc.edu:1094//osg/vo/dteam
    gsiftp://cms-lvs-gridftp.hep.wisc.edu//hdfs/osg/vo/dteam

Vanderbilt | xrootd, HTTP, gsiftp | LStore (r683) | Andrew Melo | dteam | test
  gsiftp://gridftp-vanderbilt.sites.opensciencegrid.org/store/user/dteam
  davs://xrootd-vanderbilt.sites.opensciencegrid.org:1094//store/user/dteam
  root://xrootd-vanderbilt.sites.opensciencegrid.org:1094//store/user/dteam
[*] Machine rebuilt daily at 06:00 CEST, limited capacity available.
[*2] Limited capacity available.
[*3] May have experimental features
[*5] The red-gridftp12.unl.edu server has higher levels of logging and is simpler for debugging transfer failures. For scale work, replace it with the load-balanced hostname, xrootd-local.unl.edu.
[*6] Limited space (~ 100 TB) available for the scratch endpoints, files not accessed for 7 days are purged each night.
[*7] Limited space (~ 10 TB) available

Features vs Storages and Protocols matrix

Storage/Protocol | Server authentication | Query checksum | Upload-with-checksum | X.509 delegation | Bearer token
dCache/http | No | Yes | Available in 5.0* | Yes | Yes
dCache/xrootd | Yes | Yes | No | No | No
DPM/http | No | DOME 1.10.4 | No | Yes | Yes
DPM/xrootd | | DOME 1.11.0 | No | Yes | No (default)
echo/http | | | | |
echo/xrootd | | | | |
EOS/http | Yes | Yes | Yes | Yes | Yes
EOS/xrootd | Yes | Yes | Yes | Yes | Yes
StoRM/http | Available in 1.1.0* | Yes | | Available in 1.1.0* | Available in 1.1.0*
xrootd/http | No | Yes | No | No | Yes
xrootd/xrootd | Yes | Yes | No | Yes | No
Dynafed/http | Yes | Yes | Depends** | Yes | OIDC: yes; Macaroons**

Server authentication
the data-bearing TCP connection is authenticated using a non-delegated X.509 credential; for example, a host credential or a robot credential.
Query checksum
the ability to query a file's checksum; for example, xrootd kXR_query (with kXR_Qcksum) or HTTP RFC 3230 (Want-Digest request header in HTTP GET or HEAD requests).
Upload-with-checksum
the ability to include a file's checksum when uploading that file's content, with the server rejecting the upload if the content is corrupt; for example, RFC 1544 (Content-MD5 request header in HTTP PUT request).
X.509 delegation
the ability for a client to delegate an X.509 credential to the server, which the server then uses to authenticate the data-bearing TCP connection; for example, GridSite delegation or xrootd GSI delegation.
Bearer token
when acting as the passive party, the server issues a bearer token that the TPC client may pass to the active server; this token is then used to authorise the data transfer. Macaroons and OAuth2 access tokens are examples of such tokens.

*: currently in pre-release
**: depends on the storage backend that Dynafed is federating
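To make the two checksum columns concrete, here is a hedged sketch of both HTTP mechanisms using the python-requests library; the endpoint URL is a placeholder, and a real WLCG endpoint would additionally require X.509 or token credentials:

```python
# Hedged sketch of the two checksum features over HTTP; the endpoint
# URL is a placeholder and authentication is omitted for brevity.
import base64
import hashlib

import requests

url = "https://storage.example.org:2880/dteam/testfile"  # placeholder

# Query checksum (RFC 3230): request an ADLER32 digest; a compliant
# server replies with a header such as "Digest: adler32=03da0195".
reply = requests.head(url, headers={"Want-Digest": "adler32"})
print(reply.headers.get("Digest"))

# Upload-with-checksum (RFC 1544): Content-MD5 carries the
# base64-encoded MD5 of the body; the server rejects the PUT if the
# received content does not match.
payload = b"example payload"
md5_b64 = base64.b64encode(hashlib.md5(payload).digest()).decode()
requests.put(url, data=payload,
             headers={"Content-MD5": md5_b64}).raise_for_status()
```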

Proposed evaluation criteria

The following is a suggestion for how we can evaluate different protocol options. These criteria have not yet been ratified.

The criteria are split into two sections: Requirements and Desirable. The required criteria must be satisfied before a protocol is acceptable, while the desirable criteria are optional.

Requirements:

These criteria must be satisfied.

R1. Well documented

GridFTP builds on a very well-established protocol (FTP) by adding extensions that are themselves well-documented. The OGF standardisation process has yielded both "normative" (GFD.20 & GFD.47) and "informative" documents (GFD...)

Any replacement protocol must have an equivalent level of documentation.

R2. Multiple implementations

We currently enjoy multiple, completely independent implementations of the GridFTP protocol: various server and client software, used in production, that speak the protocol.

Multiple implementations are commonly required by standardisation bodies. They bring several advantages, including exercising the documentation, and they avoid a monoculture (and the problems that it brings).

Any replacement protocol must have multiple independent implementations.

R3. As secure as GridFTP

(This is almost an anti-requirement!)

When establishing a GridFTP transfer between sites, the control channel is encrypted, but the data channel is not. Therefore there is no data privacy, and data integrity is somewhat weakened.

For example, if an adversary happens to learn the transferred file's ADLER32 checksum and the target data endpoint, then she could inject carefully crafted corrupted data, provided the rogue TCP connection is established before the genuine TPC source server connects.

Any replacement protocol must have at least this level of security.

R4. Support multi-VO storage systems

Many WLCG sites have a single storage system that supports multiple VOs. This works fine with GridFTP, as any TPC transfer is authorized based solely on the identity the client presents.

Any replacement protocol must not require sites to run independent storage systems for different VOs.

Desirable:

Although not required, it would be good if a protocol satisfies these criteria.

D1. Improved security

Although the GridFTP protocol provides some support for encrypted data channels, WLCG has (historically) not used this feature. This decision may be revised in the future.

Although that decision lies outside the mandate of this group, it is something we may consider when choosing a replacement protocol.

Additionally, some WLCG sites also support non-WLCG communities that have stronger data-secrecy and data-integrity requirements than WLCG.

A protocol that provides a stronger data security model may be beneficial to WLCG by:

a) allowing a smooth transition to a stronger integrity and privacy data model,

b) providing an economy-of-scale: more users implies better and more widely adopted software.

It is desirable that any replacement protocol supports transferring data with stronger integrity and privacy than currently available through GridFTP.

D2. Support universal endpoints

Currently, if a storage system supports multiple GridFTP endpoints then all users (from any VO) can use any of these GridFTP endpoints to initiate a TPC transfer. In other words, endpoints are not required to be separated by VO.

It is desirable that any replacement protocol allows an implementation to support all its users with a common pool of endpoints.

D3. Support non-X.509

Considerable effort is underway in exploring and charting authentication models that are alternatives to X.509. Examples include eduGAIN/SAML, OpenID Connect, SciTokens (https://scitokens.org) and Macaroons.

It is desirable that any replacement protocol supports transferring data without requiring X.509. A sketch of one token-based approach follows.
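As one concrete illustration, dCache WebDAV doors can issue macaroons in response to an HTTP POST. The sketch below uses a placeholder door URL and proxy path, and note that the token-issuing request itself is still authenticated (here with an X.509 proxy):

```python
# Hedged sketch: obtain a macaroon (bearer token) from a dCache
# WebDAV door. Door URL and proxy path are placeholders; the POST
# itself is authenticated with an X.509 proxy certificate.
import json

import requests

door = "https://dcache.example.org:2880/dteam/"   # placeholder door
proxy = "/tmp/x509up_u1000"                       # placeholder proxy

request_body = {
    "caveats": ["activity:DOWNLOAD,LIST"],  # limit what the token permits
    "validity": "PT1H",                     # ISO 8601 duration: one hour
}

reply = requests.post(
    door,
    data=json.dumps(request_body),
    headers={"Content-Type": "application/macaroon-request"},
    cert=(proxy, proxy),
)
token = reply.json()["macaroon"]
# The token is then presented as "Authorization: Bearer <token>".
```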

D4. Works with industry

Currently, WLCG requires that both the source and destination TPC endpoints support the GridFTP protocol. However, data is increasingly available through non-FTP protocols; for example, data stored in Amazon S3 is available directly using HTTP.

It is desirable that any replacement protocol should support transferring data to industry standard endpoints.

Publications
