ARC-CE token configuration tips
ARC-CE version
Although the ARC-CE REST interface and basic token support have been available for quite some time (since 6.9 / 6.6), it is recommended to use the latest stable ARC-CE version from either the EPEL or UMD repository.
Enabling REST interface
- Enable the ARC Web Service interface:
  - Add [arex/ws] and [arex/ws/jobs] blocks in /etc/arc.conf
  - In [arex/ws/jobs] add allowaccess options enabling the same authgroups that are configured in [gridftpd/jobs]
- Enable the ARC REST service with arcctl service enable --service arc-arex-ws followed by arcctl service enable --now -a
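The steps above correspond to a minimal sketch like the following in /etc/arc.conf (the authgroup name all_vos is a placeholder for whatever authgroups you already use in [gridftpd/jobs]):

```
[arex/ws]

[arex/ws/jobs]
# allow the same authgroups that are configured in [gridftpd/jobs]
# (all_vos is a placeholder authgroup name)
allowaccess = all_vos
```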
Firewall configuration
ARC-CE web services by default listen on the standard HTTPS port 443. This port must be reachable by all clients that manage ARC-CE jobs.
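A quick way to verify reachability from a client network is a plain TCP connect test; this sketch (with arc1.example.com as a placeholder hostname) only checks that port 443 accepts connections, not that the service itself works:

```shell
# TCP connect test against the CE's HTTPS port (arc1.example.com is a placeholder)
host=arc1.example.com
if timeout 3 bash -c "exec 3<>/dev/tcp/$host/443" 2>/dev/null; then
  msg="port 443 on $host is reachable"
else
  msg="port 443 on $host is NOT reachable"
fi
echo "$msg"
```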
Test job submission using REST API
Make sure the ARC client has the nordugrid-arc-plugins-arcrest (or nordugrid-arc6-plugins-arcrest) package installed, otherwise submission will fail.
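On an RPM-based client host the check can be sketched like this (adjust for your package manager):

```shell
# Check whether either REST plugin package is present (RPM-based systems assumed)
if rpm -q nordugrid-arc-plugins-arcrest >/dev/null 2>&1 \
   || rpm -q nordugrid-arc6-plugins-arcrest >/dev/null 2>&1; then
  msg="REST plugin installed"
else
  msg="REST plugin missing - submission via org.nordugrid.arcrest will fail"
fi
echo "$msg"
```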
The list of supported endpoints reported by arcinfo should include the org.nordugrid.arcrest interface:
$ arcinfo -c arc1.farm.particle.cz
Computing service: (production)
Information endpoint: ldap://arc1.farm.particle.cz:2135/Mds-Vo-Name=local,o=grid (org.nordugrid.ldapng)
Information endpoint: https://arc1.farm.particle.cz:443/arex (org.nordugrid.arcrest)
Information endpoint: https://arc1.farm.particle.cz:443/arex (org.ogf.glue.emies.resourceinfo)
Submission endpoint: https://arc1.farm.particle.cz:443/arex (status: ok, interface: org.nordugrid.arcrest)
Submission endpoint: https://arc1.farm.particle.cz:443/arex (status: ok, interface: org.ogf.glue.emies.activitycreation)
Submission endpoint: gsiftp://arc1.farm.particle.cz:2811/jobs (status: ok, interface: org.nordugrid.gridftpjob)
Submit and manage test job using REST API
$ arcproxy --voms=atlas:/atlas
Enter pass phrase for private key:
Your identity: /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=vokac/CN=610071/CN=Petr Vokac
Contacting VOMS server (named atlas): voms-atlas-auth.app.cern.ch on port: 443
Proxy generation succeeded
Your proxy is valid until: 2022-03-24 11:30:42
$ arctest -d WARNING -c arc1.farm.particle.cz -J 1 -S org.nordugrid.arcrest
Submitting test-job 1:
&( executable = "run.sh" )( arguments = "5" )( stdout = "stdout" )( stderr = "primenumbers" )( cputime = "PT8M" )( inputfiles = ( "run.sh" "http://www.nordugrid.org:80;cache=no/data/run.sh" ) ( "Makefile" "http://download.nordugrid.org:80;cache=no/applications/test/Makefile" ) ( "prime.cpp" "http://download.nordugrid.org:80;cache=no/applications/test/prime.cpp" ) )( outputfiles = ( "primenumbers" "" ) )( gmlog = "gmlog" )( jobname = "arctest1" )( clientxrsl = "&( executable = ""run.sh"" )( arguments = ""5"" )( inputfiles = ( ""run.sh"" ""http://www.nordugrid.org;cache=no/data/run.sh"" ) ( ""Makefile"" ""http://download.nordugrid.org;cache=no/applications/test/Makefile"" ) ( ""prime.cpp"" ""http://download.nordugrid.org;cache=no/applications/test/prime.cpp"" ) )( stderr = ""primenumbers"" )( outputfiles = ( ""primenumbers"" """" ) )( jobname = ""arctest1"" )( stdout = ""stdout"" )( gmlog = ""gmlog"" )( cputime = ""8"" )" )
Client version: nordugrid-arc-6.14.0
Test submitted with jobid: https://arc1.farm.particle.cz:443/arex/rest/1.0/jobs/kIbLDmmWOr0nnoBGSqYX2MjntwGI2oABFKDm3YFKDmCBFKDmMvSFQo
Computing service: (empty)
$ arcstat -d WARNING https://arc1.farm.particle.cz:443/arex/rest/1.0/jobs/kIbLDmmWOr0nnoBGSqYX2MjntwGI2oABFKDm3YFKDmCBFKDmMvSFQo
Job: https://arc1.farm.particle.cz:443/arex/rest/1.0/jobs/kIbLDmmWOr0nnoBGSqYX2MjntwGI2oABFKDm3YFKDmCBFKDmMvSFQo
Name: arctest1
State: Queuing
Waiting Position: 151
Status of 1 jobs was queried, 1 jobs returned information
$ arckill -d WARNING https://arc1.farm.particle.cz:443/arex/rest/1.0/jobs/kIbLDmmWOr0nnoBGSqYX2MjntwGI2oABFKDm3YFKDmCBFKDmMvSFQo
Jobs processed: 1, successfully killed: 1, successfully cleaned: 0
$ arcclean -d WARNING https://arc1.farm.particle.cz:443/arex/rest/1.0/jobs/kIbLDmmWOr0nnoBGSqYX2MjntwGI2oABFKDm3YFKDmCBFKDmMvSFQo
Jobs processed: 1, deleted: 1
Accept jobs with tokens
WLCG tokens for testing
It can be tricky to test a token configuration, because even if the site administrator is a member of the specific VO, they probably don't have privileges to (directly) obtain tokens with the restricted compute.* scopes. In these situations you can use the testing WLCG IAM instance, which allows users to obtain the right tokens once they ask for membership in the /wlcg/pilots group. Assuming your ARC-CE hostname is arc1.example.com, you should add the following configuration to /etc/arc.conf to support job submission with tokens:
[authtokens]
[authgroup: wlcg_iam]
# capability-based authorization using compute.* scopes
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com compute.create *
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com compute.read *
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com compute.modify *
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com compute.cancel *
# group-based authorization using the /wlcg/pilots group
# (LHC experiments prefer capabilities, which is why this is commented out)
authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.example.com * /wlcg/pilots
# this assumes the existence of a local (POSIX) user and group wlcg
[mapping]
map_to_user = wlcg_iam wlcg:wlcg
policy_on_nomap=stop
[arex/ws/jobs]
allowaccess=wlcg_iam
# ...
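When writing authtokens= rules it helps to inspect the issuer (iss), audience (aud) and scope/group claims a token actually carries. The sketch below builds a fake, unsigned token with hypothetical claims only to demonstrate the decoding; the same base64url decoding of the second dot-separated part works for real access tokens:

```shell
# Build a sample (unsigned) token with hypothetical claims
CLAIMS='{"iss":"https://wlcg.cloud.cnaf.infn.it/","aud":"https://arc1.example.com","scope":"compute.create"}'
PAYLOAD=$(printf '%s' "$CLAIMS" | base64 | tr -d '\n=' | tr '+/' '-_')
TOKEN="eyJhbGciOiJub25lIn0.$PAYLOAD."
# Decode: take the second dot-separated field, undo base64url, restore '=' padding
P=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#P} % 4 )) -ne 0 ]; do P="${P}="; done
decoded=$(printf '%s' "$P" | base64 -d)
echo "$decoded"
```

Running this prints the JSON claims back, confirming the round trip; on a real token the decoded claims are what the subject, issuer, audience, scope and group fields of authtokens= are matched against.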
ATLAS (optional)
For all our use-cases ARC-CE 6.x has limited OIDC token support and we still rely on X.509 VOMS proxies even for job submission (status as of 2022). In case you would like to test job submission with tokens, there are basic instructions in the WLCG DOMA token testbed documentation. Most ATLAS sites can currently ignore these configurations unless they are actively participating e.g. in the WLCG AuthZ working group.
Example of an ATLAS configuration with support for X.509 VOMS proxies (without Argus banning!) and ATLAS JWT tokens on an ARC-CE with hostname arc1.farm.particle.cz:
[common]
# set hostname in case `hostname -f` does not return FQDN
#hostname=arc1.farm.particle.cz
x509_host_key=/etc/grid-security/hostkey.pem
x509_host_cert=/etc/grid-security/hostcert.pem
x509_cert_dir=/etc/grid-security/certificates
x509_voms_dir=/etc/grid-security/vomsdir
# http://www.nordugrid.org/documents/arc6/admins/details/auth_and_mapping.html
[authgroup: banana]
subject = /O=Grid/O=Bad Users/CN=The Worst
[authgroup: vo_atlasprd_group]
voms = atlas * production *
[authgroup: vo_atlasplt_group]
voms = atlas * pilot *
[authgroup: vo_atlassgm_group]
voms = atlas * lcgadmin *
[authgroup: vo_atlascz_group]
voms = atlas cz * *
[authgroup: vo_atlas_group]
voms = atlas * * *
[authgroup: vo_dteam_group]
voms = dteam * * *
#[authgroup: vo_wlcg_group]
#voms = wlcg * * *
[authgroup: vomsgroup]
authgroup = vo_atlasprd_group
authgroup = vo_atlasplt_group
authgroup = vo_atlassgm_group
authgroup = vo_atlascz_group
authgroup = vo_atlas_group
authgroup = vo_dteam_group
#authgroup = vo_wlcg_group
[authtokens]
[authgroup: iam_atlasprd_group]
authtokens = 7dee38a3-6ab8-4fe2-9e4c-58039c21d817 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.create *
authtokens = 7dee38a3-6ab8-4fe2-9e4c-58039c21d817 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.read *
authtokens = 7dee38a3-6ab8-4fe2-9e4c-58039c21d817 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.modify *
authtokens = 7dee38a3-6ab8-4fe2-9e4c-58039c21d817 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.cancel *
[authgroup: iam_atlasplt_group]
authtokens = 750e9609-485a-4ed4-bf16-d5cc46c71024 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.create *
authtokens = 750e9609-485a-4ed4-bf16-d5cc46c71024 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.read *
authtokens = 750e9609-485a-4ed4-bf16-d5cc46c71024 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.modify *
authtokens = 750e9609-485a-4ed4-bf16-d5cc46c71024 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.cancel *
[authgroup: iam_atlassgm_group]
authtokens = 5c5d2a4d-9177-3efa-912f-1b4e5c9fb660 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.create *
authtokens = 5c5d2a4d-9177-3efa-912f-1b4e5c9fb660 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.read *
authtokens = 5c5d2a4d-9177-3efa-912f-1b4e5c9fb660 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.modify *
authtokens = 5c5d2a4d-9177-3efa-912f-1b4e5c9fb660 https://atlas-auth.cern.ch/ https://arc1.farm.particle.cz compute.cancel *
# Accept WLCG JWT tokens from DTeam IAM operated by CERN
[authgroup: iam_dteam_group]
authtokens = * https://dteam-auth.cern.ch/ https://arc1.farm.particle.cz compute.create *
authtokens = * https://dteam-auth.cern.ch/ https://arc1.farm.particle.cz compute.read *
authtokens = * https://dteam-auth.cern.ch/ https://arc1.farm.particle.cz compute.modify *
authtokens = * https://dteam-auth.cern.ch/ https://arc1.farm.particle.cz compute.cancel *
# Accept EGI Check-In token from individual user
#[authgroup: iam_egi_group]
#authtokens = 85ff127e07ea6660c727831b99e18e4e96b319283f8d2cc8113f405bad2ba261@egi.eu https://aai.egi.eu/auth/realms/egi * * *
# This instance is used by developers to test WLCG JWT tokens
#[authgroup: iam_wlcg_group]
## scope based authorization
#authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.farm.particle.cz compute.create *
#authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.farm.particle.cz compute.read *
#authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.farm.particle.cz compute.modify *
#authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.farm.particle.cz compute.cancel *
## group based authorization
#authtokens = * https://wlcg.cloud.cnaf.infn.it/ https://arc1.farm.particle.cz * /wlcg/pilots
[authgroup: iamgroup]
authgroup = iam_atlasprd_group
authgroup = iam_atlasplt_group
authgroup = iam_atlassgm_group
authgroup = iam_dteam_group
#authgroup = iam_egi_group
#authgroup = iam_wlcg_group
# map ARC-CE identity to unix user:group
[mapping]
map_to_user = vo_atlasprd_group atlasprd001:atlasprd
map_to_user = vo_atlasplt_group atlasplt001:atlasplt
map_to_user = vo_atlassgm_group atlassgm001:atlassgm
map_to_user = vo_atlascz_group atlascz001:atlcz
map_to_user = vo_atlas_group atlas001:atlas
map_to_user = vo_dteam_group dteam001:dteam
#map_to_user = vo_wlcg_group wlcg001:wlcg
map_to_user = iam_atlasprd_group atlasprd001:atlasprd
map_to_user = iam_atlasplt_group atlasplt001:atlasplt
map_to_user = iam_atlassgm_group atlassgm001:atlassgm
map_to_user = iam_dteam_group dteam001:dteam
#map_to_user = iam_egi_group egi001:egi
#map_to_user = iam_wlcg_group wlcg001:wlcg
policy_on_nomap=stop
[arex]
# ... use your own preferred configuration ...
[arex/ws]
# ... use your own preferred configuration ...
[arex/ws/jobs]
#allownew=yes
denyaccess = banana
allowaccess=vomsgroup
allowaccess=iamgroup
# ... use your own preferred configuration for [infosys/*]
# ... use your own preferred configuration for [queue:*]
CMS
CMS already started to ask ARC-CE sites to configure tokens at the end of July 2022 (GGUS tickets).
WLCG sites status
--
PetrVokac - 2022-03-23