Task Force (TF) workshop week of 2005-10-17
Executive summary:
At the workshop we did:
- installation/configuration/re-structuring of the CERN VOMS servers
- a meeting with VO managers to discuss the outcome of their pilot testing
- meetings with database and system administration experts and the LCG management
  to discuss improvements and remaining problems.
The final agenda, at the end of these notes, describes the working sessions in detail.
The workshop was very useful because it led to:
- Rapid integration of two voms-admin bug fixes into gLite Release 1.4, thanks to
  direct discussions between code developers, integrators and deployers.
- Complete re-installation of VOMS and VOMRS (all on Oracle) on the CERN servers.
  Please see the new architecture in the table of hosts and software versions on
  this page. In summary:
- lcg-voms.cern.ch contains only recently registered members, who went through
  the native VOMRS interface.
- voms.cern.ch contains all the (LHC Experiments and dteam) VO members who
  registered via lcg-registrar.cern.ch (some of them have been in the VOs for
  over 2 years already). We build this VOMS database via LDAP synchronisation,
  i.e. by listing the LDAP directory entries every few hours (see the sketch
  after this list). This will go away at some point next year.
- voms-slave.cern.ch is identical to lcg-voms.cern.ch, always up and running,
  but its existence is deliberately not advertised. If lcg-voms.cern.ch has a
  hardware failure, the DNS alias 'lcg-voms' should be given to voms-slave and
  the service will continue undisturbed.
- Move of the VOMS site configuration to a remote host (lxb2051.cern.ch). This
  centrally maintained configuration facilitates moves to new hardware and
  replication on the present hot-spare VOMS server (voms-slave.cern.ch).
- VOMRS installation and configuration by Tanya for 9 VOs on 2 hosts, plus code
  changes based on VO managers' requests. Up-to-date documentation and tutorials
  for VO managers and users are available.
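To illustrate the LDAP synchronisation mentioned above, here is a minimal sketch,
not the actual production job: it assumes the python-ldap module, a hypothetical
directory layout and object class on lcg-registrar.cern.ch, and a hypothetical
add_member() hook into the VOMS database.

    # Minimal sketch of the periodic LDAP-to-VOMS synchronisation, NOT the
    # production mechanism. The directory layout, object class, attribute
    # names and the add_member() hook are assumptions for illustration only.
    import ldap

    REGISTRAR_URI = "ldap://lcg-registrar.cern.ch"  # assumed LDAP URI
    VO_BASE = "ou=%s,o=registrar"                   # hypothetical base DN per VO

    def list_vo_members(vo):
        """Return (subject DN, CA) pairs for all registered members of a VO."""
        conn = ldap.initialize(REGISTRAR_URI)
        conn.simple_bind_s()                        # anonymous bind
        entries = conn.search_s(VO_BASE % vo, ldap.SCOPE_SUBTREE,
                                "(objectClass=lcgUser)",   # assumed class
                                ["subject", "ca"])         # assumed attributes
        conn.unbind_s()
        return [(attrs["subject"][0], attrs["ca"][0]) for _dn, attrs in entries]

    def sync_vo(vo, voms_members, add_member):
        """Insert into VOMS every registrar entry not already present."""
        for subject, ca in list_vo_members(vo):
            if (subject, ca) not in voms_members:
                add_member(vo, subject, ca)         # hypothetical VOMS insert

Run from cron every few hours, a script of this shape would keep the VOMS database
on voms.cern.ch in step with the registrar until the LDAP path is retired.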
By the time these notes are published, the situation with our most important
problem, i.e. the Tomcat performance on the primary VOMS server, seems to be
improving. Details, and the important problems that remain, are at
http://cern.ch/dimou/lcg/voms/StatusFall2005
Overall conclusion:
This Task Force was mandated after the March 8th 2004 GDB. We held regular
checkpoint meetings and made a detailed plan for the final count-down. So far,
however, the VOMS product has not reached the quality necessary to migrate the
experiments' user community.
Other technical documents:
Next workshop:
The week of March 13th 2006 (date to be confirmed). The workshop is definitely
necessary.
Participants of the October workshop:
Tim Bell: Wednesday 19/10 14:30-16:00 hrs
Miguel Anjo: Tuesday 18/10 16:00-18:00 hrs and Wednesday 19/10 15:00-18:00 hrs.
Alberto di Meglio: Wednesday 19/10 10:00-11:30 hrs.
LHC Experiments' VO managers: Thursday 20/10 15:00-18:00 hrs.
Ian Bird: Friday 21/10 10:00-10:30 hrs.
Tanya, Maria, Karoly, Valerio and Vincenzo will do the installations and plan
the service deployment.
Ian Neilson will join us for a couple of hours at the beginning and at the end
of the week.
Joni will join for part of the week, for JRA3 issues and the gLite release
situation.
Absences: Jamie Shiers. He was invited to convey the importance of VOMS for the
worldwide LCG service on Tuesday 18/10 9:30-10:00 hrs.
Final agenda
Monday 2005-10-17:
Install and configure lcg-voms.cern.ch and voms.cern.ch with:
1. The VOMS Oracle port as in gLite R1.4.1 (I hope to obtain the package this
   week; R1.4 is already installed on host voms-oracle.cern.ch).
2. VOMRS on the latest VOMS (all Oracle, already installed on
   voms-oracle.cern.ch).
Tuesday 2005-10-18:
3. Meet the LCG service coordinators at 9:30am (Tim Bell, Jamie Shiers).
   J. Shiers never came. T. Bell came and made very useful suggestions on the
   hot-spare on Wednesday afternoon.
4. Meet Alberto di Meglio and Ian Bird at 10:30am to discuss VOMS versions in
   gLite releases and the migration plan.
5. Perform the MySQL-to-Oracle db migration (we don't have this script yet.
   Vincenzo? See the sketch after this day's items). If we complete the R1.4.1
   installation on a new machine, e.g. voms-oracle.cern.ch, and swap the DNS
   aliases at the last moment to avoid service interruption, how do we move the
   user population to a different host with a different database on a separate
   backend?
6. The database backend is defined to be the centrally managed db pre-production
   server grid8. Discuss data integrity and replication. (Miguel Anjo, Tuesday
   at 16hrs, please join us for this item.)
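Item 5 above notes that the MySQL-to-Oracle script does not exist yet. Purely as
a sketch of the shape such a table-by-table copy could take, assuming the MySQLdb
and cx_Oracle Python modules and placeholder connection details, table and column
names (the real VOMS schema migration would of course be more involved):

    # Minimal sketch of a MySQL-to-Oracle table copy, NOT the awaited VOMS
    # migration script. Connection strings, table and column names are all
    # placeholders; the real schema has many tables and constraints.
    import MySQLdb
    import cx_Oracle

    def copy_table(table, columns):
        src = MySQLdb.connect(host="localhost", db="voms_mysql")  # assumed source
        dst = cx_Oracle.connect("voms/secret@grid8")              # assumed target
        cols = ", ".join(columns)
        binds = ", ".join(":%d" % (i + 1) for i in range(len(columns)))
        read = src.cursor()
        read.execute("SELECT %s FROM %s" % (cols, table))
        write = dst.cursor()
        write.executemany("INSERT INTO %s (%s) VALUES (%s)" % (table, cols, binds),
                          read.fetchall())
        dst.commit()
        src.close()
        dst.close()

    # e.g. copy_table("usr", ["userid", "dn", "ca"])  # placeholder schema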
Wednesday 2005-10-19:
7. Prepare the hot-spare host. Document the VOMS server switch procedure (a
   sketch of a sanity check for the switch follows). (Miguel Anjo, Wednesday at
   15hrs, please join us for this item.)
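As a small illustration for the switch procedure to be documented, a sanity check
along these lines could confirm which physical host the 'lcg-voms' alias resolves
to before and after the swap; this is an assumed example, not the CERN DNS
procedure itself.

    # Minimal sketch of a sanity check for the VOMS server switch: report
    # which canonical host the 'lcg-voms' DNS alias currently points to.
    # Illustrative only; not the documented CERN switch procedure.
    import socket

    ALIAS = "lcg-voms.cern.ch"
    HOT_SPARE = "voms-slave.cern.ch"

    def alias_target(alias):
        """Return the canonical hostname behind a DNS alias."""
        canonical, _aliases, _addresses = socket.gethostbyname_ex(alias)
        return canonical

    if __name__ == "__main__":
        target = alias_target(ALIAS)
        if target == HOT_SPARE:
            print("lcg-voms currently points at the hot spare: " + target)
        else:
            print("lcg-voms currently points at: " + target)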
Thursday 2005-10-20:
8. Plan review.
9. Meet the VO managers. Those at CERN, please come to room 28-R-006 at 15hrs
   CEST. Those remote, please call +41 22 767 7000 and ask to join the meeting:
   Name: LCG Registration Task Force
   Responsible: Maria Dimou
   Date: Thursday 20/10 between 15 and 17hrs CEST
Friday 2005-10-21:
10. Discuss how to let site managers/admins/security contacts get personal user
    data (email). Sites are pressing for this requirement.
11. Discuss how VOs that should register via VOMRS can be prevented from going
    directly to the https://voms-server:8443/voms/VOname page.
12. A.O.B.
VOMS installations' situation before the workshop:
Hostnames and VOMS versions installed at CERN until October 17th 2005:
- lcg-voms.cern.ch: runs gLite VOMS R1.4 since 2005-10-13. This is the official
  server. The VO database will be built with new registrations using VOMRS and
  the ORGDB link.
- voms.cern.ch: runs gLite VOMS R1.3. The VO database is populated via
  ldap-sync.
- voms-oracle.cern.ch: runs gLite VOMS pre-release R1.4 (status of 2005-09-16).
  VO=test is configured in VOMRS too.
- glite-voms.cern.ch: runs gLite VOMS R1.3. This is a TEST machine, not visible
  from outside CERN.
Maria Dimou, IT/GD, Grid Infrastructure Services