-- JamieShiers - 25 Jul 2006
Actions preparing for Q3/Q4 (CMS CSA06 etc.)
ATLAS Actions
ATLAS Sites
| Request Date | Due Date | Action | Requestor | Target | Responsible | Contact | More Info | Status |
| 27 June | 31 August | Monitoring of LFC services | Miguel Branco | ATLAS T1s | Miguel Branco | Miguel Branco [miguel.branco@cern.ch] | See this page for information on LFC monitoring at CERN | Pending - this should be a generic service requirement, independent of any VO (10 July). The LFC is a site-critical service for ATLAS - if it is down, the site is effectively down (17 July). ATLAS is contacting sites directly where LFC service issues have been seen (19 July) |
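Since the LFC is site-critical for ATLAS (an unreachable LFC effectively takes the site down), even a minimal availability probe is useful. Below is a sketch of such a probe, assuming only that the LFC daemon answers TCP connections on its usual port 5010; a real check would also exercise the catalogue itself, e.g. with an lfc-ls against a known path.

```python
import socket

def lfc_alive(host, port=5010, timeout=5.0):
    """Return True if a TCP connection to the LFC service port succeeds.

    This only tests network reachability of the daemon, not catalogue
    functionality; it is a first-line probe, not a full service check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def site_status(lfc_hosts):
    """Map each Tier-1's LFC host to UP/DOWN.  A site whose LFC is
    down is treated as effectively down for ATLAS."""
    return {h: ("UP" if lfc_alive(h) else "DOWN") for h in lfc_hosts}
```

In practice such a probe would run from the experiment's monitoring and raise an alarm to the site contact on the first DOWN result.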
CMS Actions
CMS Sites
| Request Date | Due Date | Action | Requestor | Target | Responsible | Contact | More Info | Status |
| 27 June | ? | Improved performance and reliability of file transfers | Michael Ernst | T0+CMS T1s | Michael Ernst + WLCG Service Coordination Team | Michael.Ernst@cern.ch, it-dep-gd-sc@cern.ch | Daily check-point plus follow-up at the CMS integration task force. More info | Closed |
| 10 July | - | Performance of transfers into CERN | CERN | Michael Ernst | Michael.Ernst@cern.ch | Transfers from Tier-1/2 to Castor at CERN are very slow and time out frequently (Maarten) | See SC Tech mail | Understood - closed |
| 10 July | - | FTS channel architecture clarifications | FTS team | Michael Ernst | Michael.Ernst@cern.ch | - | Define FTS channel architecture for Tier-1 <=> Tier-2 interconnects (James, Gavin, Paolo, Ian, Michael) | July 24 - meeting was held July 18 to address this. See July 19 CMS Task Force meeting |
| 10 July | - | FTS management issues | FTS team | Michael Ernst | Michael.Ernst@cern.ch | - | Tools to monitor transfer activities on FTS channels; provide access to FTS logs (Gavin, Paolo) | Temporary solution being tested |
| 27 June | by July | 3D infrastructure | Michael Ernst | T0+CMS T1s | Michael Ernst, Dirk Duellmann | Michael.Ernst@cern.ch, Dirk.Duellmann@cern.ch | Involves SQUID deployment at T1 and T2 sites | July 24 - required infrastructure in place at T0, T1s and main T2s. To be followed... see CMS database access wiki for more info |
| 27 June | ? | Sites to complete their CSA06 metrics | Ian Fisk, Michael Ernst | Participating CMS sites | Ian Fisk, Michael Ernst | ifisk@fnal.gov, Michael.Ernst@cern.ch | CSA meeting, June 23, CSA06 Wiki | Pending - more details can be found in July 19 CMS Task Force meeting |
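On the FTS channel architecture question above: FTS of this vintage routed each transfer over a named channel of the form SOURCE-DEST, with STAR acting as a wildcard for either end, so a dedicated Tier-1 <=> Tier-2 channel takes precedence over a catch-all. A minimal sketch of most-specific-channel matching follows; the fallback order and the site names in the comments are illustrative assumptions, not the FTS specification.

```python
def pick_channel(src, dst, channels):
    """Pick the most specific FTS channel for a src -> dst transfer.

    Channels are named "SRC-DST" and "STAR" is a wildcard, so a
    dedicated channel (e.g. a hypothetical "FNAL-UFL") wins over
    "STAR-UFL" or the catch-all "STAR-STAR".  The fallback order
    here is an assumption for illustration only."""
    for name in (f"{src}-{dst}", f"STAR-{dst}", f"{src}-STAR", "STAR-STAR"):
        if name in channels:
            return name
    return None
```

This is the kind of matrix the July 18 meeting had to pin down: which Tier-1 <=> Tier-2 pairs get dedicated channels and which fall through to wildcards.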
- CMS CSA06 resource requirements:
- Tier0 - 1200 CPUs and 180 TB
- Tier1s - minimum of 150 CPUs and 70 TB per participating site; total of 1500 CPUs and up to 200 TB/site
- Tier2s - minimum of 20 CPUs and 5 TB per participating site; total of 2500 CPUs and up to 25 TB/site
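A quick arithmetic cross-check of the CSA06 figures above: dividing the total CPU commitment by the per-site minimum bounds the number of sites that could participate if every site contributed only the minimum. These are derived bounds, not CSA06 planning numbers.

```python
# CSA06 resource figures quoted above
t1_min_cpu, t1_total_cpu = 150, 1500   # per-site minimum, overall Tier-1 total
t2_min_cpu, t2_total_cpu = 20, 2500    # per-site minimum, overall Tier-2 total

# Upper bound on participating sites if every site met only the minimum
max_t1_sites = t1_total_cpu // t1_min_cpu   # 1500 / 150 = 10
max_t2_sites = t2_total_cpu // t2_min_cpu   # 2500 / 20 = 125
```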
ALICE Actions
ALICE Sites
- T0(CERN); T1s (IN2P3, GridKA, CNAF, SARA? RAL? NDGF?); T2s (Torino, Legnaro, Bari, Cagliari, Catania (CNAF), Subatech, Clermont(IN2P3), GSI, SPbSU, PNPI (St.Petersburg), ITEP, KI, JINR (GridKA))
LHCb Actions
ROOT/POOL data access to SE problem
The problem is with the dCache client library, which checks only the first 56 CAs. If you are unlucky (your CA is beyond these first 56), any interaction with the dCache server is not authenticated and fails. This problem has been experienced at IN2P3 and NIKHEF. LHCb wants assurance that data can be accessed directly from the SE everywhere (without copying it to local disk on the WN).
Update 3 July - a fix for this is scheduled for gLite 3.0.2.
Update 19 July - As IN2P3 moved to gsidcap, which was not supported by ROOT until recently, LHCb will temporarily use the disk endpoint there.
Update 25 July - gsidcap is not supported by ROOT in the AA until the next release; LHCb will need to re-build its applications.
Although not advertised, the Lyon disk SE does support insecure dcap. LHCb's "hack" to use this failed - currently under investigation by LHCb.
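The authentication failure described above can be illustrated with a toy model. Only the 56-CA limit comes from the report; the function and the CA names in the test are purely illustrative.

```python
CLIENT_CA_LIMIT = 56  # the dCache client library only loaded this many CAs

def authenticates(user_ca, installed_cas, limit=CLIENT_CA_LIMIT):
    """Toy model of the bug: authentication succeeds only if the user's
    CA falls within the first `limit` entries of the installed CA list.
    A site whose CA happened to sort beyond position 56 therefore saw
    every dCache interaction fail unauthenticated."""
    return user_ca in installed_cas[:limit]
```

This also explains why the failure looked site-dependent (seen at IN2P3 and NIKHEF but not everywhere): whether a given VO's CA landed inside the truncated window depended on the ordering of the locally installed CA list.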