Week of 180326
WLCG Operations Call details
- For remote participation we use the Vidyo system. Instructions can be found here.
General Information
- The purpose of the meeting is:
- to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
- to announce or schedule interventions at Tier-1 sites;
- to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
- to provide important news about the middleware;
- to communicate any other information considered interesting for WLCG operations.
- The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
- The SCOD rota for the next few weeks is at ScodRota
- General information about the WLCG Service can be accessed from the Operations Portal
- Whenever a particular topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-operations@cern.ch, to make sure that the relevant parties have time to collect the required information or to invite the right people to the meeting.
Best practices for scheduled downtimes
Monday - virtual meeting
- You may provide relevant incidents, announcements etc. for the operations record.
Attendance:
Experiments round table:
Sites / Services round table:
- ASGC:
- BNL:
- CNAF:
- Two patches applied to the StoRM frontend and backend during the week (already done for LHCb)
- EGI:
- FNAL:
- IN2P3: NTR
- JINR:
- KISTI:
- KIT:
- On Tuesday we applied some changes to the network, aiming to resolve connectivity issues with LHCb pilots on the GridKa farm. Instead, however, they caused massive problems for internal and external networking throughout the GridKa systems. We swiftly reverted the change, but the damage was done: the GPFS clusters for ALICE, ATLAS and CMS had fallen apart and needed some time to recover. In the case of ATLAS and CMS, all dCache pools had to be restarted as well.
- Possibly induced by the above-mentioned incident, a single GPFS NSD server dragged down the performance of the entire storage system, which again affected ALICE, ATLAS and CMS starting on Tuesday. We did not realise this until Thursday; when we excluded that node from the storage setup, performance immediately improved dramatically. By now that server has been fixed and put back into service.
- ARC-6 had trouble for most of last week and again since Sunday evening. The former problems are not fully understood, but in the latter case the local CRL cache was not being updated, so authentication failures were the logical consequence.
- NDGF:
- NL-T1: NTR
- NRC-KI:
- OSG:
- PIC:
- RAL: A Castor downtime is planned for tomorrow, needed to patch the Oracle systems behind Castor.
- TRIUMF: NTR
- CERN computing services:
- CERN storage services:
- CERN databases:
- GGUS:
- A new release is planned for Wednesday this week
- Release notes
- A downtime has been scheduled for 06:00-09:00 UTC
- Test alarms will be submitted as usual
- Monitoring:
- MW Officer:
- Networks: NTR
- Security:
AOB:
- ATTENTION: next meeting on Tuesday April 3!
- Have a good Easter break!