Week of 181210
WLCG Operations Call details
- For remote participation we use the Vidyo system. Instructions can be found here.
General Information
- The purpose of the meeting is:
- to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
- to announce or schedule interventions at Tier-1 sites;
- to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
- to provide important news about the middleware;
- to communicate any other information considered interesting for WLCG operations.
- The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
- The SCOD rota for the next few weeks is at ScodRota
- General information about the WLCG Service can be accessed from the Operations Portal
- Whenever a topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information or to invite the right people to the meeting.
Best practices for scheduled downtimes
Monday
Attendance:
- local: Alberto (Monitoring), Borja (Chair, Monitoring), Gavin (Computing), Julia (WLCG), Maarten (ALICE), Raja (LHCb), Vincent (Security)
- remote: Andrew (NL-T1), Christoph (CMS), Di (TRIUMF), Dave (FNAL), John (RAL), Miroslav (DB), Sabine (ATLAS)
Experiments round table:
- ATLAS reports -
- Smooth running after end of 2018 data-taking.
- Production benefits from CERN-P1 and T0 resources and, together with analysis, currently uses more than 400k slots
- Low level of errors, a few due to ARC Control Tower-PanDA communication instabilities
- Transfer rate is stable at 15-20GB/s
- Many data disks are full; a big deletion campaign is ongoing (400-700 files/h)
- CMS reports -
- CMS preparing for Xmas holidays
- Archiving and Reco of HI data will continue until January
- Expect a lot of MC production during Xmas holidays
- Various campaigns are being validated
- ALICE -
- Normal to high activity levels on average last week
- CERN: CASTOR overloaded by reconstruction jobs during the weekend
- ALICE is waiting for the service to be reconfigured for that usage
Maarten pointed out that the current configuration is oriented towards data writing, whereas reading is fairly slow; in the near future the idea is to serve these data directly from EOS, but they have to be migrated first.
- LHCb reports -
- Activity
- Data reconstruction for 2018 data
- User and MC jobs
- Staging data for reprocessing in 2019
- Site Issues
Asked about the progress on the tickets, as many of them seem not to have had any update for 4-5 days. Raja says he is going to look into them to see whether they are waiting for an LHCb person to answer or whether they need to be brought to the attention of someone else.
Sites / Services round table:
- ASGC: NC
- BNL: NC
- CNAF: NC
- EGI: NC
- FNAL: NTR
- IN2P3: NTR
- JINR: LAN problem resolved.
- KISTI: NC
- KIT:
- Downtime for atlassrm-fzk.gridka.de tomorrow in order to update dCache + PostgreSQL + GPFS and to roll out IPv6 as well as high availability for the SRM endpoint.
- NDGF: NC
- NL-T1:
- NRC-KI: NC
- OSG: NC
- PIC: NC
- RAL: NTR
- TRIUMF: NTR
- CERN computing services: NTR
- CERN storage services: NTR
- CERN databases: NTR
- GGUS: NTR
- Monitoring:
- Draft reports for the November availability sent around
- SAM3 computation problems for CMS solved (GGUS:138486)
- Ongoing investigation with SSB squid for ATLAS (RQF:1179141)
- MW Officer: NC
- Networks: LHCb: GGUS:138472 was followed up and the IPv6 MTU issue was fixed; please check and comment in the ticket if things still don't work as expected
- Security: NTR
AOB: