Week of 171120

WLCG Operations Call details

  • At CERN the meeting room is 513 R-068.

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 until 15:20, exceptionally until 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Web
  • Whenever a particular topic needs to be discussed at the daily meeting and requires information from sites or experiments, it is highly recommended to announce it by email to wlcg-operations@cern.ch, to make sure that the relevant parties have time to collect the required information or to invite the right people to the meeting.

Tier-1 downtimes

Experiments may experience problems if two or more of their Tier-1 sites are inaccessible at the same time. Therefore Tier-1 sites should do their best to avoid scheduling a downtime classified as "outage" in a time slot overlapping with an "outage" downtime already declared by another Tier-1 site supporting the same VO(s). The following procedure is recommended:

  1. A Tier-1 should check the downtimes calendar to see if another Tier-1 already has an "outage" downtime in the desired time slot.
  2. If there is a conflict, another time slot should be chosen.
  3. If stronger constraints do not allow choosing another time slot, the Tier-1 should point out the existence of the conflict to the SCOD mailing list and at the next WLCG operations call, to discuss it with the representatives of the experiments involved and of the other Tier-1.

As an additional precaution, the SCOD will check the downtimes calendar for Tier-1 "outage" downtime conflicts at least once during his/her shift, for the current and the following two weeks; in case a conflict is found, it will be discussed at the next operations call, or offline if at least one relevant experiment or site contact is absent.
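
As an illustration of steps 1-3 above, the calendar check can be automated against the public GOCDB programmatic interface. The following is a minimal sketch, not a production tool: it assumes the GOCDB PI "get_downtime" method returning XML with SEVERITY, START_DATE and END_DATE (epoch seconds) per downtime entry, and the Tier-1/VO map below is illustrative and incomplete.

  import urllib.request
  import xml.etree.ElementTree as ET

  # Public GOCDB programmatic interface (method name and fields assumed).
  GOCDB = "https://goc.egi.eu/gocdbpi/public/?method=get_downtime&ongoing_only=no"

  # Illustrative (incomplete) map of Tier-1 GOCDB names to supported VOs.
  TIER1S = {
      "INFN-T1":  {"alice", "atlas", "cms", "lhcb"},
      "IN2P3-CC": {"alice", "atlas", "cms", "lhcb"},
      "RAL-LCG2": {"alice", "atlas", "cms", "lhcb"},
      "pic":      {"atlas", "cms", "lhcb"},
  }

  def outages(site):
      """Return (start, end) epoch pairs for OUTAGE downtimes of one site."""
      with urllib.request.urlopen(GOCDB + "&topentity=" + site) as resp:
          root = ET.parse(resp).getroot()
      return [(int(d.findtext("START_DATE")), int(d.findtext("END_DATE")))
              for d in root.iter("DOWNTIME")
              if d.findtext("SEVERITY") == "OUTAGE"]

  downtimes = {site: outages(site) for site in TIER1S}
  sites = sorted(TIER1S)
  for i, a in enumerate(sites):
      for b in sites[i + 1:]:
          if not TIER1S[a] & TIER1S[b]:
              continue  # no common VO, so an overlap would not be a conflict
          for s1, e1 in downtimes[a]:
              for s2, e2 in downtimes[b]:
                  if s1 < e2 and s2 < e1:  # time intervals overlap
                      print("Conflict:", a, "and", b,
                            "VOs:", sorted(TIER1S[a] & TIER1S[b]))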

Links to Tier-1 downtimes

ALICE | ATLAS | CMS  | LHCb
      | BNL   | FNAL |

Monday

Attendance:

  • local: Kate (chair, WLCG, DB), Julia (WLCG), Maarten (ALICE, WLCG), Liviu (security), Vincent (security), Gavin (comp), Jesus (storage), Cedric (ATLAS), Alexey (LHCb), Alberto (monitoring), Andrea M (MW)
  • remote: Christoph W (CMS), Giuseppe (CMS), Marcelo (CNAF), Onno (NL-T1), Xavier (KIT), Victor (JINR), Xin Zhao (BNL), Di Qing (TRIUMF), Dave (FNAL), Kyle (OSG), John K (RAL), David B (IN2P3), Chi-Hsun (ASGC), Pepe (PIC), Vincenzo (EGI)

Experiments round table:

  • ATLAS reports -
    • Activities:
      • Started derivation campaign
      • Reprocessing campaign expected soonish
    • Problems
      • CNAF incident: replication from Castor of the data17_13TeV RAW hosted at INFN-T1 (75% done). Full summary of ATLAS actions:
      • SFOs filled up during the weekend
        • Problem related to an unexpected increase of the data-taking rate (3 GB/s) vs. the "normal" rate (1.2 GB/s)
        • Files on the SFOs are only deleted once they have been migrated to Castor; the EOS-to-Castor throughput peaked at 2 GB/s after FTS support increased the FTS cap for EOS to Castor (thanks!). The rate arithmetic is sketched after this report.
        • The situation looks better this morning
      • CERN-wide DB problem last night: no big impact on Rucio and PanDA
      • CERN HTCondor MCORE still not at the expected scale; a problem in the CERN batch partitioning?
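
For reference, the SFO issue above is plain rate arithmetic: data arrives faster than it can drain to Castor, so the difference piles up on the SFO disks. A minimal sketch with the rates from the report; the buffer capacity is a made-up placeholder, not the real SFO disk size.

  # Rates taken from the report above; capacity is a HYPOTHETICAL placeholder.
  INGEST_GBPS = 3.0     # observed data-taking rate (GB/s)
  DRAIN_GBPS = 2.0      # peak EOS -> Castor migration throughput (GB/s)
  CAPACITY_TB = 100.0   # hypothetical SFO buffer size (TB), for illustration

  growth = INGEST_GBPS - DRAIN_GBPS         # net backlog growth (GB/s)
  per_day_tb = growth * 86400 / 1000.0      # ~86 TB/day at 1 GB/s
  hours_to_full = CAPACITY_TB * 1000.0 / growth / 3600.0

  print("Backlog grows at %.1f GB/s (~%.0f TB/day)" % (growth, per_day_tb))
  print("A %.0f TB buffer fills in ~%.0f hours" % (CAPACITY_TB, hours_to_full))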

  • CMS reports -
    • ppRef run at 5 TeV
      • Logging ZeroBias at high rates (~30 kHz at the beginning of a fill)
      • Up to 5 GB/s logging rate
      • Significant backlogs developing in the RAW export to T1s and in the tape archival at CERN
      • Survived the Oracle glitch on Sunday evening without major impact
    • Another week of high CPU utilization: ~210k cores over the last week
    • The transfer system remains under pressure
      • Increased the PhEDEx queue depth to 200 TB, trying to submit more transfers (particularly to FNAL)
      • Some problems transferring large (~30 GB) files
        • The US region appears most troublesome due to an older FTS service version: GGUS:131836
        • Transfers tend to run into timeouts (see the submission sketch after this report)
    • Oracle database outage on Sunday evening
      • The relevant experts were luckily available to restart the affected services:
        • Tier-0 system
        • Central production agents at CERN
    • Launched a disk cleaning campaign last week to free ~5 PB
    • CNAF outage mitigations
      • Received lists of potentially affected files
      • The identified RAW data files will get another tape copy at CCIN2P3
        • Transfer requests (volume ~140 TB) are being injected today
      • Urgent GEN-SIM samples are already being reproduced (we don't have any other copy)

Maarten asked about the timeline of the FTS upgrade. Dave replied that the work to be done will be assessed this week and next.
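
For reference, a transfer such as the ~30 GB files above can be submitted through the FTS3 "easy" Python bindings while asking for a longer per-transfer timeout than the server default. This is a minimal sketch under assumptions: the endpoint and SURLs are placeholders, and the timeout job parameter is only honoured by sufficiently recent FTS releases (the report notes the US region was still on an older FTS version).

  # Sketch: submit one large-file transfer via the FTS3 "easy" bindings.
  # Requires the fts3-rest client and a valid grid proxy; the endpoint and
  # SURLs below are placeholders, not values from the report.
  import fts3.rest.client.easy as fts3

  ENDPOINT = "https://fts3.cern.ch:8446"                    # placeholder
  SRC = "gsiftp://source.example.org/store/data/30GB.root"  # placeholder
  DST = "gsiftp://dest.example.org/store/data/30GB.root"    # placeholder

  context = fts3.Context(ENDPOINT)
  transfer = fts3.new_transfer(SRC, DST)
  job = fts3.new_job([transfer],
                     verify_checksum=True,
                     retry=3,
                     timeout=7200)  # 2 h; assumed present in recent FTS only
  print(fts3.submit(context, job))  # prints the job id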

  • ALICE -
    • High to very high activity on average
    • CERN: nearly all of the 1.2 M files lost from the EOS namespace were recovered, thanks!
      • Some calibration DB files were found to have zero size; this is being followed up.
    • CERN: EOS failures due to one link to Wigner being down on Thu/Fri (OTG:0040930).
    • CERN: CASTOR incident Sun evening due to DB outage (OTG:0040966).

  • LHCb reports -
    • Activity
      • Stripping validation, user analysis, MC
    • Site Issues
      • NTR

Sites / Services round table:

  • ASGC: NTR
  • BNL: NTR
  • CNAF: It is still hard to properly estimate how long CNAF will remain unavailable. The optimistic estimate is to have the site at least partially up by mid-January.
    • ~700 disks were under water
    • ~150 tapes were under water (~110 of them belong to LHC experiments)
    • The possibility of data recovery is being analyzed
  • EGI: NTR
  • FNAL: Large backlog to tape. Things started moving during the weekend after PhEDEx configuration changes.
  • IN2P3: Scheduled maintenance next Tuesday, Nov 28: the CEs and SEs will be in downtime for the whole day.
  • JINR: NTR
  • KISTI: NC
  • KIT: NTR
  • NDGF: NC
  • NL-T1: Last spring we updated our dCache pool groups to match the pledges, but neglected to update the space tokens as well. This has been corrected for LHCb and is now being corrected for ATLAS. Thanks to Philippe Charpentier for reporting it!
  • NRC-KI: NC
  • OSG: NTR
  • PIC: NTR
  • RAL: There was a power outage on the RAL site and in the wider area today at about 12:03 local time. Many systems are still offline. Power has been restored but has not yet been declared stable. We are working on recovering systems; further updates will be via broadcasts and the GOCDB.
  • TRIUMF: NTR

  • CERN computing services: NTR
  • CERN storage services: NTR
  • CERN databases: Multiple databases down or degraded due to a storage issue (OTG:0040966)
  • GGUS: NTR
  • Monitoring:
    • Final reports for the Oct 2017 availability sent around
  • MW Officer: NTR
  • Networks: NTR
  • Security: NTR

AOB:

  • We would like to announce that the first workshop of the WLCG Security Operations Centers working group will take place at CERN on the 11th (afternoon) and 12th (all day) of December 2017 (https://indico.cern.ch/event/676160/). The workshop format will be that of a hands-on hackathon with the aim of helping attendees with deployment of security tools like Bro and MISP at their local sites.