Week of 240219

WLCG Operations Call details

  • The connection details for remote participation are provided on this agenda page.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally until 15:30.
  • The SCOD rota for the next few weeks is at ScodRota.
  • Whenever a particular topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to the wlcg-scod list (at cern.ch), so that the SCOD can make sure the relevant parties have time to collect the required information or invite the right people to the meeting.

Best practices for scheduled downtimes

Monday

Attendance:

  • local: Borja (Monitoring, Chair), Willy (Monitoring)
  • remote: Alessandro (CNAF), Christian (NDGF), Concezio (LHCb), Darren (RAL), David (FNAL), Dennis (NL-T1), Henryk (NCBJ), Ignacio (Computing), Jens (NDGF), Julia (WLCG), Maarten (ALICE), Michal (ATLAS), Stephan (CMS), Xavier (KIT)

Experiments round table:

  • ATLAS reports:
    • Activities:
      • DC24
    • Issues:
      • "All pools are full" transfer failures to TRIUMF-LCG2_MCTAPE (GGUS:165324)
        • A large number of staging requests reserved the available space, leaving no room for incoming data; a group of pools has been configured for data intake only
      • "SSL connect error" transfer failures to INFN-T1 (GGUS:165355)
      • "Transferred a partial file" transfer failures to NDGF-T1 (GGUS:164846)
      • slow deletions at RAL (GGUS:165358)
        • Deletion is functional (about 20 TB/hour) but apparently cannot keep up with the rate of writing. Looking at the details, the average deletion duration at RAL is longer than at other sites (some deletion attempts at RAL take tens of seconds); see the deletion-rate sketch after this report.
      • GOCDB downtimes declared for the SRM-less tape endpoint also blacklist the disk RSEs
      • FTS ATLAS service degradation on Wednesday
        • Two major incidents: from 11:50 to 12:30 and from 15:30 to 19:30 (CET)
        • Rucio's ability to submit transfers was greatly reduced
        • Related to a manual cleanup of the FTS DB
      • Jobs at CERN-HPC fail with "Missing dependency : gfal2"
        • alma9: the openldap-compat rpm is missing (see the check sketched below)
        • ask the developers to make the openldap dependencies optional?
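
  The RAL deletion observation above can be made concrete with a back-of-the-envelope (Little's law) estimate. The Python sketch below uses illustrative assumptions for the average file size, per-delete latency and worker concurrency; only the ~20 TB/hour figure comes from the report:

      TB = 1e12  # bytes

      avg_file_size   = 4e9   # bytes; ASSUMED average file size (4 GB)
      avg_delete_time = 10.0  # seconds; ASSUMED average per-file deletion latency
      concurrency     = 20    # ASSUMED number of parallel deletion workers

      # Sustained rate = (deletions per second across all workers) * file size
      deletes_per_s = concurrency / avg_delete_time
      throughput_tb_per_h = deletes_per_s * avg_file_size * 3600 / TB
      print(f"sustained deletion rate: {throughput_tb_per_h:.1f} TB/h")  # 28.8 TB/h

      # If the per-delete latency grows to tens of seconds, as observed at RAL,
      # the same concurrency sustains proportionally less: at 30 s per delete
      # this drops to 9.6 TB/h, which can fall below the write rate.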

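  Regarding the CERN-HPC gfal2 failures above, a quick way to reproduce the symptom on a worker node is to try loading the gfal2 Python bindings directly. This is only a diagnostic sketch: the openldap-compat diagnosis comes from the report, while the assumption here is that the missing library surfaces as an import/loader error:

      try:
          import gfal2           # python3-gfal2 bindings
          gfal2.creat_context()  # fails if native plugins/libraries are missing
          print("gfal2 OK")
      except (ImportError, OSError) as exc:
          # On alma9, a missing openldap-compat rpm would typically show up here
          # as an unresolved shared-library error while loading the module.
          print("gfal2 unavailable:", exc)
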
  • ALICE
    • NTR

  • LHCb reports:
    • DC24 going well
      • running with storage tokens on some sites
      • T0-->T1 transfers and archival to tape ~OK; target reached on all sites for at least some time; analysis is ongoing
      • main issue: overload of IAM servers
      • ready to exercise tape staging this week
    • slow FTS transfers between NL-T1 and RAL (GGUS:165359)
      • Network issue somewhere; the RAL gateway received congestion indications from the FTS agent, which triggered a reduction in the transfer rate. However, if this FTS feature is turned off, transfer rates go up (see the sketch below).
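
  FTS's actual congestion-handling algorithm is not described in the ticket. The generic additive-increase/multiplicative-decrease (AIMD) sketch below only illustrates the kind of feedback loop reported, and why disabling it raises rates when the congestion signal is spurious; all names and parameters are assumptions:

      def next_rate_limit(current_mbps: float, congestion_seen: bool,
                          increase_mbps: float = 50.0,
                          decrease_factor: float = 0.5,
                          max_mbps: float = 10_000.0) -> float:
          """Cut the limit sharply on a congestion signal, recover it slowly."""
          if congestion_seen:
              return current_mbps * decrease_factor           # multiplicative decrease
          return min(current_mbps + increase_mbps, max_mbps)  # additive increase

      # Even occasional (possibly spurious) congestion signals keep the limit
      # well below link capacity, matching the observed low transfer rates.
      rate = 10_000.0
      for step, congested in enumerate([False, True, False, False, True, False]):
          rate = next_rate_limit(rate, congested)
          print(f"step {step}: limit {rate:.0f} Mb/s")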

Sites / Services round table:

  • BNL: NC
  • CNAF: NTR
  • EGI: NC
  • FNAL: NTR
  • IN2P3: NC
  • JINR: NTR
  • KISTI: NC
  • KIT:
    • Mechanical failure of the tape library on Thursday morning. Downtimes were announced in GOCDB for CMS and LHCb (GOCDB:35040), and later also for ATLAS (GOCDB:35042). The problem was resolved in the evening by an external technician and the downtimes were closed on Friday morning.
    • One of our LHCOPN links went offline on Thursday; since there was no full redundancy, the maximum throughput was significantly degraded. The cause is under investigation.
  • NCBJ: NTR
  • NDGF:
    • Major issues with aborted (CANCELLED) third-party transfers after the dCache 9.2 upgrade. Investigation ongoing, but nothing conclusive yet. The problem is exacerbated by frequent pool issues at some of the federated storage sites (GGUS:164846).
    • Upcoming interventions:
      • Short interruption on Wednesday at noon, while some experimental changes are made to pools at UIO; only ATLAS data affected (GOCDB:35046).
      • For the whole workday on Thursday, the same pools need to be physically moved to a new location; the downtimes are in GOCDB (GOCDB:35047).
  • NL-T1: Nikhef network outage due to overheating problems in a core router; the chassis was replaced on Thu 15 Feb 2024 (GOCDB:35041)
  • NRC-KI: NC
  • OSG: NC
  • PIC: NC
  • RAL: NTR
  • TRIUMF: NTR (Canadian holiday today)

  • CERN computing services: NTR
  • CERN storage services: NC
  • CERN databases: NC
  • GGUS: NTR
  • Monitoring: NTR
  • Middleware: NTR
  • Networks: NC
  • Security: NC

AOB:
