Week of 240226

WLCG Operations Call details

  • The connection details for remote participation are provided on this agenda page.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • Whenever a particular topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to the wlcg-scod list (at cern.ch), so that the SCOD can make sure the relevant parties have time to collect the required information or invite the right people to the meeting.

Best practices for scheduled downtimes

Monday

Attendance:

  • local: -
  • remote: Andrew (TRIUMF), Borja (Chair, Monitoring), Brian (RAL), Carmen (CNAF), Christoph (CMS), Concezio (LHCb), Darren (RAL), Dave (FNAL), Giacomo (Computing), Jens (NDGF), Julia (WLCG), Maarten (ALICE), Michal (ATLAS), Ville (NDGF), Xavier (KIT)

Experiments round table:

  • ATLAS reports -
    • Activities:
      • DC24 finished. A new ATLAS CERN - T1s data challenge is to be planned.
    • Issues:
      • ATLAS FTS was overwhelmed by too many concurrently submitted transfers (DC24)
      • 6k files lost on SARA-MATRIX tape - replicas declared as bad by DDM Ops
      • "Archive polling call failed: curl error (6): Couldn't resolve host name" transfers failures from CERN-PROD_TZDISK to CERN-PROD_RAW (GGUS:165400)
      • "copy timeout of 601s" transfer failures to FZK-LCG2 (GGUS:165393)
        • multiple transfer limits were increased in dCache
      • "Server Error" transfer failures from TRIUMF-LCG2 (GGUS:165364)
        • caused by high load plus a limited number of mover slots
      • A large number of staged files remain PINNED in the tape buffer at TRIUMF-LCG2 after being copied to other sites (GGUS:165404)

  • CMS reports -
    • Main focus last week was on DC24
      • In the end, rather successful with respect to the target rates
      • Quite a few details to look at in the aftermath
      • No obvious impact on ongoing production or user activity

  • ALICE
    • High activity levels on average.
    • No major issues.
    • Raw data replication to the tape SEs at T0 and T1 sites has run without interference from DC24.
      • It is expected to finish in early March.
      • Jens: Is there any place where the status can be tracked?
      • Maarten: In the link you can get an overview of the ongoing rate; is this interfering with your plans, e.g. downtime for maintenance?
      • Jens: No, more general curiosity

Sites / Services round table:

  • BNL: NTR
  • CNAF: NTR
  • EGI: NC
  • FNAL: NTR
  • IN2P3: NC
  • JINR: NTR
  • KISTI: NC
  • KIT:
    • Network issues affecting only CMS on Tuesday (GOCDB:35058).
    • GGUS:165393: ATLAS hit a concurrency limit for third-party-copy transfers. Therefore we raised the caps and needed a brief downtime (GOCDB:35070) to restart all dCache pools.
  • NCBJ: NC
  • NDGF: Some progress on the severely broken FTS third-party transfers to NDGF. It appears the transfers were being killed by the haproxy in front of the HTTPS service, which had a 50 s timeout for idle connections. This was not an issue in dCache 8.2, but after the upgrade all transfers were killed after 50 seconds (or more; why is not understood). The reason seems to be missing "transfer markers" from dCache: in 8.2 dCache sends such a message every five seconds, but this is missing in 9.2. The dCache team is investigating; a sketch of the kind of haproxy timeout settings involved follows below.
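    • (Illustration only, not from the minutes) The settings in question are haproxy's idle-timeout directives; a minimal sketch, assuming a generic haproxy in front of a dCache WebDAV door. All names, ports and values below are hypothetical, not the actual NDGF configuration:

        # haproxy.cfg (sketch): raise idle timeouts so that HTTP-TPC transfers
        # which send no data or markers for a while are not cut off after 50 s
        defaults
            mode    http
            timeout connect 10s
            timeout client  3600s   # was 50s in the problematic setup
            timeout server  3600s   # was 50s in the problematic setup

        frontend webdav_front
            bind *:443 ssl crt /etc/haproxy/host.pem
            default_backend webdav_doors

        backend webdav_doors
            server door1 dcache-door1.example.org:2880 check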
  • NL-T1: NTR
  • NRC-KI: NC
  • OSG: NC
  • PIC: NC
  • RAL:
    • For DC24, multiple network/OS settings were tuned on the storage gateways. What do other sites use for ECN (Explicit Congestion Notification)? A sysctl sketch follows below. The LHCOPN connection was repaired during DC24.
    • Maarten: Better to raise that question in the dedicated forum for LHCONE/LHCOPN
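    • (Illustration only, not from the minutes) On Linux, ECN for TCP is controlled by the net.ipv4.tcp_ecn sysctl (0 = off, 1 = also request ECN on outgoing connections, 2 = accept ECN when requested by the peer, the usual default); a minimal sketch of checking and enabling it:

        # show the current ECN mode
        sysctl net.ipv4.tcp_ecn
        # enable ECN for outgoing connections as well
        sysctl -w net.ipv4.tcp_ecn=1
        # persist across reboots
        echo 'net.ipv4.tcp_ecn = 1' > /etc/sysctl.d/90-ecn.conf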
  • TRIUMF: NTR

  • CERN computing services:
  • CERN storage services: NC
  • CERN databases: NC
  • GGUS:
    • A new release is planned for Wednesday this week
      • Release notes
      • A downtime has been scheduled from 07:00 to 09:00 UTC
      • Test alarms should be submitted as usual
  • Monitoring: NTR
  • Middleware: NTR
  • Networks: NC
  • Security: NC

AOB:
