Week of 180507

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a particular topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information, or invite the right people to the meeting.

Best practices for scheduled downtimes

Monday

Attendance:

  • local: Julia (WLCG), Kate (WLCG, DB), Maarten (WLCG, ALICE), Borja (monit), Alberto (monit), Gavin (comp), Roberto (storage)
  • remote: Ville S (NDGF), Onno (NL-T1), Xavier (KIT), Tommaso (CMS), Dave (FNAL), Marcelo (CNAF), Balazs (MW), Xin (BNL), Sabine (ATLAS), Di (TRIUMF), Pepe (PIC)

Experiments round table:

  • ATLAS reports ( raw view) -
    • Overall smooth running
      • for jobs: above 300k running job slots
        • relatively low level of analysis, production running smoothly
      • and for data management
        • transfer rates were increased during the week, reaching almost 2.3 PB per day; rolled back on Friday to normal levels to ensure good performance for T0 data export
    • 2 important issues during the week
      • ATLAS EOS crashed on Friday morning due to high load
        • caused T0 export to go down
        • high load source understood, alternative being worked on
      • voms-proxy operational troubles during the week (proxy was not renewed)
        • interfered with PanDA on Wednesday May 2: all analysis queues blacklisted
        • interfered with Rucio this weekend: all deletions and transfers stopped on Sunday morning
        • origin of the problems traced to the deployment by CERN-IT of the Java-based v3 VOMS packages from EPEL
        • problem fixed by rolling back to the UMD version of voms-proxy
Gavin commented that CERN is investigating the proxy issues. Maarten remarked that the EPEL packages were renamed and their post-installation actions are non-trivial; apparently the upgrade was not fully tested. The new version should now be in the production repo. The issues had previously been fixed by reinstalling the RPMs, and the fix should be permanent after the renaming.

  • CMS reports ( raw view) -
    • a very productive week: resources for production used at 100%, Tier-0 processing incoming data
    • very few notable issues:
    • Upgrade to singularity (critical security issue):
      • as of Friday, only 22 sites left; checking again today at the facilities meeting; should be nearly done (as of this morning, 8 tickets still open)
Maarten remarked that EGI sites have 1 week to react in case of a critical vulnerability, so sites should have reacted by today even without experiment involvement. It is a standard procedure, but the statement was accidentally omitted from the advisory. EGI will be opening site tickets where the probes discover Singularity versions older than 2.5.0.
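The version threshold the EGI probes apply could look like the following. This is a hypothetical sketch, not the actual probe code; only the 2.5.0 minimum comes from the discussion above, and the `version_ok` helper and the version-string formats are assumptions.

```python
# Hypothetical sketch (not the real EGI probe): check whether a reported
# Singularity version string meets the 2.5.0 security minimum.

MIN_VERSION = (2, 5, 0)

def version_ok(version_string):
    """Return True if the reported version is >= 2.5.0."""
    # Assume version strings like "2.4.2" or "2.5.0-dist";
    # keep only the leading dotted-numeric part.
    core = version_string.split("-")[0]
    parts = tuple(int(p) for p in core.split("."))
    return parts >= MIN_VERSION

print(version_ok("2.4.2"))       # False -> site would be ticketed
print(version_ok("2.5.0-dist"))  # True  -> site is compliant
```

Tuple comparison handles the per-component ordering, so e.g. 2.10.0 would correctly compare above 2.5.0.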

  • ALICE -
    • NTR

Sites / Services round table:

  • ASGC: nc
  • BNL: NTR
  • CNAF: technical problem with the tape library: writing to disk is possible, but reading is not, and no tape operations are possible. Downtime declared (GOCDB:25293).
  • EGI: nc
  • FNAL: NTR
  • IN2P3: nc
  • JINR: NTR
  • KISTI: nc
  • KIT:
    • We've learned that the dCache pools have to be IPv4/IPv6 dual-stack, too; otherwise transfers get proxied by the doors under certain circumstances. Because of that, GridKa has to rethink its IPv6 deployment strategy from the beginning once more. That naturally takes some time, so we have upgraded the current CMS door nodes with a dual 10 GbE interface.
      • Some discussion on whether this behavior could be considered a bug and/or whether a fix could be found; to be followed up with the devs
  • NDGF: NTR
  • NL-T1: NTR
  • NRC-KI: nc
  • OSG: nc
  • PIC: NTR
  • RAL: nc
  • TRIUMF: NTR
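The dual-stack requirement KIT raised can be verified per node: a pool avoids door-proxied transfers only if its hostname resolves to both an IPv4 and an IPv6 address. A minimal sketch of such a check, assuming standard DNS resolution (the `is_dual_stack` helper and any pool hostname passed to it are illustrative, not part of the GridKa setup):

```python
import socket

def address_families(hostname):
    """Return the set of address families a host resolves to."""
    infos = socket.getaddrinfo(hostname, None)
    return {info[0] for info in infos}

def is_dual_stack(hostname):
    """True if the host has both an A (IPv4) and an AAAA (IPv6) record.
    Pools lacking one family would see transfers proxied via the doors."""
    families = address_families(hostname)
    return socket.AF_INET in families and socket.AF_INET6 in families

# Example with a hypothetical pool node name:
# is_dual_stack("pool-01.example.gridka.de")
```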

  • CERN computing services: NTR
  • CERN storage services: NTR
  • CERN databases:
    • NTR
  • GGUS: NTR
  • Monitoring:
    • Draft reports for April availability sent around
  • MW Officer: xrootd 4.8.3 is available in EPEL testing.
  • Networks: NTR
  • Security: Please update Singularity to 2.5.0

AOB:

Topic revision: r19 - 2018-05-07 - MaartenLitmaath