Week of 100920

Daily WLCG Operations Call details

To join the call, at 15.00 CE(S)T Monday to Friday inclusive (in CERN 513 R-068) do one of the following:

  1. Dial +41227676000 (Main) and enter access code 0119168, or
  2. To have the system call you, click here
  3. The scod rota for the next few weeks is at ScodRota

WLCG Service Incidents, Interventions and Availability, Change / Risk Assessments

VO Summaries of Site Usability: ALICE ATLAS CMS LHCb
SIRs, Open Issues & Broadcasts: WLCG Service Incident Reports, WLCG Service Open Issues, Broadcast archive
Change assessments: CASTOR Change Assessments

General Information

General Information: CERN IT status board, M/W PPSCoordinationWorkLog, WLCG Baseline Versions, WLCG Blogs
GGUS Information: GgusInformation
LHC Machine Information: Sharepoint site - Cooldown Status - News


Monday:

Attendance: local(Jamie, Maria, Peter, Graeme, Eddie, Patricia, Luca, Roberto, MariaDZ, Carlos, Maarten, Giuseppe, Ueda, Nicolo, Ignacio, Dirk); remote(Gonzalo, Jon, Farida, Michael, Kyle, Rolf, John, Dimitri, Andreas, Ron, Andrea).

Experiments round table:

  • ATLAS reports -
    • ATLAS/T0
      • No stable beams over the weekend, no data exported.
    • Central Services
      • Problem with production 'bamboo' service noticed Saturday morning, causing flow of production jobs to dry up. Service had not restarted after Friday reboots. Quickly fixed and being dealt with by central services team.
    • T1s
      • Long standing network issue between RAL and NDGF (GGUS:61306) finally solved - linecard problem in RAL primary link into CERN LHCOPN router. Card replaced. This issue took 29 days to resolve!
      • INFN-T1, FZK, CC-IN2P3 batch queues down over the weekend. TRIUMF analysis queue also down.
      • File access problems at SARA, dCache pool offline (GGUS:62246). Fixed.
      • File access problems at RAL, disk server offline (GGUS:62243). Ongoing.
      • FR cloud will be set offline for IN2P3 downtime.
    • T2s
      • Very many T2s had their batch systems offline over the weekend. Clouds particularly affected were FR, DE, IT. Downtimes were only partly entered by many T2s and often not entered accurately (e.g., SRM marked down when it was up (and vice versa); incomplete lists of CEs marked as down, etc.). Many T2 queues have been marked as offline based on not running pilots. Please ensure that ATLAS cloud contacts know when sites are back up so they can be set online again.
    • Grid
      • We estimate that over the whole grid we lost ~50% of our capacity this weekend!
      • Highlight GOCDB issues in post-mortem of this event

  • CMS reports -
    • Experiment activity
      • Ready for stable beams on Tuesday
    • CERN
      • Known Kernel issue keeping everyone busy...
      • Upgrade/reboot of all CMS VOBoxes ongoing, managed by the CMS VOC.
      • Clean-up of Castor Pool CMSPRODLOGS
        • on-going (more details tomorrow)
      • Follow up on GGUS:61706 (CMS Job Robot failures at CERN)
      • Follow up of CMSR failure from Monday
        • After a long and careful investigation, CMS + the CERN/IT DBA team have now identified the source of the CMSR (Offline DB) failure: a single host submitting a large "DBS" query every 2 seconds for 1.5 hours. Besides notifying the user and preventing such queries from being repeated, the most important action is to make the system more robust (a generic rate-limiting sketch is included at the end of this report). While no solution exists with the current DBS code, this will be fixed in the future "DBS3" version (foreseen for 2011).
    • Tier1 issues
      • 3 CMS Tier-1s have been in downtime since Friday, partly due to the known kernel issue, partly due to scheduled downtime
      • The CMS Tier-1 production team avoided sending the newly launched large 2010-Re-Reco workflows to IN2P3, CNAF and KIT
      • All reprocessing was (and, later, skimming will be) sent to ASGC, FNAL, PIC and RAL.
      • Note that RAL will also be in downtime during this reprocessing period (originally for a firewall upgrade, 2010-09-21 06:30 to 09:00, but it will also include the WN kernel patch when available). CMS nevertheless decided to go ahead, because the alternative would add even more load to the FNAL queues. CMS will suspend submissions at RAL before the downtime commences and resume processing when the site comes back.
    • Tier2 Issues
      • Ongoing production has been noticeably affected by the widespread downtimes. The (T2+T1) regions affected most so far are KIT, IN2P3 and RAL: according to the dashboard, for those 3 regions only 21.5k jobs were submitted since Friday (17.09 - 20.09), compared to 53.7k for the previous 3 days (14.09 - 17.09), which gives a very rough idea of the impact.
    • AOB
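    • Rate-limiting sketch (follow-up to the CMSR/DBS item above): a minimal, purely illustrative example of per-client rate limiting on the server side. The function names and the 30-second interval are assumptions; this is not part of DBS nor of the planned DBS3 fix.

      # Minimal sketch of per-client rate limiting, one way to keep a single host
      # from submitting a heavy query every couple of seconds. Illustrative only:
      # allow_heavy_query/handle_query and MIN_INTERVAL are hypothetical names.
      import time

      MIN_INTERVAL = 30.0     # assumed minimum seconds between heavy queries per client
      _last_heavy_query = {}  # client host -> time of its last accepted heavy query

      def allow_heavy_query(client_host):
          """Return True if this client may run another heavy query now."""
          now = time.monotonic()
          last = _last_heavy_query.get(client_host)
          if last is not None and now - last < MIN_INTERVAL:
              return False
          _last_heavy_query[client_host] = now
          return True

      def handle_query(client_host, run_query):
          # Reject (or queue) the request instead of letting one host saturate the DB sessions.
          if not allow_heavy_query(client_host):
              raise RuntimeError("rate limit exceeded for %s; retry later" % client_host)
          return run_query()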

  • ALICE reports - Affected by the security updates, not only at sites but also central services (including the server hosting this report!). During the weekend, transfers to IN2P3 and CNAF continued until those sites started draining for the upgrades. Killed and resubmitted agents to move them to upgraded nodes. RAL will be out tomorrow 07:30 to 10:00 for a planned intervention - the firewall will be updated. Tier2s: handling of this issue is quite manual; a new AliEn version will handle this.

  • LHCb reports - Remaining merging productions + user analysis. Recovered some space. LHCb is very concerned about the status of its T1s as regards the kernel patch to be applied, considering the data coming soon.
    • T0
    • T1 site issues:
      • RAL: A lot of merging jobs are stalling while trying to access data stored on a few disk servers that cannot cope with the load. GGUS:62242 opened to request throttling of the number of concurrent jobs on the RAL batch system.

Sites / Services round table:

  • PIC - ntr - currently deploying patch as rolling upgrade on WNs.
  • FNAL - ntr
  • BNL - ntr
  • RAL - issues covered above. Site downtime tomorrow 07:30 - 10:00 local time for firewall firmware upgrade.
  • NL-T1 - this morning saw a dCache problem on one of the pool nodes which caused a number of files to be inaccessible. Fixed this morning. Currently in downtime for the ALICE VObox to be migrated to new h/w
  • KIT - currently applying kernel patch to all WNs - expect to reopen Qs soon
  • CNAF - have already upgraded the kernel using the one provided by CERN - will reopen the site tomorrow. Storage ok - upgraded the CMS SRM. Also upgraded LSF during this downtime - will not need the additional downtime that was scheduled for the end of next week. Would like to raise a concern about the lack of coordination by WLCG on this kernel problem: every site is doing its own thing - at least some suggested guidelines for applying this patch would help.
  • IN2P3 - we will stay in downtime until the scheduled downtime starts tomorrow - no point in restarting as only short jobs could start. Deployment of the suggested patch is manual at our site - waiting for the official RH solution.
  • ASGC - kernel upgrade ongoing. Need 5-10' to reboot disk servers

  • CERN network - report about RAL-NDGF issue. Don't have more details. SIR requested.

AOB:

  • T1SCM will be held this Thursday at 15:30.

Tuesday:

Attendance: local(Lola, Nicolo, Peter, Roberto, Andrea V, Jamie, Maria, Maarten, Simone, Ueda, Luca, Jan, Eddie, Harry, Farida, MariaDZ);remote(Michael, Jon, Gonzalo, Kyle, Dimitri, Xavier, Gareth, Ronald, Jeremy, Rolf, Andrea).

Experiments round table:

  • ATLAS reports -
    • ATLAS/T0
      • No stable beams, no data export except for calibration data to calibration sites.
    • T1s
      • INFN-T1 and FZK are set back online.
      • FR cloud is set offline for the IN2P3 downtime. We hope the currently running jobs finish before the downtime.
      • TRIUMF analysis queue offline
      • File access problems at RAL, disk server offline (GGUS:62243). Fixed.
      • Can we hear about the plans/status for the kernel patch at the T1s which did not report yesterday, and get a summary of the overall situation?

  • CMS reports -
    • Experiment activity
      • Ready for data taking, planned for Wednesday, starting with HI test
    • CERN
      • Clean-up of Castor Pool CMSPRODLOGS
        • on-going today/tomorrow
      • Info about CMS Tier-0 :
        • switched to XROOTD: an improvement, since there is no need to go through the LSF-based CASTOR scheduler, hence saving a per-file overhead. So far so good!
      • Follow up on CMS Job Robot failures at CERN ("Maradona errors")
    • Tier1 issues
      • 2010 Re-Reco on-going at PIC, RAL, FNAL and ASGC
        • Other 3 Tier-1s that are coming back from downtime may be included as well
    • Tier2 Issues
    • MC production
      • Status of large 1000M production : 423M RAW events produced, available at T1s: 383M RAW events
    • AOB
      • Starting from tomorrow, new CMS Computing Run Coordinator reporting here will be Oliver Gutsche

  • ALICE reports -
    • In terms of raw data transfers there is no activity at this moment. All T1 SE systems are reporting well in ML
    • Large amount of agents reported today by the resource BDIIs of the CREAM systems at CERN. Our suspicion is that these agents may not have any CPU consumption. The experts' opinion has been requested
    • Restart of all services at CNAF this morning
    • RAL: Outage of the site this morning while the site firewall is updated
    • IN2P3-CC: Site still in scheduled downtime (network maintenance operations)
    • FZK: CREAM systems are being updated to SL5/CREAM1.6
    • Common operations for the T2 sites

  • LHCb reports - Awaiting real data and commissioning the workflow for the new stripping. Launched several MC productions. User analysis.
    • T0
      • none.
    • T1 site issues:
      • RAL: A lot of merging jobs are stalling while trying to access data stored on a few disk servers that cannot cope with the load. GGUS:62242. Still an issue; jobs are not running as smoothly as we are used to. The contact person should look at it.

Sites / Services round table:

  • BNL - ntr. Ueda: what is the situation with the kernel patch? A: we are waiting for the official RH release and will then apply the patch in a rolling fashion. Will keep running at full capacity until then.
  • FNAL - ntr
  • PIC - ntr
  • KIT - after the security incident, batch queues were opened yesterday. Reinstallation of the compute farm is still ongoing. Reinstalling LFCs for the DE cloud. New machines - once ready they will replace the current machines.
  • RAL - Had a site outage declared for this morning for the firewall; it went successfully. A problem with FTS (h/w fault) meant an additional hour until FTS was back. Patching etc.: waiting for the official RH release. Will do the batch system in two halves: drain one half, patch, etc. Other patching: once the relevant patches come from Oracle, some of the back-ends (e.g. LFC and 3D) will need an outage.
  • NL-T1 - ntr
  • IN2P3 - started the work; the announced downtime schedule is maintained. Will try to integrate the kernel upgrade without extending the downtime.
  • CNAF - we upgraded all kernels on CEs, WNs and UIs and reopened the site. Already have long queues of jobs arriving - fully operational and nothing strange to report.
  • ASGC - we took all T1 and T2 WNs offline for the kernel upgrade. WNs will be rebooted one by one after jobs finish; capacity will be reduced. Will work on the "AOB GGUS survey" after the kernel patch is done.
  • OSG - ntr
  • GridPP - ntr (problems unmuting the phone)

  • CERN - ntr

AOB: (MariaDZ) ASGC (see above) and NDGF please answer the GGUS ALARM ticket handling survey as per https://savannah.cern.ch/support/?116430. This information is needed for the 2010/09/23 T1 Service Coordination meeting.

Wednesday

Attendance: local(Jan, Renato, Ueda, Gavin, Andrea, Oliver, Przemez, Eduardo, Patricia, Zsolt, Roberto, Jamie);remote(Michael, Andrea, Gonzalo, Jon, Tore, Onno, Rolf, John, Xavier, Kyle).

Experiments round table:

  • ATLAS reports -
    • LHC/ATLAS/T0
      • Expecting physics runs.
      • ATLAS has launched many MC production jobs to be done rather quickly.
    • T1-T1 network issues
      • INFN-BNL network problem (slow transfers) GGUS:61440 -- any news?
      • BNL-NDGF network problem (timeouts) GGUS:61942 -- the ticket is assigned to NDGF-T1, response from the site on 2010-09-09, but no news since then. Could BNL and OPN people also look into this?
    • T1s
      • FR cloud is set back online, but not much activity yet, waiting for the end of the SRM downtime at IN2P3-CC. Analysis jobs are running.
      • question to T1s: Do any T1s have plans to apply the official kernel patch? [see in site reports]

  • CMS reports -
    • Experiment activity
      • Ready for data taking
      • HI pre-tests in Cosmics mode showed higher CASTOR input/output load, as expected
    • CERN
      • reboot of various VO boxes
      • Clean-up of Castor Pool CMSPRODLOGS
        • on-going today/tomorrow
    • Info about CMS Tier-0 :
      • switched to XROOTD: an improvement, since there is no need to go through the LSF-based CASTOR scheduler, hence saving a per-file overhead. So far so good!
    • Follow up on CMS Job Robot failures at CERN ("Maradona errors")
    • Tier1 issues
      • various workflows from data rereco, MC redigi/rereco and full MC production
      • expect the patch release to be installed today so it can be used
      • only T1 site remaining in downtime: IN2P3
    • Tier2 Issues
      • MC production
        • Status of large 1000M production : 431M RAW events produced, available at T1s: 390M RAW events

  • ALICE reports - PRODUCTION STATUS: Large Pb-Pb MC production + reconstruction activities. Restart of the T0-T1 raw data transfers (with IN2P3-CC and CNAF). Good behavior of all T1 SE services (no issues reported by MonALISA)
    • T0 site
      • GGUS:62286: confusing information reported today by all the CREAM services (BDII status: Production, while simultaneously the queues do not accept any submission requests) - badly affecting reconstruction activity.
    • T1 sites
      • GGUS:62288 (INFN-T1). One of the CREAM services is timing out at submission time
    • T2 sites
      • Checking the T2 sites which have already applied the security patches and, where necessary, applying manual operations to put them back in production

  • LHCb reports - Awaiting real data and commissioning the workflow for the new stripping.
    • T0
      • CERN: CASTOR piquet called on Sunday night due to a backlog formed on LSF because a disk on the default pool was not working; ~600 requests from a single LHCb user were killed by rebooting the node. SIR available at https://twiki.cern.ch/twiki/bin/view/CASTORService/IncidentsDiskserverLSF20Sep2010
      • Q (Jan): what was the impact on LHCb's activities? Roberto: very low - other pools were affected as well, but no major activities going on at that time.
    • T1 site issues:
      • RAL: The failures reported in the last days seem to be due to the merging jobs / workflow. Other job types (including user jobs) run fine when the merging workflow spikes do not hang up the system. Requested to throttle, on the DIRAC side, the number of merging jobs at RAL to protect the whole site (see the throttling sketch after this report). Let's wait for CASTOR 2.1.9 and the optimizations coming with internal gridftp and xrootd.
      • CNAF: since last weekend the activity from CNAF, NIKHEF and RAL has saturated the session limit on the CNAF LHCb CONDDB. The DBAs at CNAF have modified the DB parameters and scheduled a quick restart of the services (mandatory) to bring the new configuration online. The intervention took place on the LCG 3D system at 3pm CEST yesterday
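    • Throttling sketch (refers to the RAL merging item above): a generic, minimal illustration of capping the number of concurrently running merging jobs per site. This is not DIRAC code; MAX_MERGING_JOBS, count_running() and submit_job() are hypothetical placeholders for whatever the real workload system provides.

      # Generic sketch of site-level throttling of merging jobs.
      MAX_MERGING_JOBS = {"LCG.RAL.uk": 50}   # assumed per-site cap (site name and value are illustrative)

      def submit_merging_jobs(site, waiting_jobs, count_running, submit_job):
          """Submit waiting merging jobs only while the site is below its cap."""
          cap = MAX_MERGING_JOBS.get(site)
          for job in waiting_jobs:
              if cap is not None and count_running(site) >= cap:
                  break                        # leave the rest queued; protects the few loaded disk servers
              submit_job(site, job)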

Sites / Services round table:

  • BNL:
    • Kernel patch. Will apply the official RH kernel in a rolling manner (1/3 of job capacity at a time). 3600 slots should always be available for ATLAS, set to give production the largest share. Expect the drain/reboot to be completed by Friday.
    • Network problems (BNL -> NDGF and INFN) were discussed at the ADC meeting; all info is now provided to the OPN group, waiting for coordination from that group. Eduardo: confirmed, people are working on the BNL-NDGF problem and will check on the BNL-INFN one.

  • INFN-T1: ALICE CREAM issue - the service was restarted. A similar issue was noted with LHCb, plus a problem accessing the software area. Running the CERN patched kernel.

  • PIC: NTR

  • FNAL: NTR

  • IN2P3: Still in maintenance downtime, going as planned. Expected to be back tomorrow morning. Have applied RH patch for security problem.

  • NDGF: NTR

  • NL-T1: NTR. Plan to stay on the CERN patched kernel for now.

  • RAL: NTR. Planning to upgrade to RH kernel on a rolling basis.

  • KIT: NTR. Using the CERN patch; aim to move to the RH one on a rolling basis.

  • OSG: Machines which are exposed have been patched with RH official kernel. Continuing on internal machines.

Central services

  • Castor: 5-minute downtime with a scheduled switch intervention. Another one, affecting ALICE, has been scheduled.

  • Batch: on the CERN patched kernel. No immediate plans to move off it. Will slowly migrate to official kernels as new patches appear.

  • DB: NTR

  • Network: NTR

AOB:

  • Andrea INFN: Q. Will sites be forced to update to the vanilla RH kernel after already applying the CERN patched one? Jamie: the T1 coordination meeting tomorrow will discuss this.

  • Maria: NDGF - please answer on alarm ticket handling for T1 coord. meeting tomorrow.

Thursday

Attendance: local(Steve, Ueda, Maria, Jamie, Jacek, Harry, Patricia, Nicolo, Lola, Oliver, Roberto, Zsolt, Maarten, MariaDZ, Stephane);remote(Andrea, Michael, Jon, Kyle, John, Ronald, Tore, Xavier, Eter, Gonzalo).

Experiments round table:

  • ATLAS reports -
    • LHC/ATLAS
      • Physics runs.
      • A small scale but rather urgent reprocessing ongoing.
      • A set of MC production jobs to be done rather quickly.
    • T0/ATLAS central services
      • Production monitoring system got stuck 16:40 - 17:40.
        • The service came back by itself while the experts are investigating.
        • Two problematic blocking sessions on ATLR DB identified and killed by ATLAS+IT DBAs.
        • There is nothing special in these queries (they have been doing the same for a long time) according to the experts.
    • T1-T1 network issues
      • INFN-BNL network problem (slow transfers) GGUS:61440. [ Andrea - people are working on this but should give an update in the ticket ]
      • BNL-NDGF network problem (timeouts/killed) GGUS:61942.
      • We submitted a ticket to test/understand how to treat such tickets. GGUS:62368
        • the aim of the first step is to involve the OPN people without waiting for weeks.
        • we found a support unit 'NetworkOperations' in GGUS and are trying with this. -- Is this the proper one?
    • T1s
      • IN2P3-CC set back online. Production and Analysis jobs started running
      • INFN-T1 -- transfer errors to the site GGUS:62334 -- no response, but the errors have disappeared.

  • CMS reports -
    • Experiment activity
      • Data taking, HeavyIon test was successful
    • CERN - no issues
    • Tier1 issues
      • various workflows from data rereco, MC redigi/rereco and full MC production
      • IN2P3 comes out of downtime today(?)
      • FTS settings for CERN-FNAL in the CERN FTS:
        • it should be: VO 'cms' share is: 100 and is limited to 284 transfers
        • but we noticed yesterday that it was: VO 'cms' share is: 100 and is limited to 100 transfers
        • something resets the transfer limit, possibly reboots; we set it again to 284 (a rough monitoring sketch follows this report)
        • Can this be fixed so that it is not reset again? [ Steve - yes, please open a ticket for this ]
    • Tier2 Issues
    • MC production
      • Status of large 1000M production : 465M RAW events produced, available at T1s: 416M RAW events
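    • Monitoring sketch (refers to the FTS CERN-FNAL item above): a rough check that compares the advertised channel limit with the expected value of 284. Assumptions: the glite-transfer-channel-list CLI is available in the environment and its output contains a line of the form quoted above; service endpoint and option handling are omitted.

      # Rough consistency check for the CERN-FNAL channel transfer limit.
      import re
      import subprocess

      EXPECTED_LIMIT = 284

      def check_channel_limit(channel="CERN-FNAL", vo="cms"):
          # Assumes the CLI output contains: VO 'cms' share is: 100 and is limited to <N> transfers
          out = subprocess.run(["glite-transfer-channel-list", channel],
                               capture_output=True, text=True, check=True).stdout
          m = re.search(r"VO '%s' share is: \d+ and is limited to (\d+) transfers" % vo, out)
          if not m:
              print("could not find the %s limit for channel %s" % (vo, channel))
          elif int(m.group(1)) != EXPECTED_LIMIT:
              print("limit drifted to %s (expected %d) - needs resetting" % (m.group(1), EXPECTED_LIMIT))

      if __name__ == "__main__":
          check_channel_limit()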

  • ALICE reports - PRODUCTION STATUS: Pass1 + calibration activities ongoing together with several MC (7 Pb-Pb) cycles
    • T0 site
      • Problem reported yesterday through GGUS:62286: SOLVED
      • Large number of agents running this morning with apparently zero CPU consumption. Asking the experts about the status of these agents (a generic check is sketched at the end of this report)
      • Network operation performed yesterday afternoon. This (1h) operation affected 13 CASTOR ALICE diskservers (11 in alicedisk and 2 in t0alice). No problems reported by Alice after the operation
    • T1 sites
      • All T1 sites (IN2P3-CC under operations) in production. Ticket submitted yesterday concerning CNAF: SOLVED
    • T2 sites
      • Usual and daily operations with no remarkable news
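    • Idle-agent check sketch (refers to the zero-CPU agents item above): a minimal, Linux-only way to spot processes that accumulate no CPU time, by sampling /proc/<pid>/stat twice. Generic illustration, not the ALICE monitoring code; the 60-second interval is an arbitrary assumption.

      # Report PIDs whose utime+stime did not advance between two samples.
      import os
      import time

      def cpu_ticks(pid):
          """Return utime+stime (clock ticks) for pid, or None if it vanished."""
          try:
              with open("/proc/%d/stat" % pid) as f:
                  data = f.read()
              # fields after the process name, which is enclosed in parentheses
              rest = data[data.rfind(")") + 2:].split()
              return int(rest[11]) + int(rest[12])   # utime + stime (fields 14 and 15 overall)
          except (OSError, IndexError, ValueError):
              return None

      def find_idle_pids(interval=60):
          pids = [int(d) for d in os.listdir("/proc") if d.isdigit()]
          before = {p: cpu_ticks(p) for p in pids}
          time.sleep(interval)
          return [p for p, t0 in before.items()
                  if t0 is not None and cpu_ticks(p) == t0]

      if __name__ == "__main__":
          print("PIDs with no CPU time accumulated:", find_idle_pids())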

  • LHCb reports - Awaiting real data and commissioning the workflow for the new stripping.
    • T0 - none
    • T1 site issues:
      • IN2P3: back from the downtime, re-enabled in the production mask. Problem with the s/w area when installing software (quota issue) GGUS:62379

Sites / Services round table:

  • CNAF - ntr
  • BNL - currently having maintenance on ns component of SE - causes a few transfers to fail but otherwise transparent
  • FNAL - tape system being upgraded today so tape transfers will be queued or put to pending
  • RAL - ntr
  • NL-T1 - 2 downtimes: one tomorrow morning at SARA (one of the pool nodes in maintenance for 1 h); Monday morning NIKHEF will have an at-risk for the whole morning. At the moment NIKHEF is doing a rolling kernel upgrade, so reduced capacity but otherwise transparent
  • NDGF - ntr
  • KIT - ntr
  • PIC - ntr

  • OSG - BNL is sending stale data to our BDII which in turn is being sent to the CERN BDII. Opened a ticket - restarting tomcat should fix it.

  • CERN(Steve): Problems with CE published information, night of 21st -> 22nd , incident report: IncidentCE220910
  • ASGC(Farida): Nothing to report

AOB:

Friday

Attendance: local(Jacek, Gavin, Oliver, Andrea, Jamie, Maarten, Maria, Ueda, Harry, Eddie, Farida, Lola, Patricia, Miguel);remote(Jon, Xavier, Michel, Joel, Gareth, Kyle, Onno, G. Misurelli).

Experiments round table:

  • ATLAS reports -
    • LHC/ATLAS activities
      • Physics runs.
      • A small scale but rather urgent reprocessing ongoing.
      • A set of MC production jobs to be done rather quickly.
    • T0
      • we found 3 files missing on castor GGUS:62396 - under investigation (1 file seems to be 'unlinked' by ATLAS?) [ Miguel - the lost files were deleted by ATLAS and the ticket closed. Ueda: same files? Will update the ticket ]
    • T1-T1 network issues
      • INFN-BNL network problem (slow transfers) GGUS:61440.
      • BNL-NDGF network problem (timeouts/killed) GGUS:62287.
        • The ticket is assigned to NDGF-T1.
        • What is the proper way to ask for involvement from BNL? [ Michael - we had a good discussion yesterday at the SC meeting. Should follow up and take this as the first case. Would like to ask WLCG to take the lead in getting a designated leader from the experiment or an involved site on this. ] [ Maarten will open a feature request ticket for this ]
    • T1s
      • FZK-LCG2 LFC Down GGUS:62397 ALARM
        • 2010-09-24 10:19 assigned
        • 2010-09-24 11:11 solved
      • IN2P3-CC Storage problem GGUS:62394
      • Taiwan-LCG2 -- cloud set offline (power cut).

  • CMS reports -
    • Experiment activity
      • Data taking
    • CERN
      • SAM tests had been interrupted from midnight to midday today
      • The password of the account the cron job was running under had expired
      • Fixed now.
    • Tier1 issues
      • various workflows from data rereco, MC redigi/rereco and full MC production
      • IN2P3 not stable.
      • FTS settings for CERN-FNAL in the CERN FTS:
        • GGUS:62401 submitted to get the defaults corrected
      • Tier2 Issues
      • MC production
        • Status of large 1200M production : 466M RAW events produced, available at T1s: 434M RAW events

  • ALICE reports - PRODUCTION STATUS: Continuing the pass1 + calibration activities together with several MC (7 Pb-Pb) cycles. Raw data transfer activities ongoing (peaks over 130MB/s in the last 24h). Good behavior of all T1 SE systems
    • T0 site
      • Good behavior of the T0 services
    • T1 sites
      • CCIN2P3: Still too low a number of concurrent jobs at this site. Reported to the ALICE responsible at the site, who confirmed that after restarting the services BQS wasn't completely brought back and one important daemon was missing. It was started this morning and the number of jobs should ramp up in the following hours
    • T2 sites
      • Setting up new services at some sites (LLNL BDII, CREAM and VOBOXES)

  • LHCb reports - The Reco06-Stripping10 reconstruction production for the FULL stream in Magnet Up and the associated merging productions. Analysis: no particular issues
    • T0 - none
    • T1 site issues:
      • CNAF: issue with CREAMCE (GGUS:62355). What is the status?
      • IN2P3: A huge backlog of merging jobs has to run at IN2P3. The quota issue has been addressed but other issues with the shared area remain (GGUS:62379)
      • RAL: Long tail of merging jobs to run at RAL. Now draining their queues.
      • PIC: Issue with CREAMCE, not reported on the 22nd. Now fixed (GGUS:62357)

Sites / Services round table:

  • FNAL - ntr
  • KIT - ntr
  • BNL - ntr; comment on kernel upgrades: all patched and the whole farm is available to ATLAS again
  • RAL - we will be upgrading the LHCb CASTOR instance Mon-Wed next week. Will start draining the remaining batch work soon.
  • NL-T1 - yesterday at NIKHEF an ALICE user used up all the memory on the WNs. NIKHEF killed all ALICE jobs and ALICE has blocked the user responsible.
  • INFN-T1 - status of the LHCb GGUS ticket: as mentioned in the ticket, the machine is experiencing high load; a downtime has been declared to increase the RAM in the machine.
  • NDGF - ntr

  • OSG - ntr; helped BNL fix BDII problems from yesterday

  • ASGC: Due to a problem during power generator maintenance, all ASGC services were shut down by an unexpected power cut around 9:10 UTC. All T1 and T2 services are affected and unavailable at the moment. The power system is now restored and we are recovering the network and Grid services. The TW cloud was put offline. We have declared an unscheduled downtime from 9:30 till 14:30 UTC. We are trying our best to shorten the service recovery time, and will try to ensure this does not happen again in future.

  • CERN, AFS gliteUI first announcement: On Tuesday 28th @ 10:00 CEST the new_3.2 link will be swung from 3.2.6-0 to 3.2.8-0. Report problems via GGUS or ui.support@cern.ch. Typically current_3.2 and sl5 will be updated in two weeks, but another announcement will be made. For reference, today's situation is as follows (a generic sketch of such a link swing follows the table).
    Link Points To
    previous_3.2 3.2.1-0
    sl5 3.2.6-0
    current_3.2 3.2.6-0
    new_3.2 3.2.6-0
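    A link swing of this kind is typically done by creating the new symlink under a temporary name and renaming it over the old one, so there is never a moment when the link is missing. The sketch below is a generic illustration; the example paths are assumptions, not the actual AFS UI deployment procedure.

      # Atomic symlink swing: point link_path at new_target without a gap.
      import os

      def swing_link(link_path, new_target):
          tmp = link_path + ".swing_tmp"
          if os.path.lexists(tmp):
              os.remove(tmp)
          os.symlink(new_target, tmp)      # create the new link under a temporary name
          os.replace(tmp, link_path)       # atomically rename it over the old link

      # Hypothetical usage:
      # swing_link("/afs/cern.ch/project/gd/LCG-share/new_3.2", "3.2.8-0")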

AOB:

-- JamieShiers - 20-Sep-2010
