-- HarryRenshall - 06 Mar 2006

Last Updated 04.06.2007: Extend LHCb requirements to the end of 2007.

Updated 31.05.2007: Add in 3D database disk requirements and LHCb and ATLAS quantitative requirements for 3Q.

Updated 25.05.2007: Change the date of CMS CSA07 from July to September and make the expected data rates precise.

Updated 6.3.2007: Add plans for the CMS 5-week cycles and CSA07, and first indications of the ALICE p-p and LHCb dress rehearsals.

Updated 27.02.2007: Make precise the plans for the ATLAS February/March data distribution tests (see https://twiki.cern.ch/twiki/bin/view/Atlas/TierZero20071). Change the ATLAS share from 13.5% to 13%.

Updated 15.01.2007: Move the ATLAS Tier-0 export tests from 15 Jan to a new preliminary date of end February.

Updated 28.11.2006: For CMS, request backup of the CSA06 data to tape by end of year, and add activity plans for December and preliminary plans for the first 6 months of 2007. CMS expect to use up to the MoU pledged resources per site in 2007.

Updated 17.11.2006: For ATLAS, revise (downwards, especially in disk) the MC requirements for the first half of 2007.

Updated 2.11.2006: For ATLAS, revise the 4Q2006 MC requirements, add MC plans up to mid-2007 and add the January 2007 Tier-0 and export exercise.

Updated 27.10.2006: For ALICE, continue the data export tests till end 2006 and add resource requirements for all of 2007.

Updated 23.10.2006: Add/change LHCb requirements for Oct 2006 to April 2007 from the spreadsheet of 26 Sep 2006.

Updated 01.09.2006: Add LHCb requirements for Oct/Nov/Dec from the July spreadsheet.

Updated 18.08.2006: Extend ALICE data export till August, continue ATLAS data export till end September, move CMS raw data export to the second half of August, and clarify the resource requirements and the mid-November end date for CMS CSA06.

Updated 10.07.2006: Replace the LHCb spreadsheet with the version of 7 July 2006.

Updated 12.06.2006: Update the ATLAS June plans and the CMS and ALICE July plans.

Updated 22.05.2006: Replace the LHCb spreadsheet with the version of 11 May 2006.

Updated 08.05.2006: Add a link to the LHCb detailed planning spreadsheet in the header of the LHCb Requirements column.

IN2P3-Lyon Site Resource Requirements Timetable for 2006/2007

Tier 1 IN2P3-Lyon provides 9% of ALICE resources, 13% of ATLAS resources, 10% of CMS resources and 27% of LHCb resources.

Each month below lists the ALICE, ATLAS, CMS, LHCb (see LHCb070529.xls for the detailed LHCb planning) and Tier 0 requirements; a missing entry means the experiment had no scheduled requirement that month.

March 2006: (no requirements listed)

April
  ALICE: Run Monte Carlo jobs on 220 KSi2K of cpu, sending the data back to CERN at an average rate of 7 MB/s. Network/reconstruction stress test: run 22400 jobs/day on 220 KSi2K of cpu with a 7 MB/s rate from Tier 0 (volumes sketched below).
  ATLAS: Provide 133 KSi2K of cpu for MC event generation, plus 8 TB of disk and 20 TB of tape for MC data for this quarter.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Data to tape from Tier 0 at 15 MB/s (may be part of SC4).
  LHCb: Provide 115 KSi2K of cpu for MC event generation.
  Tier 0: 3rd to 16th, CERN disk-disk at 200 MB/s; 18th to 24th, CERN disk-tape at 75 MB/s.

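As a rough cross-check of what these sustained rates mean in volume terms, the sketch below converts MB/s into TB/day and TB/month. It is a minimal illustration only, assuming decimal units (1 MB = 10^6 bytes, 1 TB = 10^12 bytes) and uninterrupted 24-hour running; the helper name is ours, not from the plan.

    # Rate-to-volume cross-check (assumes 1 MB = 10**6 bytes,
    # 1 TB = 10**12 bytes, uninterrupted 24-hour running).
    SECONDS_PER_DAY = 86_400

    def tb_per_period(rate_mb_s: float, days: float) -> float:
        """Volume in TB moved at a sustained rate over the given days."""
        return rate_mb_s * 1e6 * SECONDS_PER_DAY * days / 1e12

    # ALICE MC export back to CERN at 7 MB/s:
    print(f"7 MB/s  -> {tb_per_period(7, 1):.2f} TB/day, "
          f"{tb_per_period(7, 30):.1f} TB/month")
    # CMS Tier 0 to tape at 15 MB/s:
    print(f"15 MB/s -> {tb_per_period(15, 1):.2f} TB/day")
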
May
  ATLAS: Provide 133 KSi2K of cpu for MC event generation.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk.
  LHCb: Provide 115 KSi2K of cpu for MC event generation.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

June
  ATLAS: Provide 133 KSi2K of cpu for MC event generation. From 19 June to 7 July, Tier-0-to-Tier-1 tests: take "raw" at 43.2 MB/s to tape (rate to be reported), ESD at 27.0 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 90.2 MB/s; checked below). These data can be deleted after 24 hours.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. SC3 functionality rerun. Run 2500 jobs/day at the end of June.
  LHCb: Get 6.3 MB/s of "raw" data from CERN and store 5 TB on tape; reconstruct and strip these data on 21.5 KSi2K of cpu. Provide 93.5 KSi2K of cpu for MC event generation, with 3.5 TB to tape.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

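The ATLAS Tier-0 export figures invite two quick checks: the per-stream rates should sum to the quoted 90.2 MB/s, and the 24-hour deletion policy bounds how much of this data is resident at the site at any one time. A hedged sketch, under the same decimal-unit assumptions as above:

    # Verify the per-stream sum and size the rolling 24-hour buffer
    # implied by the "delete after 24 hours" policy.
    streams_mb_s = {"raw": 43.2, "esd": 27.0, "aod": 20.0}

    total = sum(streams_mb_s.values())
    assert abs(total - 90.2) < 1e-9   # matches the quoted total

    resident_tb = total * 1e6 * 86_400 / 1e12
    print(f"{total} MB/s -> ~{resident_tb:.1f} TB resident per 24 h")
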
July
  ALICE: From 24 July to 6 August take 60 MB/s of raw and ESD data from CERN (20% of the total; see the sketch below). These data can be deleted immediately. Tier 1 to Tier 1 and Tier 2 tests. Repeat the April network/reconstruction stress test.
  ATLAS: Provide 144 KSi2K of cpu for MC event generation, plus 11 TB of disk and 27 TB of tape for MC data for this quarter. "Raw" reconstruction setting up: stage-in from tape using 1-2 drives.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Monte Carlo incoming from Tier 2 is sent on to CERN. Test Tier 2 to Tier 1 transfers at 10 MB/s per Tier 2. Last 2 weeks: take "raw" data from CERN to tape at 25 MB/s.
  LHCb: Get 6.3 MB/s of "raw" data from CERN and store 5 TB on tape; reconstruct and strip these data on 21.5 KSi2K of cpu. Provide 93.5 KSi2K of cpu for MC event generation, with 3.5 TB to tape.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

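Two implications of these figures, sketched below under the same unit assumptions: if 60 MB/s is 20% of the ALICE Tier-0 export, the implied aggregate export from CERN follows directly; and two weeks of CMS "raw" to tape at 25 MB/s fixes the tape volume the site should expect.

    # Implied aggregate ALICE export, and the CMS two-week tape volume.
    share, site_rate = 0.20, 60.0          # IN2P3 takes 20% at 60 MB/s
    print(f"implied total ALICE export: {site_rate / share:.0f} MB/s")

    tape_tb = 25.0 * 1e6 * 86_400 * 14 / 1e12
    print(f"25 MB/s for 14 days -> ~{tape_tb:.0f} TB to tape")
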
August
  ALICE: Continue the July export tests until the 60 MB/s rate has been sustained for a sufficient period.
  ATLAS: Provide 144 KSi2K of cpu for MC event generation. Two 3-day slots of "raw" reconstruction: stage-in from tape using 1-2 drives. Analysis tests at 20 MB/s incoming, including scalability tests; ATLAS would prefer these to be the only ATLAS grid activity. Take "raw" at 43.2 MB/s to tape (rate to be reported), ESD at 27.0 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 90.2 MB/s). These data can be deleted after 24 hours.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Monte Carlo incoming from Tier 2 is sent on to CERN. Test Tier 2 to Tier 1 transfers at 10 MB/s per Tier 2. Last 2 weeks (after the high-rate Tier-0-to-Tier-1 disk-disk tests): take "raw" data from CERN to tape at 25 MB/s (data can be deleted after 24 hours).
  LHCb: Analysis of reconstructed data. Provide 115 KSi2K of cpu for MC event generation, with 4 TB to tape.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

September
  ALICE: Scheduled analysis tests: 10.5 TB of local data, running 16000 jobs/day on 220 KSi2K of cpu. Aggregate internal worker-node rate of 2.7 GB/s.
  ATLAS: Provide 144 KSi2K of cpu for MC event generation. Take "raw" at 43.2 MB/s to tape (rate to be reported), ESD at 27.0 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 90.2 MB/s). These data can be deleted after 24 hours.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Till mid-September take "raw" data from CERN to tape at 25 MB/s (data can be deleted after 24 hours). From mid-September ramp up to the 1 October start of CSA06 at 750 jobs/day, requiring 180 KSi2K of cpu and a total of 70 TB of disk storage (per-job CPU cost sketched below).
  LHCb: Provide 115 KSi2K of cpu for analysis of reconstructed data and MC event generation, with an additional 3.5 TB to tape.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

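The CSA06 figures imply an average CPU cost per job, sketched below under the (optimistic) assumption that the 750 daily jobs fully occupy the 180 KSi2K allocation; real occupancy would be lower, so this is an upper bound, not a figure from the plan.

    # Implied average CPU cost per CSA06 job at full occupancy.
    ksi2k, jobs_per_day = 180.0, 750.0
    print(f"~{ksi2k * 24.0 / jobs_per_day:.1f} KSi2K-hours per job")
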
October
  ALICE: Continue the data export tests until the 60 MB/s rate has been sustained for a sufficient period. Scheduled analysis tests.
  ATLAS: Reprocessing tests: 20 MB/s incoming.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Continue CSA06 at 750 jobs/day (requiring 180 KSi2K of cpu and a total of 70 TB of disk storage over CSA06).
  LHCb: Provide 254 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 2.6 TB of tape and 0.3 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

November
  ALICE: Continue the data export tests until the 60 MB/s rate has been sustained for a sufficient period. Scheduled analysis tests.
  ATLAS: Provide 175 KSi2K of cpu, an additional 2.7 TB of permanent disk and 2.8 TB of temporary disk (until reconstruction is run), plus an additional 4.5 TB of permanent tape storage, for MC event generation. Analysis tests at 20 MB/s incoming, running at the same time as reprocessing continues.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Demonstrate 30 MB/s from Tier 0 to tape (preferably as an SC4 activity). Continue CSA06 at 750 jobs/day (requiring 180 KSi2K of cpu and a total of 70 TB of disk storage over CSA06) till mid-November.
  LHCb: Provide 257 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 2.7 TB of tape and 0.9 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

December
  ALICE: Continue the data export tests until the 60 MB/s rate has been sustained for a sufficient period. Scheduled analysis tests.
  ATLAS: Provide 175 KSi2K of cpu, an additional 2.7 TB of permanent disk and 2.8 TB of temporary disk (until reconstruction is run), plus an additional 4.5 TB of permanent tape storage, for MC event generation.
  CMS: Back up the 70 TB of October CSA06 disk files to new permanent tape storage. Provide 32 KSi2K of cpu and an additional 2.5 TB of permanent tape storage for MC event generation.
  LHCb: Provide 415 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 4.4 TB of tape and 10.3 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

January 2007
  ALICE: During the first quarter, build up to a data challenge at 75% of last-quarter (data-taking) capacity, using new site capacity as and when available. Require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. Export rate from CERN to IN2P3: 38 MB/s (cross-checked against the tape request below).
  ATLAS: Provide 234 KSi2K of cpu each month, plus an additional 10.8 TB of permanent disk and 18.1 TB of permanent tape storage over this quarter, for MC event generation.
  CMS: Provide 96 KSi2K of cpu per month and an additional 23 TB of permanent tape storage over this quarter for MC event generation.
  LHCb: Provide 417 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 4.4 TB of tape and 12.1 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

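The sustained 38 MB/s import can be checked against the 319 TB tape request. The sketch below assumes decimal units and continuous running over a ~90-day quarter (so it is an upper bound on the imported volume); the two figures come out consistent, with some headroom.

    # Imported volume over a ~90-day quarter at 38 MB/s, versus the
    # 319 TB of tape requested (continuous running assumed).
    imported_tb = 38.0 * 1e6 * 86_400 * 90 / 1e12
    print(f"38 MB/s x 90 days -> ~{imported_tb:.0f} TB (tape request: 319 TB)")
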
February
  ALICE: Continue the first-quarter build-up to the data challenge. Require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. Export rate from CERN to IN2P3: 38 MB/s.
  ATLAS: Provide 234 KSi2K of cpu for MC event generation. From 26 Feb begin the 4-week data distribution tests, ramping up to the full 2008 rate from Tier 0 during the first week: raw from Tier 0 to reach 42 MB/s, ESD 52 MB/s and AOD 20 MB/s. Raw data go to tape and can then be recycled. ESD and AOD go to disk and can be recycled, but during the last two weeks AOD should be distributed to the associated Tier 2s, requiring up to 5.2 TB of disk buffer (sized below), before being recycled.
  CMS: Provide 96 KSi2K of cpu for MC event generation. On 12 Feb begin the first LoadTest07 5-week cycle (see CMS plans).
  LHCb: Provide 417 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 4.4 TB of tape and 12.1 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

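The 5.2 TB AOD buffer can be sized against the 20 MB/s AOD stream: it holds roughly three days of data before redistribution to the Tier 2s must free it. A minimal check, same unit assumptions as above:

    # Residency time of the AOD disk buffer at the quoted input rate.
    buffer_tb, aod_mb_s = 5.2, 20.0
    days = buffer_tb * 1e12 / (aod_mb_s * 1e6) / 86_400
    print(f"{buffer_tb} TB / {aod_mb_s} MB/s -> ~{days:.1f} days of AOD")
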
March
  ALICE: Continue the build-up to the data challenge; require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. From 26 March, for 7 days, participate in the WLCG multi-VO 65% milestone, importing at 6 MB/s from CERN.
  ATLAS: Provide 234 KSi2K of cpu for MC event generation. Continue the 4-week data distribution tests till 26 March, then for the next 7 days participate in the all-experiment service challenge milestone, taking 65% of the average 2008 rate as above (derived below) but without AOD redistribution.
  CMS: Provide 96 KSi2K of cpu for MC event generation. On 19 March begin the second LoadTest07 5-week cycle (see CMS plans). From 26 March, for 7 days, participate in the WLCG multi-VO 65% milestone, importing at 21 MB/s from CERN.
  LHCb: Provide 408 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 2.7 TB of tape and 10.3 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

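The 65% milestone rates follow from the full 2008 figures quoted elsewhere in this table. The sketch below assumes the ATLAS 2008 rate is the 42 + 52 + 20 MB/s of the February tests, and that the CMS 21 MB/s figure scales from the 32 MB/s prompt-reco rate quoted for CSA07 in September; both are our readings of the table, not statements from the plan.

    # 65% WLCG multi-VO milestone rates derived from full 2008 figures.
    atlas_2008 = 42.0 + 52.0 + 20.0   # raw + ESD + AOD (February tests)
    print(f"ATLAS: 0.65 x {atlas_2008:.0f} = {0.65 * atlas_2008:.1f} MB/s")

    cms_2008 = 32.0                   # prompt-reco rate quoted for CSA07
    print(f"CMS:   0.65 x {cms_2008:.0f} = {0.65 * cms_2008:.1f} MB/s (~21)")
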
April
  ALICE: Require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. Starting in April, and continuing throughout the year, build up to a full-scale dress rehearsal of p-p running, with raw data (at 9 MB/s) and ESD (an additional 10% of the raw) imported from CERN, reconstruction at the Tier 1 and user analysis and simulation at the Tier 2s. The data are to be stored in Tape1Disk1-class storage, with ALICE managing the disk space.
  ATLAS: Provide 467 KSi2K of cpu each month, plus an additional 21.5 TB of permanent disk and 36.1 TB of permanent tape storage over this quarter, for MC event generation. Provide a permanent 300 GB of disk space for the ATLAS conditions and event tag databases.
  CMS: Provide 96 KSi2K of cpu and an additional 8 TB of permanent tape storage for MC event generation. Provide a permanent 300 GB of disk space for the CMS conditions databases.
  LHCb: Provide 408 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 2.7 TB of tape and 10.3 TB of disk. Provide a permanent 100 GB of disk space for the LHCb conditions and LFC databases.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

May
  ALICE: Require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. Export rate from CERN to IN2P3: 38 MB/s.
  ATLAS: Provide 467 KSi2K of cpu for MC event generation. Repeat the February/March data distribution tests.
  CMS: Provide 128 KSi2K of cpu and an additional 10 TB of permanent tape storage for MC event generation.
  LHCb: Provide 37 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 5.3 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

June
  ALICE: Require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. Export rate from CERN to IN2P3: 38 MB/s.
  ATLAS: Provide 467 KSi2K of cpu for MC event generation.
  CMS: Provide 160 KSi2K of cpu and an additional 12.5 TB of permanent tape storage for MC event generation.
  LHCb: Start import of simulated raw data from CERN at 10.5 MB/s. Provide 37 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 5.3 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

July
  ALICE: Require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. Export rate from CERN to IN2P3: 38 MB/s.
  ATLAS: Start the full-scale (2008 running) dress rehearsal.
  CMS: Provide 160 KSi2K of cpu and an additional 12.5 TB of permanent tape storage for MC event generation.
  LHCb: Continue import of simulated raw data from CERN at 10.5 MB/s. Provide 58 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 0.3 TB of disk, plus 5.7 TB of temporary disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

August
  ALICE: Require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. Export rate from CERN to IN2P3: 38 MB/s.
  ATLAS: Continue the ramp-up of the full-scale dress rehearsal.
  CMS: Provide 160 KSi2K of cpu and an additional 12.5 TB of permanent tape storage for MC event generation.
  LHCb: Provide 29 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 0.3 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

September
  ALICE: Require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. Export rate from CERN to IN2P3: 38 MB/s.
  ATLAS: Reach the full dress-rehearsal rates: take raw data from CERN at 41.6 MB/s (raw goes to tape), ESD at 52 MB/s and AOD at 20 MB/s. Send and receive data to/from Tier 1s and Tier 2s according to the Megatable spreadsheet values (see the link on the first page of this Twiki).
  CMS: Starting 10 September, perform a 30-day run of CSA07 at twice the CSA06 rate, adding Tier-1-to-Tier-1 and Tier-1-to-Tier-2 transfers. Import prompt-reco events from Tier 0 at 32 MB/s to tape, to be deleted when the site requires. Run 2500 jobs/day, including re-reconstruction, and keep these data on disk until they have been exported to the other Tier 1s at 36 MB/s. Import similar data from the other Tier 1s at 38 MB/s. Export samples to Tier 2s at 65 MB/s and import Monte Carlo from Tier 2s into Tape1Disk0-class storage at 35 MB/s (aggregate rates summed below).
  LHCb: Provide 72 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 1.2 TB of tape and 4 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

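Summing the quoted CSA07 streams gives the aggregate wide-area load the site should plan for; the sketch below is a simple sum of the figures above, noting that the individual peaks need not coincide.

    # Aggregate CSA07 wide-area rates at IN2P3 from the quoted figures.
    inbound  = {"T0 prompt reco": 32, "other T1": 38, "T2 Monte Carlo": 35}
    outbound = {"to other T1": 36, "to T2": 65}

    print(f"inbound:  {sum(inbound.values())} MB/s")   # 105 MB/s
    print(f"outbound: {sum(outbound.values())} MB/s")  # 101 MB/s
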
October
  ALICE: Require up to 400 KSi2K of cpu, 161 TB of disk and 319 TB of tape at IN2P3. Export rate from CERN to IN2P3: 38 MB/s.
  ATLAS: Stable running of the full-scale dress rehearsal.
  CMS: Continue and finish CSA07.
  LHCb: Provide 37 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.7 TB of tape and 5.3 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

November
  ALICE: For the data-taking startup, require 534 KSi2K of cpu, 215 TB of disk and 426 TB of tape at IN2P3. Export rate from CERN to IN2P3: 50 MB/s.
  ATLAS: Engineering run. Provide a permanent 1000 GB of disk space for the ATLAS conditions and event tag databases.
  LHCb: Provide a permanent 300 GB of disk space for the LHCb conditions and LFC databases. Provide 29 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 0.3 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.

December
  ALICE: For the data-taking startup, require 534 KSi2K of cpu, 215 TB of disk and 426 TB of tape at IN2P3. Export rate from CERN to IN2P3: 50 MB/s.
  ATLAS: Engineering run.
  LHCb: Provide 29 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 0.3 TB of disk.
  Tier 0: CERN background disk-disk top-up to 200 MB/s.