ATLAS First Run Computing Requirements

The following are the resource expectations for the first run in 2007/2008. The model and numbers are essentially those of the document `Principles of Cost Sharing for the ATLAS Offline Computing Resources', http://www.quark.lu.se/~torsten/Computing-resources.pdf. However, there has been some rebalancing between tape and disk usage. The most obvious changes are the reduction to one full copy of the raw data stored at CERN, and reductions in the assumed trigger rate and event sizes. Partial copies of the full ESD are stored at each Tier-1, and it is assumed that there is inter-access between the Tier-1 facilities. In this way, multiple disk-resident copies of each event at ESD level are always available to the Tier-1 cloud.

We assume 1E7 seconds of running, spread over the second half of 2007 and the first half of 2008. The following `per event' numbers are assumed. They are based on the optimised FORTRAN code; at present, the ATLAS Object Oriented code does not run this quickly. Further, until the POOL integration exercise is complete, we will not have a better idea of the true event sizes at the various processing stages.

Item             Unit        Value
Raw Data Size    MB          1.6
ESD Size         MB          0.5
AOD Size         kB          10
TAG Size         kB          0.1
Sim. Data Size   MB          2.0
Sim. ESD Size    MB          0.5
Time/Reco 1 ev   kSI95-sec   0.64
Time/Simu 1 ev   kSI95-sec   3.00
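
As a rough illustration of what these per-event figures imply, the sketch below folds in the 0.8x10^9 recorded events per year quoted in the trigger-rate table that follows; the authoritative volumes are those in the site tables later in this note.

    # Back-of-envelope volumes implied by the per-event sizes above
    # (a sketch; 0.8e9 recorded events/year is taken from the next table).
    RAW_MB, ESD_MB = 1.6, 0.5
    EVENTS_PER_YEAR = 0.8e9

    raw_tb = EVENTS_PER_YEAR * RAW_MB / 1e6   # MB -> TB
    esd_tb = EVENTS_PER_YEAR * ESD_MB / 1e6
    print(f"Raw per year: {raw_tb:.0f} TB")   # ~1280 TB
    print(f"ESD per year: {esd_tb:.0f} TB")   # ~400 TB; two years give the
                                              # 800 TB full ESD copy at the Tier-0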


The following gives the `external' inputs, such as the data rate from the detector. The trigger rate has changed since the `Hoffmann' LHC Computing Review.

 

                                          Unit             2007   2008
Average Luminosity                        10³³ cm⁻²s⁻¹     1      1
Trigger Rate                              Hz               160    160
Physics Rate                              Hz               140    140
Equivalent Days of Running                days             50     50
Nr. of Recorded Events                    10⁹              0.8    0.8
Nr. of Events used for physics analyses   10⁹              0.7    0.7
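
These inputs tie the event counts to the trigger rates; the sketch below assumes the 5x10⁶ live seconds per year implied by splitting the assumed 10⁷ s of running over the two years.

    # Event counts and key dataset volumes from the inputs above (a sketch).
    live_seconds = 0.5e7               # half of the assumed 1e7 s of running
    trigger_hz, physics_hz = 160, 140

    recorded = trigger_hz * live_seconds   # 0.8e9 events/year, as tabulated
    physics  = physics_hz * live_seconds   # 0.7e9 events/year, as tabulated

    # Two-year 'General' samples (physics events) at 0.5 MB/ESD, 10 kB/AOD:
    general_esd_tb = 2 * physics * 0.5 / 1e6   # 1/3 of this is the 233 TB
                                               # per Tier-1 quoted below
    full_aod_tb    = 2 * physics * 10 / 1e9
    print(f"General ESD (2 years): {general_esd_tb:.0f} TB")  # ~700 TB
    print(f"Full AOD (2 years):    {full_aod_tb:.0f} TB")     # ~14 TB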

For the Tier-1 and Tier-2 resources, a major component is the user analysis activity. The numbers for the Tier-1 analysis activity are taken from the LHC Computing Review and the Principles of Cost Sharing document. However, these need revision, and we are in the process of capturing the physics resource-usage patterns from our recent Physics Workshop as a guide. The Tier-2 facilities will be used even more intensively for physics analysis. We have therefore assumed somewhat larger resource allocations per physicist at the Tier-2s; this is not an unreasonable extrapolation of the growth in University resources per physicist in recent years.

                  Disk (TB)   Tape (TB)   CPU (kSI2k)   CPU (kSI2k-y)
Per Tier-1 User   0.1         0.2         9.0           4.5             (assumes 6 months active)
Per Tier-2 User   0.5         0.5         9.0           4.5             (assumes 6 months active)
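
The kSI2k-y column follows directly from the `6 months active' assumption, as the minimal sketch below shows; the 300-user scaling anticipates the CERN Tier-1 numbers further down.

    # Per-user CPU integral under the '6 months active' assumption.
    cpu_ksi2k   = 9.0
    active_frac = 0.5                        # active 6 months of the year
    print(cpu_ksi2k * active_frac)           # 4.5 kSI2k-y per user

    # Scaled to the 300 users assumed at the CERN Tier-1 below:
    print(300 * cpu_ksi2k)                   # 2700 kSI2k
    print(300 * cpu_ksi2k * active_frac)     # 1350 kSI2k-y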

Tier-0

It is assumed that the Tier-0 will hold the raw data on tape and the calibration data on disk. It will also hold a full copy of the current ESD on disk and the previous version on automated tape. The Tier-0 CPU will perform the first-pass reconstruction and one reprocessing.

 

CERN T0: Storage and computing requirement

                   ESD (Current)   ESD (Previous)   Raw + Calib   Total
Autom. Tape (TB)   0               800              3216          4000
Shelf Tape (TB)    0               0                2816          2816
Disk (TB)          800             0                40            840

                   Reconstruction   Reprocessing   Calibration   Total
CPU (kSI2k)        1014             1037           450           2501
CPU (kSI2k-y)      19               259            0.25          279

 

CERN Tier-1

The CERN `Tier-1' will hold 1/3 of the current ESD on disk and 1/6 of the previous ESD on both disk and tape; the same holds for the external Tier-1 facilities. This saves tape storage, as there will always be multiple copies of a given ESD event from the current processing on disk across the Tier-0 and Tier-1s. The detailed assumptions are as follows:

CERN T1: Storage and computing requirement

                                 Disk (TB)   Auto. Tape (TB)
1/3 Current General ESD          233         0
1/6 Previous General ESD         117         117
Full AOD                         14          0
Full TAG                         1           0
Simulation:
  User Activity                  20          0
  1/6 Previous Simulated ESD     10          0
  Simulated AOD                  1           0
  Simulated TAG                  0           0
User Data (300 users)            30          60

Total                            427         177

                 Users (300)   Total
CPU (kSI2k)      2700          2700
CPU (kSI2k-y)    1350          1350
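
The replica counting behind this 1/3 + 1/6 scheme can be made explicit; a minimal sketch, assuming the CERN `Tier-1' plus the approximately 6 external Tier-1 facilities introduced in the next section:

    # Disk-resident ESD replicas across the Tier-0/1 cloud (sketch).
    n_tier1 = 6 + 1                       # ~6 external Tier-1s plus CERN 'Tier-1'
    current_copies  = 1 + n_tier1 / 3     # Tier-0 full copy + 1/3 per Tier-1
    previous_copies = n_tier1 / 6         # 1/6 per Tier-1 (disk and tape)
    print(f"Current ESD:  ~{current_copies:.1f} disk copies")   # ~3.3
    print(f"Previous ESD: ~{previous_copies:.1f} disk copies")  # ~1.2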

The total required CERN resources are given at the end. These are the inputs to the LCG cost-to-completion exercise.

External Tier-1

The external Tier-1 resources are assumed both to provide the simulation and to be used for analysis. As yet we do not plan to have copies of the raw data in the Tier-1s, other than small samples. It is envisioned that there will be approximately 6 external Tier-1 facilities. The resource expectations per site are as follows:

External T1: Storage and computing requirement

                                 Disk (TB)   Auto. Tape (TB)
1/3 Current General ESD          233         0
1/6 Previous General ESD         117         117
Full AOD                         14          14
Full TAG                         1           1
RAW Data (sample)                4           40
Simulation:
  1/3 Current Simulated ESD      20          10
  1/6 Previous Simulated ESD     10          10
  Simulated AOD                  1           1
  Simulated TAG                  0           0
User analysis:
  User Data (200 users)          20          40

Total                            421         233

                 Simulation   Sim. Reconstruction   Repeat Simulation   Users (100)   Total
CPU (kSI2k)      35           7                     15                  900           957
CPU (kSI2k-y)    11.6         2.5                   4.9                 225           244
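
With the approximately 6 such sites assumed above, these per-site figures reproduce the `Sum All T0/1' column of the summary table at the end, up to the rounding of the per-site values; a minimal sketch:

    # 'Sum All T0/1' = CERN total + 6 external Tier-1s (rounded per-site values).
    cern_total = {"tape_tb": 4193, "disk_tb": 1267, "cpu_ksi2k": 5201}
    ext_t1     = {"tape_tb": 233,  "disk_tb": 421,  "cpu_ksi2k": 957}
    for key in cern_total:
        print(key, cern_total[key] + 6 * ext_t1[key])
    # tape ~5591, disk ~3793, cpu ~10943 -- the summary quotes 5593, 3791
    # and 10942; the small differences are rounding of the per-site values.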

Tier-2

We picture, on average, four Tier-2 facilities for each Tier-1, each with about 50 users. Again, these are used for both simulation and analysis. The expected resource requirements per site are as follows:

External T2: Storage and computing requirement

                                  Disk (TB)   Auto. Tape (TB)
1/30 Current General ESD          23          0
1/60 Previous General ESD         12          12
Full AOD                          14          14
Full TAG                          1           1
RAW Data (sample)                 4           4
Simulation:
  1/30 Current Simulated ESD      20          20
  1/60 Previous Simulated ESD     1           1
  Simulated AOD                   1           1
  Simulated TAG                   0           0
User analysis:
  User Data (50 users)            25          25

Total                             102         78

                 Simulation   Sim. Reconstruction   Repeat Simulation   Users (50)   Total
CPU (kSI2k)      35           7                     15                  450          507
CPU (kSI2k-y)    2.9          0.6                   1.2                 67.5         72
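
Four Tier-2s per external Tier-1 give roughly 24 sites, which is the multiplier behind the `Total T2' column of the summary; a minimal sketch, again exact only up to the rounding of the per-site values:

    # 'Total T2' = ~24 sites x per-site requirement (rounded per-site values).
    n_t2 = 4 * 6                     # four Tier-2s per external Tier-1
    per_site = {"disk_tb": 102, "tape_tb": 78, "cpu_ksi2k": 507, "cpu_y": 72}
    totals = {k: n_t2 * v for k, v in per_site.items()}
    print(totals)   # disk ~2448, tape ~1872, cpu ~12168, cpu-y ~1728
                    # vs 2441, 1881, 12167, 1734 in the summary (rounding)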

Summary

The total resource requirements are summarised below:

Summary of Resource Requirements in 2007/2008

                   Raw + Calibration   CERN T0   CERN T1   CERN (tot.)   Each External T1   Sum All T0/1   Each T2   Total T2   Grand Total
Autom. Tape (TB)   3216                800       177       4193          233                5593           78        1881       7474.3
Shelf Tape (TB)    2816                0         0         2816          0                  2816           0         0          2816.0
Disk (TB)          40                  800       427       1267          421                3791           102       2441       6232.3
CPU (kSI2k)        0                   2501      2700      5201          957                10942          507       12167      23109.1
CPU (kSI2k-y)      0                   279       1350      1629          244                3093           72        1734       4827.0

 

This page was constructed for the Computing Model Group by Roger Jones (Roger.Jones@cern.ch) on 16th August 2003