
5.1 Introduction

The ATLAS Computing Model foresees the distribution of raw and processed data to Tier-1 and Tier-2 centres, in order to exploit fully the computing resources made available to the Collaboration. Additional computing resources will be available for data processing and analysis at Tier-3 centres and at other computing facilities to which ATLAS may have access.

A complex set of tools and distributed services, enabling the automatic distribution and processing of large amounts of data, has been developed and deployed by ATLAS in cooperation with the LHC Computing Grid (LCG) Project and with the middleware providers of the three large Grid infrastructures we use: EGEE, OSG and NorduGrid. The tools are designed flexibly, so that they can be extended to use other types of Grid middleware in the future. These tools, and the service infrastructure on which they depend, were initially developed in the context of centrally managed, distributed Monte Carlo production exercises. They will be re-used wherever possible to create systems and tools for individual user access to data and computing resources, providing a distributed analysis environment for general use by the ATLAS Collaboration.
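The middleware independence mentioned above amounts to placing a thin abstraction layer between the production tools and the individual Grid flavours. The following Python sketch illustrates the general pattern only; the names (GridBackend, LCGBackend, submit, status, dispatch) are invented for this illustration and do not correspond to the actual interfaces of the ATLAS production system.

    from abc import ABC, abstractmethod

    class GridBackend(ABC):
        """Common interface for one Grid flavour (hypothetical names)."""

        @abstractmethod
        def submit(self, job: dict) -> str:
            """Submit a job description; return a backend-specific job id."""

        @abstractmethod
        def status(self, job_id: str) -> str:
            """Return the current state of a previously submitted job."""

    class LCGBackend(GridBackend):
        """Stand-in for an LCG/EGEE executor; a real executor would call
        the middleware's own submission and status services here."""

        def submit(self, job: dict) -> str:
            return "lcg-00001"

        def status(self, job_id: str) -> str:
            return "RUNNING"

    def dispatch(job: dict, backends: dict) -> str:
        """Route a job to the backend named in its description."""
        return backends[job["grid"]].submit(job)

    if __name__ == "__main__":
        backends = {"lcg": LCGBackend()}
        job = {"grid": "lcg", "executable": "athena.py"}
        print(dispatch(job, backends))  # prints the job identifier

Under this pattern, supporting a new middleware reduces to registering one further backend implementation, without modifying the production tools themselves.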

This chapter describes the relationship between the ATLAS computing system, the LCG, and the Grid middleware projects, as well as the requirements that ATLAS places on those projects. It also describes the architecture, current implementation, and planned future development of the ATLAS distributed production and analysis systems. The first version of the production system was deployed in Summer 2004 and used throughout the second half of that year. It has been used for Data Challenge 2 (DC2), for the production of simulated data for the ATLAS Physics Workshop (Rome, June 2005), and for the reconstruction and analysis of the 2004 Combined Test Beam (CTB) data.


