

Computing Technical Design Report

A.2 Definitions

A

AOD

The Analysis Object Data, a reduced-size output of physics quantities from the reconstruction that should suffice for most kinds of analysis work.

See also ESD and Section 3.9.3.1.

Athena

The software framework for ATLAS. Based on Gaudi, Athena provides common services for ATLAS software, such as the transient data store, (interactive) job configuration and auditing, data and database access, and message streams. The aim is to improve the coherency of the different software domains within ATLAS, and thereby ease of use for end-users and developers, by having them all use the same well-defined interfaces.

See also Gaudi and Section 3.3.

Atlfast

The ATLAS fast simulation program (Atlfast) simulates ATLAS physics events, including the effects due to detector response and the software reconstruction chain. The input to the program is the collection of four-vectors for a physics event, usually provided by a physics event generator. Smearing and jet-finding algorithms are applied to the energy deposits, and the resulting jet objects are output for further physics analysis.

See also Section 3.8.3.
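
The smearing step can be illustrated with a toy sketch (this is not Atlfast code; the Gaussian parametrization and the 50%/sqrt(E) stochastic term are invented here purely for illustration):

```python
import math
import random

def smear_energy(e_true, stochastic=0.5):
    """Toy Gaussian smearing of a true energy (GeV).

    The 50%/sqrt(E) stochastic term is a typical calorimeter-like
    value, chosen here only for illustration, not taken from Atlfast.
    """
    sigma = stochastic * math.sqrt(e_true)
    return random.gauss(e_true, sigma)

random.seed(42)  # fixed seed so the sketch is reproducible
smeared = [smear_energy(e) for e in (50.0, 100.0, 200.0)]
```

The real program applies analogous resolution functions to the generator four-vectors before running its jet finding.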


C

Certificate

A public-key certificate is a digitally signed statement from one entity (e.g., a certificate authority), saying that the public key (and some other information) of another entity (e.g., the Grid user) has some specific value. The X.509 standard defines what information can go into a certificate, and describes how to write it down (the data format).

CMT

A Configuration Management Tool: it is used for building software releases, for package dependency management, and for the setup of the run-time environment. Practically everything is treated as a package by CMT, even the run-time environment, which is set up by "compiling" a run-time package. Much of CMT's behaviour (and hence how it should be used and configured) is determined by project-specific policy files.

Compute Element (CE)

Compute element (also called a compute service or CS) is a term used in Grids to denote any kind of computing interface, e.g., a job entry or batch system. A compute element consists of one or more similar machines, managed by a single scheduler/job queue, which is set up to accept and run Grid jobs. The machines do not need to be identical, but must have the same OS and the same processor architecture. A CE must run (among other things) a process called the "gatekeeper".

Conditions Database (CondDB)

The conditions database contains the record of the detector conditions required for data analysis, e.g. calibration and geometry constants.

See also Section 4.4, "Conditions and Configuration Data".

CVS

The Concurrent Versions System, which allows the members of a distributed development team, such as ATLAS, to share source code. All offline code resides in the official repository, which can be browsed online and from which users can download ("check out") code.

See also CMT.

D

Data Acquisition System (DAQ)

The detector system comprising the Data Flow, Online and Detector Control Systems.

Data Challenge (DC)

A data challenge comprises, in essence, the simulation, as realistic as possible, of data (events) from the detector, followed by the processing of those data using the software and computing infrastructure that will, with further development, be used for the real data when the LHC starts operating. The goals of the ATLAS Data Challenges are to validate the ATLAS Computing Model, the complete software suite and the data model, and to ensure the correctness of the technical computing choices to be made. In addition, the data produced have allowed physicists to perform studies of the detector and of different physics channels.

See also Section 6.5, "Experience with Data Challenges and other Mass Productions".


Dictionary

Also known as reflection information, a dictionary refers to an object that contains a detailed description of another object. This information includes the object type, member functions and arguments, member data, etc. There are two prevailing dictionaries in HEP: the one provided by SEAL (typically referred to as the LCG dictionary) and the one provided by ROOT.

See also CINT, LCG, SEAL, ROOT.
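
Python's built-in reflection gives a feel for the kind of information such a dictionary holds. This sketch (the `Track` class is made up for illustration) enumerates a class's member functions and their arguments:

```python
import inspect

class Track:
    """A toy class standing in for a reconstructed object."""
    def __init__(self, pt, eta):
        self.pt = pt
        self.eta = eta
    def momentum(self):
        return self.pt

# Enumerate member functions and their argument names, which is the
# sort of description a reflection dictionary provides for C++ types.
methods = {
    name: list(inspect.signature(fn).parameters)
    for name, fn in inspect.getmembers(Track, inspect.isfunction)
}
# methods == {'__init__': ['self', 'pt', 'eta'], 'momentum': ['self']}
```

For C++, where the language offers no such introspection, the SEAL and ROOT dictionaries supply this information from generated code.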

Distribution Kit

The ATLAS software is distributable through a set of tools, collectively referred to as the "Distribution Kit". It contains all the binaries and header files for a specific platform, and can be downloaded from a website and installed with Pacman.

See also Pacman and Section 3.14.6, "Code Distribution".

E

Event Filter (EF)

Sub-system of the HLT comprising the hardware and software for the final stage of the online event selection, data monitoring and calibration, using offline-style algorithms operating on complete events accepted by LVL2.

ESD

The Event Summary Data, which consists of sufficient information to re-run (parts of) the reconstruction. For some kinds of analysis, the information available in the AOD may not be enough. By selecting information from the ESD (or by selecting certain reconstruction algorithms and hence, transparently, the required ESD), rather than going back to the original byte stream, the needed I/O can still be kept to a minimum.

See also AOD and Section 3.9.2.

G

Gaudi

The software framework for LHCb. The Gaudi framework was originally developed by LHCb, but is now in use by several experiments, including ATLAS. When referring to Gaudi in the context of Athena, the term is meant to describe the common core of the two frameworks. For more information, see the Gaudi project site.

See also Athena.

Geant4

A toolkit for building geometries of physical structures, e.g. the ATLAS detector, and simulating the physics processes that occur as particles traverse these structures. For use within ATLAS, see Section 3.8.4.

Generator

Short for "event generator." A computer program that models the physics processes that occur in the collisions in high-energy physics experiments. The results of running a generator consist of a list of particles (incoming, created, decay products, etc.), their properties, and their origins, just after the collision.

See also Pythia.

GeoModel

The GeoModel toolkit is a library of geometrical primitives. The toolkit is designed as a data layer, optimized to describe large and complex detector systems with minimum memory consumption. GeoModel is used for the description of the ATLAS detector geometry. A faithful representation of a GeoModel description can be transferred to Geant4.

See also Section 3.5.2.

Grid

An infrastructure of computing resources, such as storage and processing time, that is transparently made available to the user through a network.

H

HepMC

An event record for high-energy physics Monte Carlo generators. HepMC stores the output from an event generator as a graph of particles and vertices, where the vertices maintain listings of the incoming and outgoing particles and the particles point back to their production vertices.
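
The particle/vertex graph can be sketched in a few lines (a toy model, not the HepMC C++ API; class and attribute names are invented for illustration):

```python
class Particle:
    """A toy generator particle, identified by its PDG code."""
    def __init__(self, pdg_id):
        self.pdg_id = pdg_id
        self.production_vertex = None   # back-pointer, set when attached

class Vertex:
    """A toy vertex listing its incoming and outgoing particles."""
    def __init__(self):
        self.particles_in = []
        self.particles_out = []
    def add_incoming(self, p):
        self.particles_in.append(p)
    def add_outgoing(self, p):
        self.particles_out.append(p)
        p.production_vertex = self      # particle points back to its origin

# A Z -> mu+ mu- decay as one vertex (PDG codes 23, 13, -13).
v = Vertex()
v.add_incoming(Particle(23))
mu_minus, mu_plus = Particle(13), Particle(-13)
v.add_outgoing(mu_minus)
v.add_outgoing(mu_plus)
```

Following the back-pointers from final-state particles through their production vertices reconstructs the full decay history of the event.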

High-Level Triggers (HLT)

The Trigger/DAQ sub-system comprising both LVL2 and the EF, the two ATLAS trigger levels that are implemented primarily in software.

J

Job Options

In the Athena framework a job is specified by a set of job options (Python statements). Job options specify the dynamic libraries to load, select the algorithms to run, and define their sequence of execution and their properties.
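
The configuration-by-Python-statements pattern can be illustrated with a toy sketch (these are not actual Athena job options; the `Algorithm` class and the algorithm names are invented for illustration):

```python
class Algorithm:
    """A stand-in for a framework algorithm with configurable properties."""
    def __init__(self, name):
        self.name = name
        self.properties = {}

# Job options are ordinary Python statements that build up the job:
top_algorithms = []                      # the execution sequence

gen = Algorithm("ToyGenerator")          # hypothetical algorithm
gen.properties["NumberOfEvents"] = 10

reco = Algorithm("ToyReconstruction")    # hypothetical algorithm
reco.properties["OutputLevel"] = "INFO"

top_algorithms += [gen, reco]            # list order defines run order
```

Because the configuration is executable Python rather than a static file, job options can use loops, conditionals and imports of other option files.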

L

LCG

The LHC Computing Grid Project, which intends to serve the computing needs of LHC by deploying a world-wide Grid, integrating the capacity of scientific computing centres spread across Europe, America and Asia into a virtual computing organization.

See also Grid, POOL, SEAL.

Level-1 Trigger (LVL1)

The ATLAS First Level Trigger system provides a hardware-based first trigger decision using only a subset of an event's data (calorimeter and muon systems only). Normally, only the events accepted by LVL1 are transferred from the detectors into the HLT/DAQ system.

Level-2 Trigger (LVL2)

Sub-system of the HLT which provides a software-based second-stage trigger decision, reducing the rate of triggers from LVL1 by about a factor of 100. It uses "Regions of Interest" (RoIs) given by the LVL1 trigger to selectively read out only certain parts of the ATLAS detector hardware, and computes a LVL2 trigger decision.

LSF

The Load Sharing Facility is a batch queuing system that regulates the use of computing resources. Jobs are submitted to queues based on their expected load and executed as fairly as possible following pre-set priorities. Detailed information and reference guides are available at the CERN batch services site.

M

Middleware

Middleware is software that connects two or more otherwise separate applications across the Internet or local area networks. More specifically, the term refers to an evolving layer of services that resides between the network and more traditional applications for managing security, access and information exchange.

P

Pacman

A PACkage MANager, which allows one to install parts of, or the whole, ATLAS offline software release on a local machine.

See also Distribution Kit.

POOL

A Pool Of persistent Objects for LHC: POOL is a persistency framework designed for the LHC experiments, allowing the large volumes of experiment data and metadata to be stored in a Grid-enabled way. Normally, within the Athena framework, one does not deal directly with POOL. Instead, POOL is specified as a conversion service (that is, the facility that converts transient data into persistent data) in the configuration of the job and, unless one has special needs, its use is transparent.

See also Athena, LCG, Section 3.15.2 and Section 4.5.
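
The role of a conversion service, turning transient in-memory objects into a persistent form and back, can be sketched with Python's pickle module standing in for POOL (a loose analogy only; POOL itself persists C++ objects via ROOT I/O, and the event content below is invented):

```python
import pickle

# Transient representation: plain Python objects in memory.
event = {"tracks": [(25.0, 0.5), (40.0, -1.2)]}   # toy (pt, eta) pairs

# "Conversion": transient -> persistent byte stream, and back again.
blob = pickle.dumps(event)       # what a writing job would store
restored = pickle.loads(blob)    # what a reading job would get back
```

As in Athena, the calling code above never manipulates the persistent form directly; it only hands objects to, and receives objects from, the conversion layer.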

Pythia

A program for the generation of high-energy physics events. The simulation of events as they will take place in the ATLAS detector starts with the simulation of the collisions themselves. This is done in event generators such as Pythia, which contain theory and models for, e.g., hard and soft interactions, parton distributions, initial- and final-state parton showers, and particle decay. The output of an event generator can subsequently be fed into the ATLAS detector simulation, which tracks the particles from the events through ATLAS. Detailed information is available on the Pythia site. There is also ATLAS-specific information, which describes the Pythia_i interface package for use with Athena.

Python

An interpreted, interactive, object-oriented, open-source programming language. The Python interpreter is used as the scripting interface for Athena, because of its excellent extending and embedding facilities.

R

ROOT

A class library for data analysis. The ROOT package provides an extensive set of functionality for data analysis in high-energy physics; it includes data display, persistency, minimization, fundamental classes, etc., as well as an interactive interpreter. The full class reference, tutorials, and other documentation, are available at the ROOT project site.

S

SEAL

The Shared Environment for Applications at LHC, which provides common core and services (such as system, utility, framework, and mathematics) libraries for LCG applications in order to improve their coherency. For the ATLAS end-user, the most visible products of SEAL are the LCG dictionary and the scripting services such as PyROOT and PyBus.

See also LCG and Section 3.15.1.

SRM

The Storage Resource Manager, a Grid middleware component used for data management and the virtualization of storage interfaces. SRM provides shared storage-resource allocation and scheduling. It manages space, files on behalf of users, file sharing and multi-file requests, and provides Grid access to and from a mass-storage system. An SRM does not perform file transfers itself; rather, it invokes file-transfer services as needed, monitors transfers and recovers from failures.

Storage Element (SE)

Any data storage resource that is registered in a Grid Information Service (GIS), contains files registered in a Replica Location Service (RLS), and provides access to remote sites via a Grid interface (e.g., GSI authenticated). A Storage Element (SE) provides uniform access and services to large storage spaces. The storage element may control large disk arrays, mass storage systems and the like.

StoreGate

StoreGate implements a transient data store for Athena and is described in the Athena manual. For more information, see Section 3.4.2 .

See also Athena, TDS.

T

TAG

Although the term is used in many contexts, here it means event-level metadata, i.e. global event quantities together with a reference to the event's AOD. TAG databases are used for fast event selection and the creation of event-sample lists; see Section 4.5.8, "Event-level Metadata".
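
A toy sketch of the idea (the quantities, cut values and reference strings below are invented, not the actual TAG schema): the selection runs over the small metadata records alone, and only the referenced AOD events would then be read back.

```python
# Each TAG record holds a few global event quantities plus a reference
# (here just a toy file/event locator string) to the full AOD event.
tags = [
    {"run": 1, "event": 1, "n_muons": 2, "missing_et": 55.0, "aod_ref": "aod.pool.root#1"},
    {"run": 1, "event": 2, "n_muons": 0, "missing_et": 12.0, "aod_ref": "aod.pool.root#2"},
    {"run": 1, "event": 3, "n_muons": 1, "missing_et": 80.0, "aod_ref": "aod.pool.root#3"},
]

# Fast selection on the metadata only; no AOD file is touched here.
selected = [t["aod_ref"] for t in tags
            if t["n_muons"] >= 1 and t["missing_et"] > 50.0]
# selected == ["aod.pool.root#1", "aod.pool.root#3"]
```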

TDS

The Transient Data Store (TDS) is a box where data producers can drop off their results to be picked up by clients (more often referred to as consumers, even though they don't actually consume the data). The use of a store decouples producers from clients. Examples of producers include algorithms that perform calculations to come up with new data, but also services that read data from disk. Clients can be other algorithms that perform further calculations, or services that write data to disk or a database. In Athena, the transient store is implemented by StoreGate.

See also Athena, StoreGate.
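
The producer/client decoupling can be sketched with a minimal keyed store (a toy illustration only, not the StoreGate API; the key and data are invented):

```python
class TransientStore:
    """A toy keyed store: producers record objects, clients retrieve them."""
    def __init__(self):
        self._store = {}
    def record(self, key, obj):
        self._store[key] = obj       # producer drops off a result
    def retrieve(self, key):
        return self._store[key]      # client picks it up by key

store = TransientStore()

# Producer: an algorithm records its output under a well-known key.
store.record("CaloClusters", [42.0, 17.5])   # hypothetical key/content

# Client: a later algorithm retrieves it without knowing the producer.
clusters = store.retrieve("CaloClusters")
```

Because producer and client share only the key and the object type, either side can be replaced (e.g. reading the data from disk instead of computing it) without changing the other.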

Tier-0

The initial tier in the Grid hierarchy; it is the site at which raw data are taken. The experimental online system interfaces to the Tier-0 resources. For the LHC experiments, CERN is the Tier-0 facility.

Tier-1

The next tier, after Tier-0, in the Grid hierarchy. Tier-1 sites are connected to a Tier-0 site on the basis of an MoU with that site. Typically a Tier-1 site offers storage, analysis and services, and represents a broad constituency (e.g., there may be a single Tier-1 site per country or region, which connects with multiple Tier-2 sites in that country or region). For ATLAS, 10 Tier-1 sites are planned.

Tier-2

Tier-2 is the next level down in the Grid hierarchy of sites, after Tier-1. Tier-2 sites are typically regional computing facilities at university institutions, providing a distributed Grid of facilities.

V

Virtual Organization (VO)

A participating organization in a Grid to which Grid end users must be registered and authenticated in order to gain access to the Grid's resources. A VO must establish resource-usage agreements with Grid resource providers. Members of a VO may come from many different home institutions, may have in common only a general interest or goal (e.g., ATLAS physics analysis), and may communicate and coordinate their work solely through information technology (hence the term virtual). An organization like an HEP experiment can be regarded as one VO.




4 July 2005 - WebMaster

Copyright © CERN 2005