


Highlights from CHEP 2001

Vladimir Bahyl, Harry Renshall,
Gabriele Cosmo, and David Myers


Abstract

A short summary of some of the presentations given at this year's Computing in High Energy Physics (CHEP 2001) conference in Beijing, as reported by some of the CERN/IT participants.


Systems and Software

Vladimir Bahyl, IT/PDP

Commodity Hardware and Software

In the Commodity Hardware and Software track, there were several reports from CERN, DESY, FNAL, BNL, IHEP and KEK on experience with running big farms. Here are the aspects common to most of the talks:
  • Linux on dual-CPU PCs seems to be the current trend all over the globe
  • while DESY's configuration is based on SuSE, most of the others use RedHat
  • rack-mounted solutions are used on a considerable scale in the US, in contrast to CERN and DESY, where space is not a problem
  • in terms of size, CERN's farm is one of the biggest running, even though it does not always use the smartest solutions
The only common technology seems to be AFS: for monitoring, job control and data delivery, everybody uses something different (often developed in-house at the particular institute).

Other interesting talks

Robert Cowles from SLAC gave a very interesting technology review on the security of data grids. He discussed possible authorisation and authentication solutions based on PKI.

Marcin Nowak from CERN presented a talk on object features of Oracle 9i that could be useful for the HEP community, such as object modelling (following the SQL:1999 standard), the Oracle Type Translator for C++, and C++ binding via OCCI (the Oracle C++ Call Interface).
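
As an illustration of what the OCCI binding looks like in practice, a simple query might be issued roughly as follows. This sketch is ours rather than from the talk; the connection parameters and the "runs" table are hypothetical, and linking against the Oracle client libraries is assumed.

  // Minimal sketch of a query via OCCI (Oracle C++ Call Interface).
  // The connection parameters and the "runs" table are hypothetical.
  #include <occi.h>
  #include <iostream>

  int main() {
      using namespace oracle::occi;

      Environment* env  = Environment::createEnvironment(Environment::DEFAULT);
      Connection*  conn = env->createConnection("scott", "tiger", "dbhost");

      // Execute a simple SELECT and iterate over the result set.
      Statement* stmt = conn->createStatement(
          "SELECT run_number, energy FROM runs WHERE run_number > 100");
      ResultSet* rs = stmt->executeQuery();
      while (rs->next()) {
          std::cout << rs->getInt(1) << " " << rs->getDouble(2) << std::endl;
      }

      // Release resources in reverse order of creation.
      stmt->closeResultSet(rs);
      conn->terminateStatement(stmt);
      env->terminateConnection(conn);
      Environment::terminateEnvironment(env);
      return 0;
  }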

Andrew Hanushevsky from SLAC presented a talk discussing the usefulness of compressed database support in Objectivity for the BaBar experiment database. While compression trades disk space for CPU resources, the trade-off may be worthwhile in certain circumstances. However, compression should only be considered an option; the proper design of the database structures remains the key.
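
The trade-off is easy to quantify in isolation. The sketch below is purely illustrative and unrelated to the Objectivity implementation: it uses zlib to compare compression ratio against CPU time for an arbitrary, fairly compressible block of data.

  // Sketch: measure the disk-versus-CPU trade-off of compression with zlib.
  // The input data and the compression level (1 = fast ... 9 = best) are arbitrary.
  #include <zlib.h>
  #include <chrono>
  #include <iostream>
  #include <vector>

  int main() {
      std::vector<unsigned char> input(50 * 1024 * 1024);
      for (std::size_t i = 0; i < input.size(); ++i)
          input[i] = static_cast<unsigned char>(i % 64);

      std::vector<unsigned char> output(compressBound(input.size()));
      uLongf outLen = output.size();

      auto t0 = std::chrono::steady_clock::now();
      int rc = compress2(output.data(), &outLen,
                         input.data(), input.size(), /*level=*/6);
      auto t1 = std::chrono::steady_clock::now();

      if (rc == Z_OK) {
          double seconds = std::chrono::duration<double>(t1 - t0).count();
          std::cout << "compression ratio: " << double(input.size()) / outLen
                    << ", CPU time: " << seconds << " s" << std::endl;
      }
      return 0;
  }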

 

Networking

Harry Renshall, IT/DS

There was a half-day parallel session on networking. The driving topics were high-throughput wide-area performance, with five papers, and security, with three.

Several of the performance talks reported on how to get the best out of existing systems for wide-area file transfer over TCP and emphasised that current operating systems have bad defaults, notably small TCP window sizes. A combination of optimising the TCP window size at source and sink (typically to more than 1 MB), using large file sizes (several hundred MB and above) and transferring in multiple streams gave the best results. A user should today be able to transfer 100 GBytes/day between major HEP sites.
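
(100 GBytes/day corresponds to a sustained rate of only about 1.2 MBytes/s, i.e. roughly 10 Mbit/s.) As a rough sketch of the window-size point, the socket buffers that back the TCP window can be enlarged per connection before it is established. The 2 MB figure, the address and the port below are placeholders, and the kernel may cap the request according to system-wide limits.

  // Sketch: enlarge the TCP socket buffers (which bound the usable TCP window)
  // before connecting. Buffer size, address and port are placeholders.
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main() {
      int fd = socket(AF_INET, SOCK_STREAM, 0);

      int bufSize = 2 * 1024 * 1024;  // request ~2 MB send and receive buffers
      setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufSize, sizeof(bufSize));
      setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufSize, sizeof(bufSize));

      sockaddr_in peer{};
      peer.sin_family = AF_INET;
      peer.sin_port   = htons(5000);                    // placeholder port
      inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);  // placeholder address

      if (connect(fd, reinterpret_cast<sockaddr*>(&peer), sizeof(peer)) == 0) {
          // ... send the file here, ideally as one of several parallel streams ...
      }
      close(fd);
      return 0;
  }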

Two talks reported easy-to-use interfaces which do a lot of the optimisation (the window size is a system parameter), namely "bbcp" (the BaBar wide-area copy) from SLAC and "gsiftp" from Globus. There was also a very interesting wide-area networking simulator, using an intermediate PC as a router, reported from KEK.

The security talks did not report many new developments, but all sites recognise the increasing importance of access security and will be using appropriate tools, ranging from basic ones such as SATAN and COPS to strong authentication (at FNAL) and virtual private networks (at KEK).

The IPv6 tutorial was a reminder of industry plans to expand IP addressing from its current restriction of 32 bits to 128 bits, as agreed in 1994. Also included is stateless auto-configuration, which will allow a site to be renumbered rapidly. The first users are expected to be the third-generation mobile phone developers in Europe and Asia, where there is a shortage of address space.
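
The rapid-renumbering point follows from how stateless auto-configuration works: each host derives a 64-bit interface identifier from its own MAC address (the EUI-64 rule: invert the universal/local bit and insert ff:fe in the middle) and prepends the 64-bit prefix advertised by the local router, so changing the advertised prefix renumbers the whole site. A small sketch of the identifier construction, with an arbitrary example MAC address:

  // Sketch: derive the EUI-64 interface identifier used by IPv6 stateless
  // auto-configuration from a 48-bit MAC address (invert the universal/local
  // bit of the first octet and insert 0xFF 0xFE in the middle).
  #include <array>
  #include <cstdint>
  #include <cstdio>

  std::array<std::uint8_t, 8> eui64FromMac(const std::array<std::uint8_t, 6>& mac) {
      std::array<std::uint8_t, 8> id{};
      id[0] = mac[0] ^ 0x02;  // invert the universal/local bit
      id[1] = mac[1];
      id[2] = mac[2];
      id[3] = 0xFF;           // inserted constant
      id[4] = 0xFE;           // inserted constant
      id[5] = mac[3];
      id[6] = mac[4];
      id[7] = mac[5];
      return id;
  }

  int main() {
      std::array<std::uint8_t, 6> mac = {0x00, 0x60, 0x08, 0x12, 0x34, 0x56};  // example MAC
      auto id = eui64FromMac(mac);
      // The full address is the 64-bit router-advertised prefix followed by this identifier.
      for (auto b : id) std::printf("%02x", b);
      std::printf("\n");
      return 0;
  }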

 

Grid Computing

Harry Renshall, IT/DS

There were three busy half-day sessions plus some plenary talks: a session of experiment and project status reports (which I did not attend), then a session on basic tools and one on progress in middleware.

Steady progress on the implementation of the Globus architecture was reported, along with results of a practical study on dynamic replication strategies within Globus. There was a very detailed practical evaluation of Globus by INFN sites working within the European Data Grid, illustrating their good communications with the Globus team, and also a much-appreciated talk on a recent analysis of the CMS requirements for the data grid.

Middleware progress talks covered a wide area: integrating grid tools to build a computing resource broker (INFN); sparse query processing models (U. of the West of England); a parallel ROOT interactive analysis system (CERN); a project for petabyte-scale data-intensive analysis (GFARM, from KEK); and a grid-aware extension of the Liverpool University Monte Carlo array processor, allowing remote job preparation and submission using the Globus toolkit for authentication and communication.

The rapporteur of the Grid sessions drew several relevant conclusions: activities are still mainly concentrated on strategies, architectures and tests. There is general adoption of the Globus layered architecture and basic services. New middleware tools are starting to appear and to be used, but there are some parallel developments, so strong coordination will be needed and the next iteration of Grid middleware development should be planned carefully. There is in general good collaboration between the existing EU and US grid projects, and this must continue. Experiments are getting on top of grid activities. And finally, the grid infancy phase begun at the Padova CHEP has now ended at CHEP Beijing, as evidenced by the recent creation of the LHC Computing Grid project.

Software Methodologies and Tools

Gabriele Cosmo, IT/API

In the track for "Software Methodologies and Tools", the need to apply Software Process techniques according to well-established, standard procedures was expressed. Software Process Improvement must be part of this and must be life-cycle driven.

Concerning OO programming, contributors stressed the importance of using loosely coupled components and maximising re-use in the development of applications. In this context, the concepts of 'collaborating frameworks' and 'abstract interfaces' [1] play a central role. Integration of software components that comply with this rule is easy to achieve, while at the same time optimal flexibility and maintainability of the software are assured.
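
As an illustration of the abstract-interface idea (the class names below are invented for the example and are not taken from AIDA or any particular framework), client code is written against a pure abstract class, so any concrete component implementing it can be substituted without touching the client:

  // Sketch of the 'abstract interface' idea: the client depends only on the
  // pure abstract class, so one histogramming component can replace another
  // without the client knowing about its internals.
  #include <iostream>
  #include <memory>

  class IHistogram {                            // the abstract interface
  public:
      virtual ~IHistogram() = default;
      virtual void   fill(double value) = 0;
      virtual double entries() const    = 0;
  };

  class SimpleHistogram : public IHistogram {   // one possible implementation
  public:
      void   fill(double) override { ++count_; }
      double entries() const override { return count_; }
  private:
      double count_ = 0;
  };

  // Client code: depends only on IHistogram, not on any concrete class.
  void analyse(IHistogram& h) {
      for (int i = 0; i < 1000; ++i) h.fill(i * 0.1);
  }

  int main() {
      std::unique_ptr<IHistogram> h = std::make_unique<SimpleHistogram>();
      analyse(*h);
      std::cout << h->entries() << " entries" << std::endl;
      return 0;
  }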

The importance of considering Software Quality and applying it in the normal development process was touched upon. An interesting tool (Ignominy) for quantifying modularity through software metrics was presented; the tool is currently used in the context of the CMS software [2].

For Software Management, the features of the CMT (Configuration Management Tool) [3] and SCRAM (Software Configuration, Release And Management) [4] packages were presented and contrasted. Site-specific configuration issues for cross-laboratory distribution of the software remain an area of active work.

The Geant4 Toolkit [5] is becoming the standard for "Simulation" in HEP. Almost three years after its first public release, the software is now used in production by several experiments (the ATLAS test beam, BaBar, HARP), and others intend to follow in the next year. Comparison studies with experimental data were presented, with positive results. A variety of applications outside HENP are making use of it, ranging from medical applications to astrophysics studies, thanks to the low-energy extensions of the electromagnetic interactions.

In the track for "Analysis Tools", many improvements and new features were reported in each of the tools presented at the conference: Anaphe/Lizard [6], IGUANA Interactive Analysis [2], Java Analysis Studio (JAS) [7] and ROOT [8]. The Abstract Interfaces for Data Analysis (AIDA) [1] are putting in a strong bid to become the HEP standard for defining the interfaces in the data analysis domain. Among HEP graphics toolkits, Qt for graphical user interfaces and OpenGL/OpenInventor for low-level/high-level graphics are widely used. Python as a scripting language is showing increasing popularity among developers.

[1] http://aida.freehep.org
[2] http://cern.ch/iguana
[3] http://www.lal.in2p3.fr/SI/CMT/CMT.htm
[4] http://cmsdoc.cern.ch/cgi-cmc/scrampage
[5] http://cern.ch/geant4
[6] http://cern.ch/anaphe
[7] http://www-sldnt.slac.stanford.edu/jas
[8] http://root.cern.ch

 

Controls Track Summary

David Myers, IT/CO

Controls seems to attract much more interest from the accelerator community than from experimental physicists, and the accelerator controls people tend to go to the ICALEPCS [1] conference series rather than to CHEP. Thus, at CHEP, controls seems to be rather a minority interest, as is evident from the modest number of contributions. I suspect that this may be due to compartmentalisation into "Slow Control", "DAQ Control", "Run Control", and so forth, rather than a recognition that there is a great deal of overlap between these areas, which I believe could to a large extent use many similar tools and techniques.

There were eleven papers and posters contributed at this conference, plus a plenary talk on the LHC Joint Controls Project, JCOP, and a live demonstration. Of the contributed papers, several discussed interlock and safety systems, one of them supervised with LabVIEW; one paper dealt with the supervision of calibration; one with the supervision of a computer farm; and two discussed applications developed with EPICS. Another contribution mentioned CDEV, a technology originally developed at Jefferson Lab in order to unify different control systems with EPICS. Five papers mentioned systems developed using commercial SCADA tools, and a paper from ATLAS discussed how to link purpose-built Run Control with a SCADA system. Finally, Clara Gaspar from LHCb bravely gave a live demonstration of 'Partitioning, Automation and Error Recovery in an LHC Experiment' using the SCADA system chosen by the LHC Joint Controls Project.

The topics thus covered a wide area, from classical hardware monitoring and supervision to interlock systems and the supervision of computing farms. As in all areas of HEP, controls software has historically been home-made. However, an unmistakable trend is now towards the use of Supervisory Control And Data Acquisition (SCADA) systems. One of the first of these used in HEP, EPICS, was originally developed at Los Alamos. Just as we no longer write our own operating systems and compilers (which was still the case when I arrived at CERN), the use of home-made tools for building controls systems is slowly being overtaken by the use of industrial software. The crucial developments which permit this are the features of the latest generations of SCADA software, which provide scalability to very large distributed systems and support for object-oriented, or device-oriented, development.

This movement towards industrial software tools is, perhaps, the major conclusion for the future: rather than putting our effort into building purpose-designed tools, we should use tools we can buy and put our effort into solving our controls problems. The cost of implementing and supporting a purpose-built tool is no longer commensurate with its added value. Although not a subject of the conference, it is also worth mentioning that in some areas, such as cryogenics, power distribution and safety, complete systems are now being outsourced to industry.

Finally, although I must admit to a possible bias, I want to mention the progress reported by Wayne Salter on the LHC Experiments' Joint Controls Project. Not only does this show that commercial components can be used in many HEP applications, but also that it is possible for experiments to reduce their development and maintenance costs by working together.


[1] International Conference on Accelerator and Large Experimental Physics Control Systems



For matters related to this article please contact the author.
Cnl.Editor@cern.ch


CERN-CNL-2001-003
Vol. XXXVI, issue no 3

