


30 Years of Computing at CERN - Part 2

Paolo Zanella, Former Division Leader of DD


Abstract

This is the second of a three-part series, drawn from the original (excellent) paper written by Paolo Zanella in 1990. - Miguel Marquina, editor

Note that when the word "present" is used, it refers to "1990".


4. TRANSISTOR MACHINES AND DATA LINKS

The arrival of the 7090 marked the end of the vacuum tube computer. The old Mercury's Autocode service was stopped and the venerable machine was connected directly to a sonic spark chamber experiment at the CERN PS (Missing Mass Spectrometer). It continued to collect data on-line (one event of some 500 bits per burst) for another couple of years. But the second-generation machines were much faster and much more reliable. Minis appeared on the market and CERN bought its first transistorized minicomputer, the SDS 920, in 1964, to be used on-line with another acoustic spark chamber experiment. It had a core store of 4 Kwords (24 bits), paper tape input/output, two magnetic tape units, a printer, a DMA channel, an interrupt system... and an assembler. It was capable of reading 50 24-bit words of data per event and recording 12 events per burst on magnetic tape while monitoring the experimental equipment. When it detected a faulty device it alerted the skeptical physicists on shift. It took some time before the on-line system could establish its credibility.
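
As a back-of-the-envelope illustration (the figures are taken from the text above; the sketch itself is not from the original article), the quoted rates imply a data volume of well under 2 KBytes per burst:

    # Illustrative arithmetic only; all figures quoted from the text above.
    WORD_BITS = 24          # SDS 920 word length in bits
    WORDS_PER_EVENT = 50    # data words read per event
    EVENTS_PER_BURST = 12   # events recorded on tape per burst

    bits_per_event = WORD_BITS * WORDS_PER_EVENT        # 1200 bits
    bits_per_burst = bits_per_event * EVENTS_PER_BURST  # 14400 bits
    print(f"{bits_per_burst} bits (~{bits_per_burst // 8} bytes) per burst")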

The rest of the 60's saw the decline of film and its replacement by digital events acquired directly by computers. The minis spread all over the experimental floor and by the end of the decade one could count some 50 computers at CERN (mainly from Hewlett-Packard, DEC and IBM). Data were recorded on tape reels and rushed to the computer center to obtain quick feedback, using special 'priority cards'. This 'pony express' type of traffic became known as 'bicycle-on-line'. The pioneering connection of the Mercury to an experiment had required the design and construction of a data link about one kilometer in length. This marked the beginning of the new era of data communications at CERN.

From then onwards, systems for the transmission of digital data have been successfully developed at CERN. From FOCUS (a CDC 3100-based facility for remote access to, and interactive use of, the central computers) to OMNET (a network of PDP-11's clustered around a CII 10070, providing on-line data acquisition support to the Omega and SFM spectrometers), from CERNET to our current extremely complex system of networks, CERN has constantly been a pioneer in building and using computer communications. From CERNET and its 2.5 Mbits/sec links we learned an important lesson, namely that one could work from one's office or experiment without moving all the time to the computer center. Files could be sent and jobs remotely executed using the CERNET facilities. Recently, the exploding needs of the HEP community have brought high-speed data communications into the limelight. CERN has participated in advanced projects on satellite communications (e.g. STELLA), on LANs and on fiber optics.

Today, general purpose networks, LANs and WANs, protocols and standards are part of the experimentalists' life. We should not forget, however, that the beginnings were slow and difficult.

The epic discussions which accompanied the birth of CERNET showed a deeply divided community reluctantly agreeing to make the initial investments. It wasn't until the end of the 70's that the strategic importance of data transmission was fully realized by the most advanced users, the rest joining in during the 80's.

5. CENTRALIZATION AND DECENTRALIZATION

The second half of the 60's was also marked by the introduction of a large central system, the CDC 6600, designed by computer pioneer Seymour Cray. New buzzwords were added to the fast-expanding computer jargon, such as multiprogramming, peripheral processors, parallelism, etc. The word length jumped to 60 bits, the number of registers in the CPU became respectable, instructions were fetched ahead, and huge expensive disks appeared, strong enough to be later recycled as tabletops. Most important, we started talking nanoseconds.

The introduction of such a complex system was by no means trivial and CERN, like many other sites, experienced one of the most painful periods of its computing history. The coupling of unstable hardware (we installed 'Serial Number 3', a pre-production series machine) and shaky software resulted in a long, traumatic effort to offer a reliable service. To give an idea of the situation, suffice it to say that CDC had to cancel their SIPROS operating system and CERN had to write a large portion of the system software to be able to use the machine. In the meantime, an emergency service was set up using more conventional CDC machines, such as the 3400, the 3800, and later the 6400.

Eventually the new hardware and software were fully debugged and the 6600 started a long productive career which ended in 1975, 10 years after its installation. But the growing needs of an experimental program increasingly based on electronic detectors were calling for another big quantum jump in capacity. In 1972, CERN installed a CDC 7600, the most powerful machine on the market, some 5 times faster than the 6600. It was front-ended by two big computers from the same manufacturer, namely a 6400 and a 6500, equipped to manage all the I/O traffic (local or coming from remote input/output stations) and to serve the first on-line time-sharing users.

Once more, users and service providers alike had to go through a very difficult running-in period. The system software was again late and inadequate. In the first months the machine had a bad ground-loop problem causing intermittent faults and eventually requiring all modules to be fitted with sheathed rubber bands! History repeated itself painfully, but finally the machinery started crunching an enormous amount of data, and kept doing so for longer than any other central computer at CERN, until it was turned off in 1984 after over 12 years of service.

In spite of their running-in difficulties, the 6600 and the 7600 were indeed remarkable machines. For 20 years (1965-1984) they played a leading role in computing for High-Energy Physics at CERN as well as at many other laboratories. There was simply nothing comparable on the market. No other machine was so advanced and so fast. The speed of the 7600 processor designed in the 1960's (just over 10 Mips) was unbeaten throughout the 70's and the early 80's. Even today's fastest scalar machines in the CERN computer center are only two to three times faster.

The lessons from the early 70's made people reflect on the vulnerability of big, complex systems. The idea of connecting film scanners, let alone experimental devices, to a large central computer was abandoned. The trend towards decentralization and separation of functions was irreversibly started. This was supported by the timely arrival of integrated circuits, microprocessors, and powerful minis, which invaded CERN in the late 70's. This vigorous push towards smaller and cheaper computers continued without interruption during the 80's, bringing the current computer population on site to tens of large machines, hundreds of minis, thousands of personal workstations, and an unknown number of intelligent micro-devices unaccounted for in any census, each capable of storing, moving and processing data much faster than any first-generation computer. This trend has, however, not reduced the need for central services. Large, expensive data handling systems have continued to be operated centrally. Networks, databases and file systems have been managed from the computer center. Last but not least, the computer center has evolved into a computer and communications competence center, where knowledgeable humanware provides a most valuable interface to that soft and hard world of bits and cables.

6. MULTIVENDOR SYSTEMS INTEGRATION

In the meantime a significant trend-setting event took place in the computer center, namely the comeback of a large IBM system, the 370/168, in 1976. This computer was not meant to replace any of the CDC machines but to co-exist with them. At the time it seemed rather risky to manage two large systems from two competing manufacturers, installed side by side in the same room. There were obvious technical and psychological problems which needed imaginative solutions, a different service approach and a new management style. Faced with such challenges CERN found itself leading the way towards the currently prevailing computing environment made of an interconnected set of heterogeneous systems.

The IBM 370/168 brought to the computer center the silicon chip, the 8-bit byte, the hexadecimal number system, virtual memory, cache memory, a robotized mass storage system, a set of modern magnetic tape and disk drives, a 19000 lines per minute laser printer, WYLBUR, and above all it demonstrated that complex computer hardware could work reliably!

It will also be remembered, long after its decommissioning, as 'the CERN unit' of computing capacity, corresponding roughly to 3 Mips (millions of instructions per second) or to 4 DEC Vups (VAX units of performance). It is amazing to think of this big machine as a major component of the computer center, supporting hundreds of users at the end of the 70's, and to compare it with small, modern personal workstations that easily surpass it in power and functionality, while providing a user-friendly interface on top.
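
Taking the equivalences quoted above at face value (1 CERN unit, roughly 3 Mips, roughly 4 DEC Vups), a minimal Python sketch of the conversion; the helper names are purely illustrative and not part of any CERN software:

    # Rough unit conversions based solely on the equivalences quoted above:
    # 1 CERN unit ~ 3 Mips ~ 4 DEC Vups.
    MIPS_PER_CERN_UNIT = 3.0
    VUPS_PER_CERN_UNIT = 4.0

    def cern_units_from_mips(mips: float) -> float:
        """Convert a Mips rating into CERN units."""
        return mips / MIPS_PER_CERN_UNIT

    def vups_from_cern_units(units: float) -> float:
        """Convert CERN units into DEC VAX units of performance."""
        return units * VUPS_PER_CERN_UNIT

    # The 370/168 itself: 3 Mips -> 1.0 CERN unit -> 4.0 Vups.
    print(cern_units_from_mips(3.0))   # 1.0
    print(vups_from_cern_units(1.0))   # 4.0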

At the turn of the decade CERN had become a well-known computing outfit where one could see powerful CDC and IBM systems serving a growing number of time-sharing users, but also talking to a large and fast increasing population of minicomputers connected via CERNET. The stage was set for playing the role of the prestigious customer, and the computer center did not miss the opportunity to become one of those places where all the major computer manufacturers would rather not take the risk of being left out...


Appendix 2 (part 2)

CDC 6400/6500 [1967-1980]

The 6400 was architecturally similar to and compatible with the 6600, but less powerful (40% of a 6600). The twin-processor version of the 6400 was called the 6500, and CERN upgraded its 6400 to a 6500 in 1969 to be used as a backup machine, and later as one of the two front-ends to the 7600 (the other being another 6400). They were eventually replaced by a pair of Cyber 170 machines (720 and 730) at the end of the 70's.

CDC 7600 [1972-1984]

Designed by Seymour Cray, the CDC 7600 was an astonishingly compact and elegant machine. It came with 64 Kwords (60 bits) of small core memory (SCM) with a 275 nsec read/write cycle time, and 512 Kwords of large core memory (LCM) with a 1760 nsec cycle time. Both were still ferrite core memories. The CPU clock cycle was 27.5 nsec. Arithmetic power: a 110 nsec add and a 137.5 nsec multiply. An Instruction Word Stack of 12 60-bit registers contained prefetched instructions, allowing faster execution by avoiding frequent memory references. The capacity of the 7600 was estimated at 3.3 CERN units. The 7600 had 15 Peripheral Processors. It was run under the 7600 Scope 2 operating system, front-ended initially by a 6400 and a 6500, and at the end by a Cyber 170/720 + 730 pair running NOS/BE and sharing 512 Kwords of ECS (Extended Core Storage).
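
As a consistency check (an inference from the figures above, not a statement in the original), the quoted arithmetic times correspond to whole numbers of clock cycles:

    # Checking the 7600 timings quoted above against its 27.5 nsec clock.
    CLOCK_NSEC = 27.5    # CPU clock cycle
    ADD_NSEC = 110.0     # floating add time
    MUL_NSEC = 137.5     # floating multiply time

    print(ADD_NSEC / CLOCK_NSEC)   # 4.0 clock cycles per add
    print(MUL_NSEC / CLOCK_NSEC)   # 5.0 clock cycles per multiply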

Like the 6600, it suffered from teething problems (hardware and software) during the first two to three years. The mean time between failures requiring a deadstart of the combined 7600 + 6400 system was 3 hours in 1973, 7 hours in 1974, and still only 9 hours in 1975, three years after delivery. A more acceptable 16.3 hours MTBF was reached in 1977, and the machine continued to improve its reliability, at the price, however, of considerable effort. The 7600 was operated for 12.5 years. During its exceptionally long life it processed a total of 6 681 378 jobs and delivered to its users 61 321 CPU hours (over 60% of total time)!

IBM 370/168-3 [1976-1982]

The 168 was delivered with 4 MBytes (later expanded to 5) of semiconductor memory, a 16 kBytes cache, 4 channels (later 7) and a high-speed multiply unit. The CPU cycle was 80 nsec. It was a virtual memory machine. Although it became 'the CERN unit' of physics data processing power, its strengths relative to the CDC 7600 varied strongly, being e.g. weaker in floating-point calculations and stronger when large memory was needed. The hardware reliability, including tape (9 tracks, 1600 bpi) and disk units, was a significant improvement on previous systems for HEP computing.

It ran the MVS (Multiple Virtual Storage) operating system with the JES2 Job Entry Subsystem for batch work, and the WYLBUR/MILTEN terminal system for an eventual maximum of 200 concurrent users editing on line-mode terminals. The exceptionally well-designed and friendly user interface of WYLBUR made it a firm favorite with the users, despite its inherent limitations: it offered neither full-screen working nor fully interactive computing.

IBM 3032 [1978-1981]

The 3032 was a re-packaged 370/168, lacking only the high-speed multiply unit. It was delivered with 6 MBytes of main memory, and 6 channels which, in contrast to the 168, were incorporated in the mainframe. Its performance was slightly weaker (by 10%) than the 168 for typical CERN production work, but its larger memory and I/O capacity allowed it to be put to good use as a 'front-end' machine. The system software was the same MVS/JES2/WYLBUR combination as run on the 168.

IBM 3081 [1981-1985]

The 3081-D replaced the 3032 in September 1981. It came with 16 MBytes of main memory, 16 channels and 32 KBytes of cache memory per processor. This was the first IBM 'dyadic' machine, having two 'tightly coupled' processors, which, unlike the previous MP systems, could not be separated into two independent systems. It was also the first IBM machine to use the 'Thermal Conduction Module' (TCM) packaging, also used later in the 3090 family of machines. The CPU was built using TTL chips.

The installation of the 3081 came as a great relief, as the previous 168 + 3032 system was saturated to the point of serious performance degradation. The CERN benchmarks rated each of the processors of this D-model at 1.9 CERN units. The machine was upgraded to a model K in 1982, giving each CPU a power of 2.4 CERN units. The 3081 was subsequently upgraded to 24 MBytes of main memory and 24 channels. The performance was further increased by 10% in 1984. Finally, it was sold in December 1985 and replaced by an IBM 3090-200.

Siemens 7880 [1982-1985]

Made by Fujitsu as the M200, using technology developed together with Amdahl, and sold in Europe by Siemens, this IBM-compatible machine had 12 MBytes of memory and 16 channels. The single CPU had a power of 2.4 CERN units. The 7880 was acquired to replace the IBM 168. It ran the MVS/JES2/WYLBUR system together with the IBM 3081 in a manner transparent to the users.

CDC Cyber 170/875 + 835 [1983-1986]

Acquired to replace the 7600 system, these computers had two CPUs and one CPU respectively, each with 1 Megaword of memory (75 nsec memory cycle). The 875 had a CPU clock cycle time of 25 nsec. They shared disks and tape units and ran under the CDC NOS/BE operating system. Compared to the 7600 + 720 + 730 complex running under Scope 2.0, this system was rated at least twice as powerful, each 875 CPU providing 3.5 CERN units and the 835 adding another 0.6 units. Compared to the IBM and Siemens/Fujitsu machines, the reliability of this system was somewhat disappointing, and in 1984 CDC had to take some corrective measures in order to improve the service. Both machines were decommissioned in October 1986 and their departure marked the end of a 22-year-long collaboration with Control Data.



For matters related to this article please contact the author.
Cnl.Editor@cern.ch


CERN-CNL-2001-003
Vol. XXXVI, issue no 3

