


Once Upon a Time... The Mainframe Era

Eric McIntosh, IT/API


Abstract

A light-hearted look back at the mainframe era.


I came to CERN in 1964 as an employee of CDC, more or less packed in a box with the CDC 6600. This was the Serial Number 3 machine, literally the third system to be manufactured. Serial Number 1 was installed at the Livermore Laboratory and Serial Number 2 was used for software development until it caught fire.

The 6600 was intended to replace the IBM 7090 and its 1401 front-end system. There was no mass storage at that time, no disks or drums, just magnetic tapes, cards and paper tape. Jobs and programs were on punched cards, which were copied to magnetic tape on the 1401. The tape was read by the 7090 and the output was also written to tape for subsequent transfer to the 1401 for printing. With the long queues and this lengthy procedure, desk checking was worthwhile. I personally found the punched card system a big improvement over the paper tape system I had been using in the UK; punched cards were considered too expensive there, as were flexowriters, so I was a real expert at decoding the punched holes on the paper tapes.

In order to facilitate the migration from IBM to CDC, Drs Erskine and Lipps implemented a 7090 simulator for the CDC 6600. Since there was no CDC 6600 available, we tried to test it on a CDC 1604 system in Zurich which had a 6600 simulator. The project was abandoned after we found that it took about an hour to simulate one IBM instruction in this doubly simulated environment!

The 6600 had no software, but CDC were implementing a new super operating system in California... it was never finished. In fact, the Chippewa Operating System, or COS (named after Chippewa Falls, the home town of Seymour Cray), was what almost everyone finally used. It was first translated from octal into assembler and became the basis of further OS development, leading to the NOS and NOS/BE systems. Every Sunday evening we would test a new version of whatever OS we were using; it consisted of a couple of trays of blue binary cards with a few extra holes punched or covered up as last-minute patches. Every Monday morning we would introduce the new version at 08:00 and withdraw it at 08:30 because of user complaints.

A well-known physicist, later to become a Nobel prize winner and DG of CERN, always seemed to be the first to discover these problems, perhaps because his jobs were somehow always at the front of the queue. The Computer Coordinator had a collection of red-rimmed priority cards, and when users were particularly upset or had some rush work, he would kindly dish out a few of these to the person concerned.

For the next 20 years or so, the mainframe era, machines were expensive: a single-processor system was of the order of 20 million currency units. The power of a single processor increased from about 0.5 of a CERN Unit for the CDC 6600, to 1 Unit for the IBM 370/168, to about 8 Units for the Cray X-MP, and to 20 for the last IBM mainframe at CERN. Today the dual-processor 800 MHz PC we purchase for a few thousand currency units has some 180 CERN Units per processor.

On the other hand, people have now become relatively expensive. At one time DD had over thirty people working on basic operating systems. I spent major amounts of time, as did many of my colleagues in CDC, IBM and CERN, analysing the dead-start dumps (or IBM equivalents) described elsewhere by Julian Blake. Tensions were high, with a 6600 mean time to failure of about two hours in 1965. I remember having to separate a CERN staff member and an outsourced software manager as they were coming to blows over responsibility for the latest crash. My career at CERN almost ended before it started when I wrote a CDC 6600 peripheral processor program for a physicist (who, remarkably, was also awarded a Nobel Prize many years later); this was counter to the DD policy of the time.

Still, much progress was made in terms of functionality and reliability over the years; the advent of disk storage and (semi-)permanent files made everyone's life easier. The introduction of a general interactive service (INTERCOM or Wylbur) made everyone much more productive, although it painfully exposed the users to every system hiccup. Reliability remains a major issue today, even if each PC has a mean time to failure several orders of magnitude better than that of the original mainframes. When clusters of thousands of processors are used by each LHC experiment, applications and systems will need to be designed for fault tolerance in one way or another.

Throughout this era, Fortran was THE programming language of choice, from Fortran IV through Fortran 66 and VM Fortran to Fortran 77. Language standards were a great help, even if CERN's attempt to define a standard CERN Fortran failed; of course, every manufacturer had, and still has, proprietary extensions. It will be interesting to see whether Fortran continues to play a major role, at least for engineering applications, as all CERN-developed physics applications move to C++, Java, or C#.

If I had to summarise a few things I have learned in these very exciting and happy years, I would say the following:

  • There is always an explanation for a problem; it is just becoming too expensive in human resources to identify it and correct it.
  • Every application should be ported to at least one computing platform other than that on which it was developed. All code should also be checked against the relevant standard, and checked by all available tools for finding memory leaks, mismatched procedure references, and array limits being exceeded (a small illustration follows this list). Given the long lifetime of an application, and the rapid evolution of computing technology, both hardware and software, this effort will be well repaid in the long run. (Witness the current difficulties in porting LEP applications to PCs.)
  • Finally, monopolies are bad. Healthy competition is an absolute must in order to meet the LHC computing requirements at minimum cost. Who knows whether a significant amount of LHC computing might not be done on Z-boxes, PlayStation 5s, or whatever other commodity system is installed in every home?
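
By way of a minimal, hypothetical illustration of that second lesson (it is not taken from the article itself), the short Fortran fragment below indexes one element past a declared array bound; the program name and the compiler flag mentioned in its comments are merely examples of the kind of standards and run-time checking tools referred to above.

      PROGRAM CHECKME
*     A hypothetical sketch (not from the article): the loop runs
*     one element past the declared bound of A, the kind of "array
*     limits being exceeded" defect the lesson above warns about.
*     With run-time checking (for example gfortran -fcheck=bounds)
*     the bad store is trapped; without it, it may silently corrupt
*     neighbouring memory.
      REAL A(10)
      INTEGER I
      DO 10 I = 1, 11
         A(I) = REAL(I)
   10 CONTINUE
      PRINT *, 'A(10) =', A(10)
      END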


For matters related to this article please contact the author.
Cnl.Editor@cern.ch


CERN-CNL-2001-001
Vol. XXXVI, issue no 1


Last Updated on Thu Apr 05 15:28:10 CEST 2001.
Copyright © CERN 2001 -- European Organization for Nuclear Research