TWiki > ItCommTeam Web > ComputerCentreVisit (2006-08-10, AndrasHorvath)
---+ Computer Centre visit

Note: ask for *Computer Centre* access in addition to access to B.513 in EDH to be able to enter the room! Two groups (one downstairs, one upstairs) maximum. Please read through all of this page.

---++ Key messages

   * don't drink the tap water
   * don't touch any computers, cables or switches
   * visit http://gridcafe.org

---++ Floor plan

Insert Picture Here

---++ Stops

---+++ Openlab

This is the [[http://cern.ch/openlab][Openlab]] area with their equipment.

---+++ CiXP

The CERN Internet Exchange Point is famous for its history, and for our [[http://lsr.internet2.edu][Internet2 Land Speed Records]].

---+++ Batch machines

Some 3500 batch nodes in total, NOT a supercomputer. Dual-CPU, mostly Intel; recently we allow AMD as well. PCs are cheaper than mainframes/supercomputers, and the physics events, being independent of each other, are easily distributed among independent CPUs. Single disk, no redundancy, 1-2 GiB RAM. Most of the data processing will be done on the [[http://gridcafe.org][Grid]].

---+++ Elevator

Note the wooden floor and the doors, which tend to get stuck when the elevator is full of visitors. If that happens, take the stairs instead.

---+++ Tape storage

Check the A4 papers Charles stuck on the tape robots; they contain useful information. Preferably check them in advance. Tape is NOT backup (that is only a very small part of its use) but the primary data storage for the LHC. Each tape drive is connected to a 'tape server' (a redundant PC running Linux) via Fibre Channel. There are more similar robots in b.613 (which cannot be visited). There is no 'backup' of the physics data, but each experiment will have its own copies, as will the Tier1 centres.

---+++ Disk storage (disk servers)

These are also PCs, running Linux. Disk servers are used as a buffer in front of the tapes (both for reading and writing). The file access process (copying from tape to disk, or archiving from disk to tape) is transparent to the users, using software from [[http://cern.ch/castor][CASTOR]].
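The transparent staging described above can be sketched as a simple read-through cache: reads go to the disk buffer, and a miss first triggers a recall from tape. This is a minimal illustration only; the names and data structures are hypothetical and this is not the real CASTOR interface.

```python
# Minimal sketch of transparent tape-to-disk staging.
# All names here are hypothetical stand-ins, not the CASTOR API.

disk_buffer = {}  # path -> bytes: the disk-server cache
tape_archive = {"/castor/run123.dat": b"event data"}  # stand-in for the tape robots

def stage_in(path):
    """Copy a file from tape to the disk buffer (a 'recall')."""
    disk_buffer[path] = tape_archive[path]

def read(path):
    """User-visible read: staging happens behind the scenes."""
    if path not in disk_buffer:  # cache miss -> recall from tape first
        stage_in(path)
    return disk_buffer[path]

data = read("/castor/run123.dat")  # first read triggers a tape recall
data = read("/castor/run123.dat")  # second read is served from the disk buffer
```

From the user's point of view both reads look identical; only the latency differs, which is exactly why a disk buffer sized well below the tape archive still works.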
These are redundant machines (ECC RAM, RAID disks, n+1 power supplies).

---++ Other (or FAQ)

   * The upstairs room ("Computer centre") is 1450 sq.m; the downstairs room (called the "Vault", because it used to be a tape vault) is 1200 sq.m.
   * Cooling capacities are 2MW and 500kW respectively, and this is what limits the capacity we can put in there (it is also the maximum amount of power the machines may consume!).
   * There are two electrical inputs (one Swiss, one French); we automatically switch from one to the other in case of a failure. 100% of the surface of the computer centre is covered by UPS capacity, but only the critical areas (the strip at the back upstairs, and at the far right downstairs as you enter from the lift) are backed by diesel generators. So if the Swiss/French autotransfer mechanism fails, physics services die in 10 minutes at most.
   * At present we have one UPS system for physics with 4 400kVA modules, so 1,200kVA usable given the N+1 redundancy configuration. We will be installing more to get to 3.6MVA. In addition, we have 2 300kVA units to support critical services; again, this is an N+1 redundancy configuration, so the usable capacity is 300kVA.
   * Cooling is an issue when running from UPS; only part of the cooling can run from UPS.
   * There is no automatic fire extinguisher mechanism; however, there is fire detection and we have manual extinguishers. The room is simply too big for the former to be effective (we're on a budget).
   * Most machines are running Linux :) except for the mail and web services, which use Windows, and some special services using Solaris or OpenBSD. Some desktops (more than half, but not all) also run Windows, the rest Linux; that would be [[http://cern.ch/scientific][Scientific Linux]], a recompiled version of Red Hat with some additions, if anyone is interested.

-- Main.AndrasHorvath - 10 Aug 2006
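The UPS figures in the FAQ follow from simple N+1 arithmetic: with N+1 identical modules installed, one is held as a spare, so only N count toward usable capacity. A small sketch of that calculation (the function name is illustrative, not from any CERN tooling):

```python
def usable_kva(modules, kva_per_module):
    """Usable capacity of an N+1 redundant UPS: one module is the spare."""
    return (modules - 1) * kva_per_module

# Physics UPS: 4 modules of 400 kVA, one spare -> 1200 kVA usable
print(usable_kva(4, 400))  # 1200
# Critical-services UPS: 2 modules of 300 kVA, one spare -> 300 kVA usable
print(usable_kva(2, 300))  # 300
```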