v. 1.0.0-1

Generic Installation & Configuration for EMI 2

This document is addressed to Site Administrators responsible for middleware installation and configuration. It is a generic guide to manual installation and configuration for EMI products.

The list of supported products and services can be found on the EMI 2 web pages.

When installing a particular product, please also have a look at its release page for product-specific installation & configuration information.

Installing the Operating System

All EMI 2 components are fully supported on the SL5/x86_64 & SL6/x86_64 platforms with EPEL as repository for external components.

Full platform support means the component is distributed from the EMI repository using certified source and binary packages according to the format specification of the platform. A subset of services is also available for Debian 6 64bit.

Scientific Linux 5 & 6

For more information on Scientific Linux please check: http://www.scientificlinux.org

All the information to install this operating system can be found at https://www.scientificlinux.org/download

Example of sl5.repo file:

[core]
name=SL 5 base
baseurl=http://linuxsoft.cern.ch/scientific/5x/$basearch/SL
        http://ftp.scientificlinux.org/linux/scientific/5x/$basearch/SL
        http://ftp1.scientificlinux.org/linux/scientific/5x/$basearch/SL
        http://ftp2.scientificlinux.org/linux/scientific/5x/$basearch/SL
protect=0

Example of sl6.repo file:

[core]
name=SL 6 base
baseurl=http://linuxsoft.cern.ch/scientific/6x/$basearch/SL
        http://ftp.scientificlinux.org/linux/scientific/6x/$basearch/SL
protect=0

Debian 6

For more information on Debian please check http://www.debian.org/.

All the information to install this operating system can be found at http://www.debian.org/releases/stable/installmanual

Example of deb.list file:

deb http://ftp.it.debian.org/debian/ squeeze main contrib non-free
deb-src http://ftp.it.debian.org/debian/ squeeze main contrib non-free

deb http://security.debian.org/ squeeze/updates main contrib
deb-src http://security.debian.org/ squeeze/updates main contrib

Node synchronization, NTP installation and configuration

A general requirement is that the nodes are synchronized. This requirement may be fulfilled in several ways. If your nodes run under AFS they are most likely already synchronized. Otherwise, you can use the NTP protocol with a time server.

Instructions and examples for a NTP client configuration are provided in this section. If you are not planning to use a time server on your machine you can just skip this section.

Use the latest ntp version available for your system. If you are using APT, apt-get install ntp will do the job.
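On SL5/SL6, for instance, the equivalent with YUM is:

      # yum install ntp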

  • Configure the file /etc/ntp.conf by adding the lines dealing with your time server configuration such as, for instance:
       restrict <time_server_IP_address> mask 255.255.255.255 nomodify notrap noquery
       server <time_server_name>
Additional time servers can be added for better results. For each time server you use, the hostname and IP address are required; add a couple of lines similar to the ones shown above to the file /etc/ntp.conf for each of them.
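For example, a minimal /etc/ntp.conf fragment using the two time servers from the step-tickers example below (the server names here are only placeholders, use your own):

       restrict 137.138.16.69 mask 255.255.255.255 nomodify notrap noquery
       server time1.example.org
       restrict 137.138.17.69 mask 255.255.255.255 nomodify notrap noquery
       server time2.example.org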

  • Edit the file /etc/ntp/step-tickers adding a list of your time server(s) hostname(s), as in the following example:
      137.138.16.69
      137.138.17.69

  • If you are running a kernel firewall, you will have to allow inbound communication on the NTP port. If you are using iptables, you can add the following to /etc/sysconfig/iptables
      -A INPUT -s NTP-serverIP-1 -p udp --dport 123 -j ACCEPT 
      -A INPUT -s NTP-serverIP-2 -p udp --dport 123 -j ACCEPT
Remember that, in the provided examples, rules are parsed in order, so ensure that there are no matching REJECT lines preceding those that you add. You can then reload the firewall:
     # /etc/init.d/iptables restart
  • Activate the ntpd service with the following commands:
      # ntpdate <your ntp server name>
      # service ntpd start
      # chkconfig ntpd on
  • You can check ntpd's status by running the following command
      # ntpq -p

Cron and logrotate

Many middleware components rely on the presence of cron (including support for /etc/cron.* directories) and logrotate. You should make sure these utilities are available on your system.
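For example, on SL5/SL6 you can quickly check their presence with rpm (the cron daemon is provided by the vixie-cron package on SL5 and by cronie on SL6; install whatever is missing with yum):

      # rpm -q logrotate
      # ls /etc/cron.d /etc/cron.daily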

Host Certificates

All nodes except UI, WN and BDII require the host certificate/key files to be installed. Contact your Certification Authority (CA) to understand how to obtain a host certificate if you do not have one already.

Once you have obtained a valid certificate:

  • hostcert.pem - containing the machine public key
  • hostkey.pem - containing the machine private key
make sure to place the two files into the /etc/grid-security directory on the target node, and check that hostkey.pem is readable only by root and that the public key, hostcert.pem, is readable by everybody.
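For example, assuming the two files have already been copied to the node, the ownership and permissions can be set as follows:

      # chown root:root /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem
      # chmod 644 /etc/grid-security/hostcert.pem
      # chmod 400 /etc/grid-security/hostkey.pem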

Installing the Middleware

For SL5 & SL6 the YUM package manager is considered to be the default installation tool. For Debian, it is APT.

Repositories

For a successful installation, you will need to configure your package manager to reference a number of repositories (in addition to those of your OS):

The Certification Authority repository

All the details on how to install the CAs can be found on the EGI IGTF release pages (https://wiki.egi.eu/wiki/EGI_IGTF_Release). They contain information about how to configure the YUM & APT package managers for downloading and installing the trust anchors ("Certification Authorities" or "CAs") that all sites should install.

NOTE: the BDII site and top services do not need, for the moment, the installation of the CAs.
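As a sketch, on SL5/SL6 the trust anchors are typically installed by adding the EGI-trustanchors YUM repository (take the exact repo file URL from the EGI IGTF release page linked above; the one used here is only illustrative) and then installing the ca-policy-egi-core meta-package:

      # wget -O /etc/yum.repos.d/EGI-trustanchors.repo http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo
      # yum install ca-policy-egi-core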

The EPEL repository

If not present by default on your nodes, you should enable the EPEL repository (https://fedoraproject.org/wiki/EPEL)

EPEL provides an 'epel-release' package that includes the gpg keys used for package signing and the repository information. Installing the latest version of the epel-release package available in the EPEL5 or EPEL6 repositories (see the sketch below) should allow you to use normal tools, such as yum, to install packages and their dependencies. By default the stable EPEL repository is enabled.
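For example, on SL5 or SL6 the epel-release package can be installed directly from a Fedora mirror; the package versions and mirror paths below are only placeholders, check the EPEL page for the current ones:

      # rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm

or

      # rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm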

Example of epel.repo file:

[extras]
name=epel
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=$basearch
protect=0

or

[extras]
name=epel
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-6&arch=$basearch
protect=0

The middleware (EMI) repositories

All EMI products are distributed from a single repository (http://emisoft.web.cern.ch/emisoft) having the following structure:

  • EMI-production (stable), EMI/{1,2,3}:
    • stable and signed, well-tested software components, recommended to be installed on production sites;
  • deployment/{1,2,3}:
    • signed packages that will become part of the next stable distribution; they have passed the certification and validation phase and are available as technical previews;
  • testing/{1,2,3}:
    • unsigned packages that will become part of the next stable distribution; they are still in the certification stage and are available as technical previews.

The packages are signed with the EMI gpg key, which can be downloaded from http://emisoft.web.cern.ch/emisoft/dist/EMI/2/RPM-GPG-KEY-emi. Please import the key BEFORE starting!

The fingerprint of the key is:

pub   1024D/DF9E12EF 2011-05-04
      Key fingerprint = AC82 01B1 DD50 6F4D 649E  DFFC 27B3 331E DF9E 12EF
uid                  Doina Cristina Aiftimiei (EMI Release Manager) <aiftim@pd.infn.it>
sub   2048g/C1E57858 2011-05-04
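
If you want to verify the fingerprint before importing, you can for instance download the key file and inspect it with gpg:

      # wget -q http://emisoft.web.cern.ch/emisoft/dist/EMI/2/RPM-GPG-KEY-emi
      # gpg --with-fingerprint RPM-GPG-KEY-emi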

  • for SL5/SL6 save the key under /etc/pki/rpm-gpg/
# rpm --import http://emisoft.web.cern.ch/emisoft/dist/EMI/2/RPM-GPG-KEY-emi

  • for Debian:
# wget -q   -O - http://emisoft.web.cern.ch/emisoft/dist/EMI/2/RPM-GPG-KEY-emi | sudo apt-key add -

Giving EMI repositories precedence over EPEL

It is strongly recommended that EMI repositories take precedence over EPEL when installing and upgrading packages.

For manual configuration:

  • you must install the yum-priorities plugin and ensure that its configuration file, /etc/yum/pluginconf.d/priorities.conf, is as follows:
[main]
enabled = 1
check_obsoletes = 1
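
With the plugin enabled, precedence is controlled by the priority= value of each repository (the lower the value, the higher the priority; repositories without an explicit value default to 99). A minimal sketch of an EMI 2 repository definition taking precedence over EPEL, with an illustrative file layout and baseurl (the emi-release package described below sets this up for you):

[EMI-2-base]
name=EMI 2 Base Repository
baseurl=http://emisoft.web.cern.ch/emisoft/dist/EMI/2/sl6/x86_64/base
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-emi
protect=1
priority=40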

For automatic configuration:

  • we strongly recommend the use of the emi-release package. Please follow the instructions given below on which version of the package to use, how to get it, and how to install it according to your deployment scenario (upgrade or fresh installation).

Configuring the use of EMI 2 repositories

(*) - please add the option "--nogpgcheck" if you did not import the key first.

These packages will install the required dependencies and the EMI public key, and ensure the precedence of the EMI repositories over EPEL and Debian.
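For instance, on SL6/x86_64 this typically amounts to downloading the emi-release package from the EMI 2 repository and installing it locally; the exact package file name and path below are only illustrative, take the real ones from the EMI 2 release pages:

      # wget http://emisoft.web.cern.ch/emisoft/dist/EMI/2/sl6/x86_64/base/emi-release-2.0.0-1.sl6.noarch.rpm
      # yum localinstall emi-release-2.0.0-1.sl6.noarch.rpm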

Important note on automatic updates

Several sites use an automatic update mechanism. Sometimes middleware updates require non-trivial configuration changes or a reconfiguration of the service. This could involve service restarts, new configuration files, etc., which makes it difficult to ensure that automatic updates will not break a service. Thus

WE STRONGLY RECOMMEND NOT TO USE AUTOMATIC UPDATE PROCEDURE OF ANY KIND

on the EMI middleware repositories (you can keep it turned on for the OS). You should read the update information provided by each service and do the upgrade manually when an update has been released!
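On Scientific Linux, for example, the system-wide yum-autoupdate service can simply be switched off:

      # service yum-autoupdate stop
      # chkconfig yum-autoupdate off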

Installations

You need to have only the repositories described above enabled (Operating System, EPEL, Certification Authority, EMI).

Example of a general installation of a product / service:

  • SL5/SL6:
# yum update
# yum install ca-policy-egi-core
# yum install <meta-package/package name>

  • Debian6:
# apt-get update
# apt-get install ca-policy-egi-core
# apt-get install <meta-package/package name>

NOTE: it has happened that on operating systems other than SL5/x86_64, for example CentOS, the JDK (Sun JDK) package had to be installed first for certain node types. Please refer to your Operating System documentation to learn how to do this.

The table below lists the available EMI meta-packages and packages:

| Node Type / Product Name | meta-package name (SL5/SL6) | meta-package name (Debian) | Comments |
| AMGA_postgresql | emi-amga-postgresql | - | |
| APEL publisher | emi-apel | - | |
| ARC-CE | nordugrid-arc-compute-element | - | |
| ARC core | nordugrid-arc, nordugrid-arc-doc, nordugrid-arc-ca-utils, nordugrid-arc-debuginfo, nordugrid-arc-devel, nordugrid-arc-doxygen, nordugrid-arc-hed, nordugrid-arc-java, nordugrid-arc-python, nordugrid-arc-python26, nordugrid-arc-plugins-needed, nordugrid-arc-plugins-globus | - | |
| ARC Clients | nordugrid-arc-client-tools | - | |
| ARC gridftp | nordugrid-arc-gridftpd | - | |
| ARC InfoSys | nordugrid-arc-information-index | - | |
| ARGUS | emi-argus | - | |
| BDII_site | emi-bdii-site | - | |
| BDII_top | emi-bdii-top | - | |
| CANL | canl-c, canl-c-debuginfo, canl-c-devel, canl-c-examples, canl-java, canl-java-javadoc | canl-c-dbg, libcanl-c-dev, libcanl-c-examples, libcanl-c1, libcanl-java, libcanl-java-doc | Common AuthenticatioN Library - set of libraries |
| CLUSTER | emi-cluster | - | |
| CREAM | emi-cream-ce | - | |
| CREAM LSF module | emi-lsf-utils | - | |
| CREAM TORQUE module | emi-torque-utils | - | |
| dCache | dcache-server | - | |
| DPM mysql | emi-dpm_mysql | - | |
| DPM disk | emi-dpm_disk | - | |
| EMIR | server: emi-emir, client: emird | - | |
| FTS oracle | emi-fts_oracle, emi-fta_oracle | - | |
| GLEXEC_wn | emi-glexec_wn | - | |
| LB | emi-lb | - | |
| LFC mysql | emi-lfc_mysql | - | |
| LFC oracle | emi-lfc_oracle | - | |
| MPI_utils | emi-mpi | - | |
| Nagios | emi-nagios | - | |
| Pseudonimity | pseudonymity-server, pseudonymity-ui | - | |
| PX (MyProxy) | emi-px | - | |
| STORM_backend | emi-storm-backend-mp | - | |
| STORM_frontend | emi-storm-frontend-mp | - | |
| STORM_checksum | emi-storm-checksum-mp | - | |
| STORM_gridhttps | emi-storm-gridhttps-mp | - | |
| STORM_globus_gridftp | emi-storm-globus-gridftp-mp | - | |
| STORM_srm_client | emi-storm-srm-client-mp | - | |
| TORQUE WN config | emi-torque-client | - | |
| TORQUE server config | emi-torque-server | - | |
| User Interface | emi-ui | - | |
| UNICORE/X | unicore-unicorex6 | - | |
| UNICORE-UCC6 | unicore-ucc6 | - | |
| UNICORE Gateway6 | unicore-gateway6 | - | |
| UNICORE-HILA | unicore-hila-emi-es, unicore-hila-gridftp, unicore-hila-shell, unicore-hila-unicore6 | - | |
| UNICORE Registry6 | unicore-registry6 | - | |
| UNICORE TSI6 | unicore-tsi6 | - | |
| UNICORE XUUDB | unicore-xuudb | - | |
| UNICORE UVOS | unicore-uvos-clc, unicore-uvos-server, unicore-uvos-webapp, unicore-uvos-webauth | - | |
| VOMS_mysql | emi-voms-mysql | - | |
| VOMS_oracle | emi-voms-oracle | - | |
| WMS | emi-wms | - | |
| WNODES | wnodes_bait, wnodes_hypervisor, wnodes_manager, wnodes_nameserver, wnodes_site_specific, wnodes_utils | - | |
| Worker Node | emi-wn | - | |

Configuring the Middleware

Using the YAIM configuration tool

Some EMI services can be configured using the YAIM tool. For a detailed description of how to configure the middleware with YAIM, please check the individual product/service guides and the YAIM Guide.

The YAIM-modules needed to configure a certain service/product are automatically installed with the middleware.
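
As a generic sketch, once the module is installed a service is configured by invoking yaim with your site configuration file and the corresponding node type (the concrete configuration targets are listed in the table below):

      # /opt/glite/yaim/bin/yaim -c -s site-info.def -n <node_type>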

However, if you want to install YAIM packages separately, you can do so by running yum install glite-yaim-<node-type>. This will automatically install the YAIM module you are interested in, together with yaim-core, which contains the core functions and utilities used by all the YAIM modules.

Configuration information

The table below lists the configuration instructions for some of the EMI services:

| Node Type / Service | Comments |
| AMGA_postgresql | yaim configuration target "AMGA_postgresql"; https://twiki.cern.ch/twiki/pub/EMI/AMGA/amga-manual_2_3_0.pdf |
| APEL publisher | yaim configuration target "APEL"; use https://twiki.cern.ch/twiki/pub/EMI/APELClient/Publisher_System_Administrator_Guide_v1.0.0.pdf |
| ARC-CE | http://www.nordugrid.org/documents/arc-server-install.html; http://www.nordugrid.org/documents/arex_tech_doc.pdf |
| ARC Clients | arc* tools; ARC Client Configuration, section "Configuration" |
| ARC InfoSys | http://www.nordugrid.org/documents/arc_infosys.pdf |
| ARGUS | yaim config target "ARGUS_server"; https://twiki.cern.ch/twiki/bin/view/EGEE/ArgusEMIDeployment |
| BDII_site | yaim config target "BDII_site"; use yaim |
| BDII_top | yaim config target "BDII_top"; use yaim |
| CLUSTER | CLUSTER config |
| CREAM | yaim config target "creamCE"; CREAM Configuration |
| CREAM LSF module | yaim config target "LSF_utils"; use yaim |
| DPM mysql | yaim config target "emi_dpm_mysql"; use yaim; specific HEAD_node configuration |
| DPM disk | yaim config target "emi_dpm_disk"; use yaim; specific DISK_node configuration |
| FTS oracle | yaim config targets "emi_fts2", "emi_fta2", "emi_ftm2"; Full YAIM reference for FTS 2.2.6 |
| GLEXEC_wn | yaim config target "GLEXEC_wn"; use yaim; GLEXEC_wn should always be installed together with a WN |
| LB | yaim config target "LB"; use yaim; more info |
| LFC mysql | yaim config target "emi_lfc_mysql"; use yaim; specific configuration |
| LFC oracle | yaim config target "emi_lfc_oracle"; use yaim; specific configuration |
| MPI_utils | for CE configuration see http://grid.ifca.es/wiki/Middleware/MpiStart/MpiUtils#CE_Configuration; for WN configuration see http://grid.ifca.es/wiki/Middleware/MpiStart/MpiUtils#WN_Configuration |
| PX (MyProxy) | yaim config target "PX"; use yaim |
| STORM_backend | yaim config target "SE_storm_backend"; use yaim |
| STORM_frontend | yaim config target "SE_storm_frontend"; use yaim |
| STORM_checksum | yaim config target "SE_storm_checksum"; use yaim |
| STORM_gridhttps | yaim config target "SE_storm_gridhttps"; use yaim |
| STORM_globus_gridftp | yaim config target "SE_storm_globus_gridftp"; use yaim |
| STORM_srm_client | |
| TORQUE WN config | yaim config target "TORQUE_client"; use yaim |
| TORQUE server config | yaim config target "TORQUE_server"; use yaim |
| CREAM TORQUE module | yaim config target "TORQUE_utils"; use yaim |
| UI | yaim config target "UI"; see details below |
| UNICORE/X | |
| UNICORE-UCC | |
| UNICORE Gateway | |
| UNICORE-HILA | |
| UNICORE Registry | |
| UNICORE TSI | |
| UNICORE XUUDB | |
| UNICORE UVOS | |
| VOMS_mysql | yaim config target "VOMS_mysql"; use yaim; more information |
| VOMS_oracle | yaim config target "VOMS_oracle"; use yaim; more information |
| WMS | yaim config target "WMS"; use yaim; more details on WMS config file |
| WN | yaim config target "WN"; see details below for configuring it for different batch systems |

The LSF batch system

You have to make sure that the necessary packages for submitting jobs to your LSF batch system are installed on your CE. By default, the packages come as tarballs. At CERN they are converted into RPMs so that they can be automatically rolled out and installed in a clean way (in this case using Quattor).

Since LSF is commercial software, it is not distributed together with the gLite middleware. Visit the Platform LSF home page for further information. You will also need to buy an appropriate number of license keys before you can use the product.

The documentation for LSF is available on the Platform Manuals web page. You have to register in order to be able to access it.

The CREAM for LSF

The WN for LSF

Apart from the LSF-specific configuration settings there is nothing special to do on the worker nodes. After installing:

# yum install emi-wn

just use the plain WN configuration target:

# /opt/glite/yaim/bin/yaim -c -s site-info.def -n WN

Note on site-BDII for LSF

When you configure your site-BDII you have to populate the [vomap] section of the /etc/lcg-info-dynamic-scheduler.conf file yourself. This is because LSF's internal group mapping is hard to figure out from yaim, so to be on the safe side the site admin has to crosscheck it. Yaim configures lcg-info-dynamic-scheduler to use the LSF info provider plugin, which comes with meaningful default values; if you would like to change them, edit the /etc/glite-info-dynamic-lsf.conf file. After the YAIM configuration you have to list the LSF group - VOMS FQAN mappings in the [vomap] section of the /etc/lcg-info-dynamic-scheduler.conf file.

As an example, here is an extract from CERN's config file:

vomap :
   grid_ATLAS:atlas
   grid_ATLASSGM:/atlas/Role=lcgadmin
   grid_ATLASPRD:/atlas/Role=production
   grid_ALICE:alice
   grid_ALICESGM:/alice/Role=lcgadmin
   grid_ALICEPRD:/alice/Role=production
   grid_CMS:cms
   grid_CMSSGM:/cms/Role=lcgadmin
   grid_CMSPRD:/cms/Role=production
   grid_LHCB:lhcb
   grid_LHCBSGM:/lhcb/Role=lcgadmin
   grid_LHCBPRD:/lhcb/Role=production
   grid_GEAR:gear
   grid_GEARSGM:/gear/Role=lcgadmin
   grid_GEANT4:geant4
   grid_GEANT4SGM:/geant4/Role=lcgadmin
   grid_UNOSAT:unosat
   grid_UNOSAT:/unosat/Role=lcgadmin
   grid_SIXT:sixt
   grid_SIXTSGM:/sixt/Role=lcgadmin
   grid_EELA:eela
   grid_EELASGM:/eela/Role=lcgadmin
   grid_DTEAM:dteam
   grid_DTEAMSGM:/dteam/Role=lcgadmin
   grid_DTEAMPRD:/dteam/Role=production
   grid_OPS:ops
   grid_OPSSGM:/ops/Role=lcgadmin
module_search_path : ../lrms:../ett

The Torque/PBS batch system

TORQUE Server

  • if you want to have a dedicated node for the TORQUE server:
# yum install emi-torque-server emi-torque-utils
# /opt/glite/yaim/bin/yaim -c -s site-info.def -n TORQUE_server -n TORQUE_utils
  • if you want to install and configure the TORQUE server on the same node as the CREAM Computing Element:
# yum install emi-cream-ce emi-torque-server emi-torque-utils
# /opt/glite/yaim/bin/yaim -c -s site-info.def -n creamCE -n TORQUE_server -n TORQUE_utils

For more details see the "CREAM System Administrator Guide": http://wiki.italiangrid.org/twiki/bin/view/CREAM/SystemAdministratorGuideForEMI2

The WN for Torque/PBS

# yum install emi-wn emi-torque-client
# /opt/glite/yaim/bin/yaim -c -s site-info.def -n WN -n TORQUE_client

The UI

# yum install emi-ui
# /opt/glite/yaim/bin/yaim -c -s site-info.def -n UI

-- DoinaCristinaAiftimiei - 03-May-2012
