YAIM 3.0.1-x, guide for sysadmins
IMPORTANT NOTE: This guide is not up to date. Please use the YAIM 3.1.1 guide instead, which contains the latest information.
Introduction
This document provides a description of YAIM version 3.0.1-x. Release-dependent information is mentioned in-line. As the structure and packaging of YAIM will change gradually in the near future, some functionality is only partly implemented; consult the "Known issues" section for details.
This document describes the idea and working principles of YAIM and lists the changes with respect to the latest 3.0.0-x version.
For the installation process in general consult the
Generic Installation and Configuration Guide.
Basics
What is YAIM
The aim of YAIM (Yet Another Installation Manager) is to implement a relatively painless configuration method for the LCG and gLite software. In principle, YAIM is no more than a set of bash scripts and functions. YAIM is distributed in rpm form and usually resides in
/opt/glite/yaim
.
In order to configure a site one has to edit one or more configuration files and run some YAIM scripts. Since YAIM is a collection of bash functions, all the
configuration files have to follow bash syntax. For example, no spaces are allowed around the equal sign in a variable assignment.
WRONG :
SITE_NAME = NorthPole
CORRECT:
SITE_NAME=NorthPole
A good syntax test for the
site-info.def
is to source it:
source ./site-info.def
and look for errors. The configuration procedure is described in the following sections.
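The syntax check above can be scripted. A minimal sketch (the demo file name is made up for illustration); note that bash -n only catches parse errors, while sourcing the file in a subshell also reveals runtime mistakes such as spaces around the equal sign, which bash parses as a command invocation rather than an assignment:

```shell
# Write a demo configuration file (illustrative name and value):
printf 'SITE_NAME=NorthPole\n' > /tmp/site-info.def.demo

# Pure syntax check: parses the file without executing anything.
bash -n /tmp/site-info.def.demo && echo "parses OK"

# Source it in a subshell so the current environment is not polluted;
# this also catches mistakes that bash -n misses, e.g. "SITE_NAME = NorthPole".
( . /tmp/site-info.def.demo ) && echo "sources OK"
```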
The configuration variables
The configuration is stored in a directory structure which will be extended in the near future. Currently the following files are used:
site-info.def
,
groups.conf
,
users.conf
, and the
vo.d
directory.
IMPORTANT: The configuration files which are coming with the YAIM rpm are just examples! Please review them and edit your own!
site-info.def
This is the main configuration file of YAIM. Several different configuration variables have to be defined here. Some of them are compulsory and have no default value assigned by the functions, while others can be left undefined, in which case the functions will use default values. It should be clear from the context which ones must be defined and which ones are optional. In the following table all the variables used by this version of YAIM are listed:
(
C
= compulsory (if you are going to configure that type of node),
O
= optional )
Variable name |
Type |
Description |
Used from |
APEL_DB_PASSWORD |
C |
Database password for APEL. |
(v >= 3.0.1-0) |
BATCH_BIN_DIR |
C |
The path of the lrms commands, e.g. /usr/pbs/bin . |
(v >= 3.0.1-0) |
BATCH_LOG_DIR |
C |
Your batch system's log directory. |
(v >= 3.0.1-0) |
BATCH_VERSION |
C |
The version of the Local Resource Management System, e.g. OpenPBS_2.3. |
(v >= 3.0.1-0) |
BDII_FCR |
O |
The URL of the Freedom of Choice for Resources (FCR) page. |
(v >= 3.0.1-0) |
BDII_HOST |
C |
BDII hostname. |
(v >= 3.0.1-0) |
BDII_HTTP_URL |
C |
URL pointing to the BDII configuration file (bdii-update.conf ). |
(v >= 3.0.1-0) |
BDII_REGIONS |
C |
List of node types publishing information to the BDII. For each item listed in the BDII_REGIONS variable you need to create a set of new variables as follows: |
(v >= 3.0.1-0) |
BDII_\<REGION\>_URL |
C |
URL of the information producer (e.g. BDII_CE_URL="URL of the CE information producer", BDII_SE_URL="URL of the SE information producer"). |
(v >= 3.0.1-0) |
CA_REPOSITORY |
C |
The repository with Certification Authorities (use the one in the example). |
(v >= 3.0.1-0) |
CE_BATCH_SYS |
C |
Implementation of site batch system. Available values are ``torque'', ``lsf'', ``pbs'', ``condor'' etc. |
(v >= 3.0.1-0) |
CE_CPU_MODEL |
C |
Model of the CPU used by the WN (WN specification). This parameter is a string whose domain is not defined yet in the GLUE Schema. The value used for Pentium III is "PIII". |
(v >= 3.0.1-0) |
CE_CPU_SPEED |
C |
Clock frequency in Mhz (WN specification). |
(v >= 3.0.1-0) |
CE_CPU_VENDOR |
C |
Vendor of the CPU used by the WN (WN specification). This parameter is a string whose domain is not defined yet in the GLUE Schema. The value used for Intel is "intel". |
(v >= 3.0.1-0) |
CE_HOST |
C |
Computing Element Hostname. |
(v >= 3.0.1-0) |
CE_INBOUNDIP |
C |
TRUE if inbound connectivity is enabled at your site, FALSE otherwise (WN specification). |
(v >= 3.0.1-0) |
CE_MINPHYSMEM |
C |
RAM size (Mbytes) (per WN and not per CPU) (WN specification). |
(v >= 3.0.1-0) |
CE_MINVIRTMEM |
C |
Virtual memory size (Mbytes) (per WN and not per CPU) (WN specification). |
(v >= 3.0.1-0) |
CE_OS |
C |
Operating System name (WN specification) - see https://wiki.egi.eu/wiki/Operations/HOWTO05. |
(v >= 3.0.1-0) |
CE_OS_RELEASE |
C |
Operating System release (WN specification) - see https://wiki.egi.eu/wiki/Operations/HOWTO05. |
(v >= 3.0.1-0) |
CE_OS_VERSION |
C |
Operating System Version (WN specification) - see https://wiki.egi.eu/wiki/Operations/HOWTO05. |
(v >= 3.0.1-0) |
CE_OUTBOUNDIP |
C |
TRUE if outbound connectivity is enabled at your site, FALSE otherwise (WN specification). |
(v >= 3.0.1-0) |
CE_RUNTIMEENV |
C |
List of software tags supported by the site. The list can include VO-specific software tags. In order to assure backward compatibility it should include the entry 'LCG-2', the current middleware version and the list of previous middleware tags. |
(v >= 3.0.1-0) |
CE_SF00 |
C |
Performance index of your fabric in SpecFloat 2000 (WN specification). For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html. |
(v >= 3.0.1-0) |
CE_SI00 |
C |
Performance index of your fabric in SpecInt 2000 (WN specification). For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html. |
(v >= 3.0.1-0) |
CE_SMPSIZE |
C |
Number of cpus in an SMP box (WN specification). |
(v >= 3.0.1-0) |
CLASSIC_HOST |
C |
The name of your SE_classic host. |
(v >= 3.0.1-0) |
CLASSIC_STORAGE_DIR |
C |
The root storage directory on CLASSIC_HOST. |
(v >= 3.0.1-0) |
CRON_DIR |
C |
Yaim writes all cron jobs to this directory. Change it if you want to turn off Yaim's management of cron. |
(v >= 3.0.1-0) |
DCACHE_ADMIN |
C |
Host name of the server node which manages the pool of nodes. |
(v >= 3.0.1-0) |
DCACHE_POOLS |
C |
List of pool nodes managed by the DCACHE_ADMIN server node. |
(v >= 3.0.1-0) |
DCACHE_PORT_RANGE |
C |
dCache port range. This variable is optional; the default value is "20000,25000". |
(v >= 3.0.1-0) |
DCACHE_DOOR_SRM |
O |
Set up srm server on dCache pool nodes (ex door_node1[:port] door_node2[:port] ) |
(v >= 3.0.1-0) |
DCACHE_DOOR_GSIFTP |
O |
Set up srm gsiftp on dCache pool nodes (ex door_node1[:port] door_node2[:port] ) |
(v >= 3.0.1-0) |
DCACHE_DOOR_GSIDCAP |
O |
Set up gsidcap server on dCache pool nodes (ex door_node1[:port] door_node2[:port] ) |
(v >= 3.0.1-0) |
DCACHE_DOOR_DCAP |
O |
Set up dcap server on dCache pool nodes (ex door_node1[:port] door_node2[:port] ) |
(v >= 3.0.1-0) |
DCACHE_DOOR_XROOTD |
O |
Set up xrootd server on dCache pool nodes (ex door_node1[:port] door_node2[:port] ) |
(v >= 3.0.1-0) |
DCACHE_DOOR_LDAP |
O |
Set up ldap server on dCache admin_node (ex door_node1[:port] door_node2[:port] ) |
(v >= 3.0.1-0) |
DPMDATA |
C |
Directory where the data is stored (absolute path, e.g. /storage) on a DPM node. |
(v >= 3.0.1-0) |
DPMFSIZE |
C |
The default disk space allocated per file (ex. 200 MB) on a DPM node |
(v >= 3.0.1-0) |
DPM_DB_USER |
C |
The db user account for the DPM. |
(v >= 3.0.1-0) |
DPMPOOL |
C |
Name of the Pool (per default Permanent). |
(v >= 3.0.1-0) |
DPM_FILESYSTEMS |
C |
Space separated list of DPM pool hostname:/path entries. |
(v >= 3.0.1-0) |
DPM_DB_PASSWORD |
C |
Password of the db user account. |
(v >= 3.0.1-0) |
DPM_DB_HOST |
C |
Set this if your DPM server uses a db on a separate machine. Defaults to localhost. |
(v >= 3.0.1-0) |
DPM_HOST |
C |
Host name of the DPM host, used also as a default DPM for the lcg-stdout-mon . |
(v >= 3.0.1-0) |
DPM_DB |
C |
The dpm database name (default is dpm_db) |
(v >= 3.0.1-15) |
DPNS_DB |
C |
The cns database name (default is cns_db) |
(v >= 3.0.1-15) |
DPNS_BASEDIR |
O |
The DPNS server base dir. The default value is home, i.e. the DPNS server serves the /dpm/domain/home name space. If you have multiple DPNS servers under the same domain, use for example home2 or data for this value. |
(v >= 3.0.1-17) |
DPM_INFO_USER |
C |
The DPM database info user. |
(v >= 3.0.1-16) |
DPM_INFO_PASS |
C |
The DPM database info user's password. |
(v >= 3.0.1-16) |
EDG_WL_SCRATCH |
O |
Optional scratch directory for jobs. |
(v >= 3.0.1-0) |
FTS_HOST |
C |
The hostname of your FTS server - use this only if installing an FTS server. |
(v >= 3.0.1-0) |
FTS_SERVER_URL |
C |
The URL of the File Transfer Service server. |
(v >= 3.0.1-0) |
FUNCTIONS_DIR |
C |
The directory where YAIM will find its functions. |
(v >= 3.0.1-0) |
GLOBUS_TCP_PORT_RANGE |
C |
Port range for Globus IO. |
(v >= 3.0.1-0) |
GRIDICE_SERVER_HOST |
O |
GridIce server host name (usually run on the MON node). |
(v >= 3.0.1-0) |
GRIDMAP_AUTH |
C |
List of ldap servers in edg-mkgridmap.conf which authenticate users. |
(v >= 3.0.1-0) |
GRID_TRUSTED_BROKERS |
C |
List of the DNs of the Resource Brokers host certificates which are trusted by the Proxy node. (ex: /O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch) |
(v >= 3.0.1-0) |
GROUPS_CONF |
C |
Path to the groups.conf file which contains information on mapping VOMS groups and roles to local groups. An example of this configuration file is given in /opt/lcg/yaim/examples/groups.conf . |
(v >= 3.0.1-0) |
GSSKLOG |
C |
yes or no, indicating whether the site provides an AFS authentication server which maps GSI credentials into Kerberos tokens. |
(v >= 3.0.1-0) |
GSSKLOG_SERVER |
C |
If GSSKLOG is yes, the name of the AFS authentication server host. |
(v >= 3.0.1-0) |
INSTALL_ROOT |
C |
Installation root - change if using the re-locatable distribution. |
(v >= 3.0.1-0) |
JAVA_LOCATION |
C |
Path to Java VM installation. It can be used in order to run a different version of java installed locally. |
(v >= 3.0.1-0) |
JOB_MANAGER |
C |
The name of the job manager used by the gatekeeper. |
(v >= 3.0.1-0) |
LCG_REPOSITORY |
C |
APT repository with LCG middleware (use the one in the example). |
(v >= 3.0.1-0) |
LFC_CENTRAL |
C |
A list of VOs for which the LFC should be configured as a central catalogue. |
(v >= 3.0.1-0) |
LFC_DB_PASSWORD |
C |
db password for LFC user. |
(v >= 3.0.1-0) |
LFC_HOST |
C |
Set this if you are building an LFC_HOST, not if you're just using clients. |
(v >= 3.0.1-0) |
LFC_LOCAL |
C |
Normally the LFC will support all VOs in the VOS variable. If you want to limit this list, add the ones you need to LFC_LOCAL. |
(v >= 3.0.1-0) |
|
|
--- For each item listed in the VOS variable you need to create a set of new variables as follows: |
(v >= 3.0.1-0) |
VO_<VO-NAME>_SE |
C |
Default SE used by the VO. WARNING: VO-NAME must be in upper case. |
(v >= 3.0.1-0) |
VO_<VO-NAME>_SGM |
C |
ldap directory with the VO software managers list. WARNING: VO-NAME must be in upper case. |
(v >= 3.0.1-0) |
VO_<VO-NAME>_STORAGE_DIR |
C |
Path to the storage area for the VO on an SE_classic. WARNING: VO-NAME must be in upper case. |
(v >= 3.0.1-0) |
VO_<VO-NAME>_SW_DIR |
C |
Area on the WN for the installation of the experiment software. If a predefined shared area has been mounted on the WNs where VO managers can pre-install software, then this variable should point to that area. If instead there is no shared area and each job must install the software, then this variable should contain a dot ( . ). In any case, the mounting of shared areas, as well as the local installation of VO software, is not managed by YAIM and should be handled locally by site administrators. WARNING: VO-NAME must be in upper case. |
(v >= 3.0.1-0) |
VO_<VO-NAME>_VOMSES |
C |
List of entries for the vomses files for this VO. Multiple values can be given if enclosed in single quotes. |
(v >= 3.0.1-0) |
VO_<VO-NAME>_VOMS_POOL_PATH |
C |
If necessary, append this to the VOMS server URL for the pool account list. |
(v >= 3.0.1-0) |
VO_<VO-NAME>_VOMS_SERVERS |
C |
A list of VOMS servers for the VO. |
(v >= 3.0.1-0) |
VO_<VO-NAME>_RBS |
C |
A list of RBs which support this VO. |
(v >= 3.0.1-0) |
|
|
--- End of VOs variable listing. |
|
MON_HOST |
C |
MON Box hostname. |
(v >= 3.0.1-0) |
MYSQL_PASSWORD |
C |
The mysql root password. |
(v >= 3.0.1-0) |
MY_DOMAIN |
C |
The site's domain name. |
(v >= 3.0.1-0) |
OUTPUT_STORAGE |
C |
Default Output directory for the jobs. |
(v >= 3.0.1-0) |
PX_HOST |
C |
PX hostname. |
(v >= 3.0.1-0) |
QUEUES |
C |
The name of the queues for the CE. These are by default set as the VO names. |
(v >= 3.0.1-0) |
<QUEUE NAME>_GROUP_ENABLE |
C |
Space separated list of VO names and VOMS FQANs which are allowed to access the queue. It will be translated to pbs's group_enable parameter. |
(v >= 3.0.1-0) |
RB_HOST |
C |
Resource Broker Hostname. |
(v >= 3.0.1-0) |
REG_HOST |
C |
RGMA Registry hostname. |
(v >= 3.0.1-0) |
REPOSITORY_TYPE |
C |
apt or yum . |
(v >= 3.0.1-0) |
RESET_DCACHE_CONFIGURATION |
O |
Set this to yes if you want YAIM to configure dCache for you - if unset (or 'no') yaim will only configure the grid front-end to dCache. |
(v >= 3.0.1-0) |
RESET_DCACHE_PNFS |
O |
yes or no. DO NOT set this to yes on existing production services: dCache's internal databases will be deleted. |
(v >= 3.0.1-0) |
RESET_DCACHE_RDBMS |
O |
yes or no. DO NOT set this to yes on existing production services: dCache's internal databases will be deleted. |
(v >= 3.0.1-0) |
RFIO_PORT_RANGE |
O |
Optional variable for the port range with default value "20000,25000". |
(v >= 3.0.1-0) |
ROOT_EMAIL_FORWARD |
O |
A space separated list of email addresses to be written into /root/.forward |
(v >= 3.0.1-17) |
SE_ARCH |
C |
Defaults to multidisk. One of "disk", "tape", "multidisk", "other"; populates GlueSEArchitecture. |
(v >= 3.0.1-0) |
SE_LIST |
C |
A list of hostnames of the SEs available at your site. |
(v >= 3.0.1-0) |
SITE_EMAIL |
C |
The site contact e-mail address as published by the information system. |
(v >= 3.0.1-0) |
SITE_SUPPORT_EMAIL |
C |
The site's user support e-mail address as published by the information system. |
(v >= 3.0.1-0) |
SITE_HTTP_PROXY |
O |
If you have an http proxy, set this variable (the syntax is that of the http_proxy environment variable); it will be used in config_crl and by the cron jobs (http_proxy) in order to reduce the load on the CA host. |
(v >= 3.0.1-0) |
SITE_LAT |
C |
Site latitude. |
(v >= 3.0.1-0) |
SITE_LOC |
C |
"City, Country". |
(v >= 3.0.1-0) |
SITE_LONG |
C |
Site longitude. |
(v >= 3.0.1-0) |
SITE_NAME |
C |
Your site name (GIIS). |
(v >= 3.0.1-0) |
SITE_SUPPORT_SITE |
C |
Support entry point; unique ID for the site in the GOC DB and information system. |
(v >= 3.0.1-0) |
SITE_TIER |
C |
Site tier. |
(v >= 3.0.1-0) |
SITE_WEB |
C |
Site website. |
(v >= 3.0.1-0) |
TORQUE_SERVER |
C |
Set this if your torque server is on a different host from the CE. It is ignored for other batch systems. |
(v >= 3.0.1-0) |
USERS_CONF |
C |
Path to the file containing the list of Linux users (pool accounts) to be created. This file should be created by the site administrator and contains a plain list of users and IDs. An example of this configuration file is given in /opt/lcg/yaim/examples/users.conf. |
(v >= 3.0.1-0) |
VOBOX_HOST |
C |
VOBOX hostname. |
(v >= 3.0.1-0) |
VOBOX_PORT |
C |
The port the VOBOX gsisshd listens on. |
(v >= 3.0.1-0) |
|
VOS |
C |
List of supported VOs. |
(v >= 3.0.1-0) |
VO_SW_DIR |
C |
Directory for installation of experiment software. |
(v >= 3.0.1-0) |
WMS_HOST |
C |
Hostname of the gLite WMS/LB server. |
(v >= 3.0.1-0) |
WN_LIST |
C |
Path to the list of Worker Nodes. The list of Worker Nodes is a file to be created by the site administrator, containing a plain list of the batch nodes. An example of this configuration file is given in /opt/lcg/yaim/examples/wn-list.conf. |
(v >= 3.0.1-0) |
YAIM_VERSION |
C |
The version of yaim for which this config file is valid. |
(v >= 3.0.1-0) |
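To make the table concrete, here is a minimal, purely illustrative site-info.def fragment; every host name and value below is hypothetical and must be replaced with your site's own settings:

```shell
# Hypothetical example values -- adapt everything to your own site.
MY_DOMAIN=example.org
SITE_NAME=EXAMPLE-SITE
SITE_EMAIL=grid-admin@example.org
CE_HOST=ce01.$MY_DOMAIN
SE_LIST="se01.$MY_DOMAIN"
BDII_HOST=bdii.$MY_DOMAIN
VOS="dteam ops"
QUEUES="dteam ops"
WN_LIST=/opt/glite/yaim/etc/wn-list.conf
USERS_CONF=/opt/glite/yaim/etc/users.conf
GROUPS_CONF=/opt/glite/yaim/etc/groups.conf
```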
users.conf
This is the file where one can define the unix users to be created on different service nodes (mainly on CE and WNs). The format of each line of this file is the following:
UID:LOGIN:GID1,GID2,...:GROUP1,GROUP2,...:VO:FLAG:
Thus, one line defines one user with the given uid and login; the user has gid1 as its primary group and gid2, gid3, ... as secondary, tertiary, ... groups, which correspond to the group names group1, group2, group3, ...; the user is a member of the VO vo and has the special role flag, which will be associated with the corresponding line of groups.conf.
Whitespace and blank lines are not allowed.
This file will be read by the appropriate function and users defined here will be created if they do not already exist.
A short extract as an example:
40197:alice197:1395:alice:alice::
40198:alice198:1395:alice:alice::
40199:alice199:1395:alice:alice::
40001:aliceprd001:1396,1395:aliceprd,alice:alice:prd:
18952:alicesgm001:1397,1395:alicesgm,alice:alice:sgm:
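Because the format is strict, a small consistency check can save a configuration run. The following sketch (not part of YAIM, just an illustration; the demo file name is invented) verifies that each non-blank line has enough colon-separated fields and that the GID list and the group-name list have the same length:

```shell
# Write a demo users.conf with two well-formed lines (illustrative values):
cat > users.conf.demo <<'EOF'
40197:alice197:1395:alice:alice::
40001:aliceprd001:1396,1395:aliceprd,alice:alice:prd:
EOF

# Check field count and GID/group-name list lengths on every non-blank line.
awk -F: '
NF {
    if (NF < 6) { print "line " NR ": too few fields"; bad = 1 }
    else {
        ngid = split($3, g, ","); ngrp = split($4, n, ",")
        if (ngid != ngrp) { print "line " NR ": GID/group count mismatch"; bad = 1 }
    }
}
END { exit bad }' users.conf.demo && echo "users.conf.demo looks OK"
```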
groups.conf
The groups.conf file has the following format
FQAN:group name:gid:users.conf flag:vo
Thus users with the VOMS credential fqan will be mapped to the given group name and gid, and associated with the users having the same flag in users.conf. In other words, if flag is given, the group name and gid are taken from users.conf and do not need to be specified here. If the last - optional - field is defined, then the group will be treated as a member of the VO vo instead of the one determined from the fqan.
A short extract as an example:
"/VO=alice/GROUP=/alice/ROLE=lcgadmin":::sgm:
"/VO=alice/GROUP=/alice/ROLE=production":::prd:
"/VO=alice/GROUP=/alice"::::
"/VO=atlas/GROUP=/atlas/ROLE=lcgadmin":::sgm:
"/VO=atlas/GROUP=/atlas/ROLE=production":::prd:
"/VO=atlas/GROUP=/atlas"::::
"/VO=cms/GROUP=/cms/ROLE=lcgadmin":::sgm:
"/VO=cms/GROUP=/cms/ROLE=production":::prd:
the vo.d directory
The
vo.d
directory makes the configuration of the DNS-like VOs easier. Each file name in this directory has to be the lower-cased version of a VO name defined in
site-info.def
.
The matching file should contain the definitions for that VO and will override the ones defined in
site-info.def
. Again, bash syntax should be followed.
The minor difference in the syntax compared to that of
site-info.def
is that one can omit the
VO_<VO-NAME>_
prefix from the beginning of the variables.
So, for example while in
site-info.def
:
VO_BIOMED_SW_DIR=$VO_SW_DIR/biomed
VO_BIOMED_DEFAULT_SE=$CLASSIC_HOST
VO_BIOMED_STORAGE_DIR=$CLASSIC_STORAGE_DIR/biomed
in
vo.d/biomed
file it is enough to write:
SW_DIR=$VO_SW_DIR/biomed
DEFAULT_SE=$CLASSIC_HOST
STORAGE_DIR=$CLASSIC_STORAGE_DIR/biomed
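The following simplified sketch (this is not YAIM's actual implementation, just an illustration of the mechanism) shows how each KEY=VALUE line of a vo.d file corresponds to a VO_<VO-NAME>_KEY variable; the values assumed for VO_SW_DIR and CLASSIC_HOST are hypothetical:

```shell
VO_SW_DIR=/opt/exp_soft          # would come from site-info.def
CLASSIC_HOST=se01.example.org    # hypothetical value

# Create a demo vo.d file like the biomed example above.
mkdir -p vo.d
cat > vo.d/biomed <<'EOF'
SW_DIR=$VO_SW_DIR/biomed
DEFAULT_SE=$CLASSIC_HOST
EOF

# Turn each KEY=VALUE line into VO_<VO>_KEY, expanding references like
# $VO_SW_DIR at assignment time.
vo=biomed
prefix="VO_$(echo "$vo" | tr '[:lower:]' '[:upper:]')_"
while IFS='=' read -r key value; do
    case "$key" in ''|'#'*) continue ;; esac    # skip blanks and comments
    eval "${prefix}${key}=\"${value}\""
done < "vo.d/$vo"

echo "$VO_BIOMED_SW_DIR"    # /opt/exp_soft/biomed
```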
Running the configuration
The interface
YAIM comes with a script in
/opt/glite/yaim/bin/yaim
. This script should be used to perform the different configuration steps. Its usage is straightforward; for help just run
[root@lxn1176 bin]# ./yaim --help
Usage: ./yaim <action> <parameters>
Actions:
-i | --install : Install one or several meta package.
Compulsory parameters: -s, -m
-c | --configure : Configure already installed services.
Compulsory parameters: -s, -t
-r | --runfunction : Execute a configuration function.
Compulsory parameters: -s, -f
Optional parameters : -n
-h | --help : This help
Specify only one action at a time !
Parameters:
-s | --siteinfo: : Location of the site-info.def file
-m | --metapackage : Name of the metapackage(s) to install
-n | --nodetype : Name of the node type(s) to configure
-f | --function : Name of the functions(s) to execute
Examples:
Installation:
./yaim -i -s /root/siteinfo/site-info.def -m glite-SE_dpm_mysql
Configuration:
./yaim -c -s /root/siteinfo/site-info.def -t SE_dpm_mysql
Running a function:
./yaim -r -s /root/siteinfo/site-info.def -n SE_dpm_mysql -f config_mkgridmap
To configure multiple node types or install multiple meta-packages, you have to specify them repeatedly on the command line, for example:
./yaim -i -s /root/siteinfo/site-info.def -m glite-SE_dpm_mysql -m glite-BDII
Installing a node
For the OS installation consult the
Generic Installation and Configuration Guide. After setting the
appropriate variables in your
site-info.def
, run the following command:
./yaim -i -s <location of site-info.def> -m <meta-package name>
The table below lists the available meta-packages for the SL3 operating system:
Node Type |
meta-package name |
meta-package description |
gLite WMS and LB |
glite-WMSLB |
Combined WMS LB node |
glite CE |
glite-CE |
The gLite Computing Element |
FTS |
glite-FTS |
gLite File Transfer Server |
FTA |
glite-FTA |
gLite File Transfer Agent |
BDII |
glite-BDII |
BDII |
LCG Computing Element (middleware only) |
lcg-CE |
It does not include any LRMS |
LCG Computing Element (with Torque) |
lcg-CE_torque |
It includes the 'Torque' LRMS |
LCG File Catalog (mysql) |
glite-LFC_mysql |
LCG File Catalog |
LCG File Catalog (oracle) |
glite-LFC_oracle |
LCG File Catalog |
MON-Box |
glite-MON |
RGMA-based monitoring system collector server |
MON-Box |
glite-MON_e2emonit |
MON plus e2emonit |
Proxy |
glite-PX |
Proxy Server |
Resource Broker |
lcg-RB |
Resource Broker |
Classic Storage Element |
glite-SE_classic |
Storage Element on local disk |
dCache Storage Element |
glite-SE_dcache |
Storage Element interfaced to dCache without pnfs dependency |
dCache Storage Element |
glite-SE_dcache_gdbm |
Storage Element interfaced to dCache with dependency on pnfs (gdbm) |
DPM Storage Element (mysql) |
glite-SE_dpm_mysql |
Storage Element with SRM interface |
DPM Storage Element (Oracle) |
glite-SE_dpm_oracle |
Storage Element with SRM interface |
DPM disk |
glite-SE_dpm_disk |
Disk server for a DPM SE |
Dependencies for the re-locatable distribution |
glite-TAR |
This package can be used to satisfy the dependencies of the relocatable distro |
User Interface |
glite-UI |
User Interface |
VO agent box |
glite-VOBOX |
Agents and Daemons |
Worker Node (middleware only) |
glite-WN |
It does not include any LRMS |
glite WN (with Torque) |
glite-WN_torque |
The gLite Worker Node with the Torque client |
And the table below lists the available meta-packages for the SL4 operating system:
glite WN |
glite-WN_compat |
The gLite Worker Node (SL4 compatibility build) |
Configuring a node
If the installation was successful, one should run the configuration:
./yaim -c -s <location of site-info.def> -n <node type1> -n <node type2>
Each node type is a configuration target. If more than one node type is installed and has to be configured on a physical node, then their configuration should be run together and
not separately. The available configuration targets are listed below:
Node Type |
Configuration target (node type) |
Description |
gLite WMS and LB |
WMSLB |
Combined WMS LB node |
glite CE |
gliteCE |
The gLite Computing Element |
FTS |
FTS |
gLite File Transfer Server |
FTA |
FTA |
gLite File Transfer Agent |
BDII |
BDII |
A top level BDII |
A site BDII |
BDII_site |
A site level BDII |
Computing Element (middleware only) |
CE |
It does not configure any LRMS |
Computing Element (with Torque) * |
CE_torque |
It also configures the 'Torque' LRMS client and server (see 12.6. for details) |
LCG File Catalog server * |
LFC_mysql |
Set up a mysql based LFC server |
MON-Box |
MON |
RGMA-based monitoring system collector server |
e2emonit |
E2EMONIT |
RGMA-based monitoring system collector server |
Proxy |
PX |
Proxy Server |
Resource Broker |
RB |
Resource Broker |
Classic Storage Element |
SE_classic |
Storage Element on local disk |
Disk Pool Manager (mysql) * |
SE_dpm_mysql |
Storage Element with SRM interface and mysql backend |
Disk Pool Manager disk * |
SE_dpm_disk |
Disk server for SE_dpm |
dCache Storage Element |
SE_dcache |
Storage Element interfaced with dCache |
Re-locatable distribution * |
TAR_UI or TAR_WN |
It can be used to set up a Worker Node or a UI (see 12.9. for details) |
User Interface |
UI |
User Interface |
VO agent box |
VOBOX |
Machine to run VO agents |
Worker Node (middleware only) |
WN |
It does not configure any LRMS |
Worker Node (with Torque client) |
WN_torque |
It also configures the 'Torque' LRMS client |
Partial configuration
If there is no need to reconfigure the whole node because of a small configuration change, one can rerun only one configuration function. See the following example:
./yaim -r -s /root/siteinfo/site-info.def -f config_mkgridmap [ -n SE_dpm_mysql ]
Divers
- There is a web page intended to help sysadmins figure out the YAIM settings for various VOs: the YAIM tool. Site administrators can use this utility to maintain a list of the VOs their site supports and to automatically generate the appropriate YAIM fragment to be included in their site configuration files.
On gLite middleware release specific issues
This is the YAIM guide only; gLite middleware release specific issues will always be advertised in the
General Installation and Configuration Guide, with references back to the correct version of YAIM to be used.
Changes with respect to the 3.0.0-x series: what's new?
YAIM's hierarchical configuration storage
The current
site-info.def
file is replaced by a directory structure that allows configuration parameters to be organised, and will make it possible (in future releases) to create a configuration for the whole site, including multiple CEs, RBs, etc. The current structure of the hierarchical configuration storage is the following:
../siteinfo/
|- site-info.def
|- vo.d/
The
site-info.def
file contains all globally defined parameters. The entries in the
vo.d/
directory are optional and the configuration will (should) work correctly with just a standard site-info.def file.
DNS-like VO names
From YAIM 3.0.1 on, DNS-like VO names are supported. This uses the new hierarchical configuration storage: VO parameters are defined in files located in the vo.d directory, one file per VO. The name of the file must be exactly the lower-case variant of the VO name. The files contain key-value pairs, where the key is the same as the corresponding
site-info.def
parameter name, without the 'VO_<VO_NAME>_' prefix. The following list shows the definition of the dteam VO, as an example:
# cat vo.d/dteam
SW_DIR=$VO_SW_DIR/dteam
DEFAULT_SE=$CLASSIC_HOST
STORAGE_DIR=$CLASSIC_STORAGE_DIR/dteam
QUEUES="dteam"
SGM=ldap://lcg-vo.cern.ch/ou=lcgadmin,o=dteam,dc=lcg,dc=org
USERS=ldap://lcg-vo.cern.ch/ou=lcg1,o=dteam,dc=lcg,dc=org
VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/dteam?/dteam/' 'vomss://voms.cern.ch:8443/voms/dteam?/dteam/'"
VOMSES="'dteam lcg-voms.cern.ch 15004 /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch dteam' 'dteam voms.cern.ch\
15004 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch dteam'"
Adding a new VO
Do the following steps to add a new VO to your configuration:
- create a new VO definition file in the vo.d directory
- add the new VO's users to users.conf
- add a new VO entry to groups.conf
- add the new VO to the VOS parameter in the site-info.def file
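These four steps can be sketched in shell. Everything below (VO name, UID, GID, paths) is a made-up example to show the shape of each step; for illustration the script works in a scratch directory rather than your real siteinfo directory (e.g. /root/siteinfo):

```shell
# Scratch siteinfo directory seeded with a minimal site-info.def.
SITEINFO=$(mktemp -d)
printf 'VOS="dteam"\n' > "$SITEINFO/site-info.def"
vo=myvo

# 1. create the VO definition file in vo.d
mkdir -p "$SITEINFO/vo.d"
printf 'SW_DIR=$VO_SW_DIR/%s\nDEFAULT_SE=$CLASSIC_HOST\n' "$vo" > "$SITEINFO/vo.d/$vo"

# 2. add pool accounts to users.conf (UID:LOGIN:GID:GROUP:VO:FLAG:)
echo "50001:${vo}001:2000:${vo}:${vo}::" >> "$SITEINFO/users.conf"

# 3. add the VO entry to groups.conf
echo "\"/VO=${vo}/GROUP=/${vo}\"::::" >> "$SITEINFO/groups.conf"

# 4. append the VO to the VOS parameter in site-info.def
sed -i "s/^VOS=\"\(.*\)\"/VOS=\"\1 ${vo}\"/" "$SITEINFO/site-info.def"
grep '^VOS=' "$SITEINFO/site-info.def"    # VOS="dteam myvo"
```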
Changes in the groups.conf file
There is an additional 5th field in the
groups.conf
file. If defined, it is the VO name. If this field is present, then the users determined by the
VOMS FQANs will be treated as members of that VO and not of the one defined by the FQAN string. (This is because there is a VO which is technically a subgroup of another VO.)
Special accounts management
No more single sgm user. The sgm and prd tags (and all tags in general) are mapped to pool users which have, say, dteamsgm as primary group while they have dteam as secondary group. See the new format of users.conf documented in users.conf.README. It is only on the VOBOX and SE_castor that the single sgm account has been preserved temporarily.
Queue management
Instead of creating queues having the same names as the VOs, it is now possible to define the list of queues and their access control lists (for pbs/torque).
The (QUEUE NAME)_GROUP_ENABLE variables are space separated lists of VO names and
VOMS FQANs. They define which groups are allowed to access the queue. Example:
OPS_GROUP_ENABLE="ops /VO=atlas/GROUP=/atlas/ROLE=lcgadmin"
When defining the (QUEUE NAME)_GROUP_ENABLE variable, the subgroups of a VO should be listed explicitly, since torque takes into account only the primary group of a user.
So, for example, if only dteam is defined for a queue, then dteamsgm users won't be able to submit to it. One has to define:
DTEAM_GROUP_ENABLE="dteam /VO=dteam/GROUP=/dteam/ROLE=lcgadmin"
As a consequence, the VO_${VO}_QUEUES variables are now deprecated.
This information will be reflected in the information system via the
GlueVOViewLocalID: /VO=cms/GROUP=/cms/StandardModel
GlueCEAccessControlBaseRule: VOMS:/VO=cms/GROUP=/cms/StandardModel
Glue attributes.
Every FQAN which appears in these GROUP_ENABLE variables should be defined in
groups.conf
!
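A simple consistency check for this rule can be scripted. The sketch below (not part of YAIM; the demo file and FQANs are invented) warns about any FQAN in a *_GROUP_ENABLE variable that is missing from groups.conf:

```shell
# Demo groups.conf with two entries (illustrative):
cat > groups.conf.demo <<'EOF'
"/VO=dteam/GROUP=/dteam/ROLE=lcgadmin":::sgm:
"/VO=dteam/GROUP=/dteam"::::
EOF

# A GROUP_ENABLE list mixing a plain VO name and two FQANs, one of which
# is deliberately missing from groups.conf.demo:
DTEAM_GROUP_ENABLE="dteam /VO=dteam/GROUP=/dteam/ROLE=lcgadmin /VO=dteam/GROUP=/dteam/ROLE=production"

for entry in $DTEAM_GROUP_ENABLE; do
    case "$entry" in
        /*) grep -qF "\"$entry\"" groups.conf.demo ||
                echo "WARNING: $entry not defined in groups.conf" ;;
    esac
done
```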
Yaim logging
The
YAIM_LOGGING_LEVEL
variable has been added to
site-info.def
file. Possible values:
NONE
,
ABORT
,
ERROR
,
WARNING
,
INFO
,
DEBUG
. If not defined, the default value is INFO. The implementation of logging in YAIM is still in a very preliminary phase, provided for testing and for your feedback. The logfile contains the timestamp, the command and the output of the commands; its location is /opt/glite/yaim/log/yaimlog, i.e. ../log relative to the bin/yaim script. Logging is enabled only if you use the new interface (see next section).
New interface
In order to ensure that the definitions and the environment are the same in
configure_node
,
install_node
,
run_function
, we implemented a new interface. From now on one should use
this new script for performing yaim operations. It enables logging, and it uses getopt and switches instead of a fixed order of parameters. See
/opt/glite/yaim/bin/yaim --help
for further details.
"Backward compatibility"
It is required to modify your 3.0.0-x
site-info.def
file by adding the
${QUEUE}_GROUP_ENABLE
parameters to the
site-info.def
file. As soon as this is done, yaim-3.0.1 will correctly configure your node. No modification of VO related parameters is needed unless a new DNS-like VO has to be configured. For non-DNS-like VOs, YAIM 3.0.1-x accepts both the old and the new VO configuration.
Known issues
For further reading
Other useful documentation: