Installation
To use the mid-range servers, which do not have AFS access, create a standalone COOL installation:
http://cool.cvs.cern.ch/cgi-bin/cool.cgi/cool/config/doc/README.laptopLinux3?rev=1.1&content-type=text/vnd.viewcvs-markup
On your machine
Check out COOL in ~/myLCG/COOL_2_1_1 (for example)
Go to ~/myLCG/COOL_2_1_1/src/config/scram/
Execute ./coolMirrorExternals.csh with these modifications:
- add
setenv PATH /afs/cern.ch/sw/lcg/app/spi/scram:${PATH}
unsetenv SCRAM_HOME
unsetenv ORACLE_HOME
unsetenv PYTHONPATH
unsetenv ROOTSYS
- change \rm to rm (twice)
- remove > /dev/null
- change /afs/cern.ch/user/a/avalassi/myLCG/mySCRAM to /afs/cern.ch/sw/lcg/app/spi/scram
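Taken together, the header modifications above would make the top of coolMirrorExternals.csh look roughly like this (a sketch in csh syntax; the surrounding script content varies with the COOL version):

```shell
#!/bin/csh -f
# Sketch of the modified coolMirrorExternals.csh header (csh syntax)
# Put the shared AFS SCRAM installation first on the PATH
setenv PATH /afs/cern.ch/sw/lcg/app/spi/scram:${PATH}
# Unset variables that would interfere with the mirroring
unsetenv SCRAM_HOME
unsetenv ORACLE_HOME
unsetenv PYTHONPATH
unsetenv ROOTSYS
# Further down in the script, also:
#  - replace '\rm' with 'rm' (twice)
#  - remove the '> /dev/null' redirection so errors stay visible
#  - replace /afs/cern.ch/user/a/avalassi/myLCG/mySCRAM
#    with    /afs/cern.ch/sw/lcg/app/spi/scram
```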
Execute ./coolMirrorExternals-slc4_ia32_gcc34.csh
Now there should be an archive (lcg-slc4_ia32_gcc34.tar) in /opt/rbasset/coolKit-slc4_ia32_gcc34/sw.
Copy the externals tar file (this takes ~3 minutes):
scp -2 /opt/rbasset/coolKit-slc4_ia32_gcc34/sw/lcg-slc4_ia32_gcc34.tar oracle@midrangeserver:/data/Cool
Create the COOL tar files from the current version:
cd ~/myLCG/COOL_2_1_1/
tar -cvf src.tar src
tar -cvf SCRAM.tar .SCRAM
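As a self-contained sketch of the packaging step (a scratch directory stands in for the real checkout, so the file names inside it are placeholders):

```shell
#!/bin/sh
# Sketch: package a COOL checkout into the two tar files that are
# copied to the server. A scratch directory stands in for the real
# ~/myLCG/COOL_2_1_1 checkout so the sketch is self-contained.
set -e
work=$(mktemp -d)
mkdir -p "$work/COOL_2_1_1/src/config" "$work/COOL_2_1_1/.SCRAM"
touch "$work/COOL_2_1_1/src/config/requirements"
touch "$work/COOL_2_1_1/.SCRAM/Environment"
cd "$work/COOL_2_1_1"
tar -cf src.tar src        # the sources, including the scram config
tar -cf SCRAM.tar .SCRAM   # the hidden SCRAM bookkeeping directory
tar -tf src.tar            # list the archive contents to verify
```

On the real machine the same two tar commands are simply run from ~/myLCG/COOL_2_1_1, as shown above.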
Copy the COOL tar files (this takes ~2 minutes):
scp -2 ~/myLCG/COOL_2_1_1/src.tar oracle@midrangeserver:/data/Cool
scp -2 ~/myLCG/COOL_2_1_1/SCRAM.tar oracle@midrangeserver:/data/Cool
Untar the externals tar:
mkdir /data/Cool/sw
cd /data/Cool/sw
tar -xvf ../lcg-slc4_ia32_gcc34.tar > ../lcg-slc4_ia32_gcc34.txt
Add the following to .cshrc:
setenv PATH /data/Cool/sw/lcg/app/spi/scram:$PATH
setenv SCRAM_ARCH slc4_ia32_gcc34
setenv SITENAME RAC4
Edit /data/Cool/sw/lcg/app/spi/scram/scram and add the following line:
$ENV{SCRAM_HOME}="/data/Cool/sw/lcg/app/spi/scram/V0_20_0";
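If this wrapper edit needs to be repeated (e.g. after re-unpacking the externals), it can be scripted. A sketch using GNU sed on a scratch file; the real target is /data/Cool/sw/lcg/app/spi/scram/scram, assumed here to be a Perl script whose first line is the shebang:

```shell
#!/bin/sh
# Sketch: insert the SCRAM_HOME assignment right after the shebang of
# the scram wrapper. A scratch file stands in for the real wrapper.
set -e
wrapper=$(mktemp)
printf '#!/usr/bin/env perl\nuse strict;\n' > "$wrapper"
# GNU sed: append the line after line 1 (the shebang), editing in place
sed -i '1a $ENV{SCRAM_HOME}="/data/Cool/sw/lcg/app/spi/scram/V0_20_0";' "$wrapper"
grep SCRAM_HOME "$wrapper"   # show the inserted line
```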
Untar the COOL tars:
mkdir /data/Cool/COOL_2_1_1
cd /data/Cool/COOL_2_1_1
rm -rf /data/Cool/COOL_2_1_1/.SCRAM
rm -rf /data/Cool/COOL_2_1_1/src
tar -xvf ../SCRAM.tar
tar -xvf ../src.tar
Modify /data/Cool/COOL_2_1_1/src/config/scram/site/tools-RAC4.conf
- add +TNSADMIN:/data/Cool/Tests (put tnsnames.ora in this folder)
- update the tool versions that differ from tools-CERN.conf
Set up and build COOL:
cd /data/Cool/COOL_2_1_1/src
scram setup
scram b
In case there is a problem with NFS:
Check the NFS server:
On the NFS server, run:
/etc/init.d/portmap status
If it is not running:
/etc/init.d/portmap start
On the mid-range servers:
/etc/init.d/portmap status
If it is not running:
/etc/init.d/portmap start
Check the current mounts:
mount
If /data is already mounted, unmount it:
sudo umount /data
and finally remount it:
sudo mount nfsserver:/data /data
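The NFS recovery steps above can be collected into one helper script (a sketch: `nfsserver` and the `/data` mount point are the placeholder names used in the text, and the /etc/init.d/portmap commands assume the SLC4-era init system):

```shell
#!/bin/sh
# Sketch of the NFS recovery procedure described above. The service and
# mount commands only make sense on the actual machines, so they are
# wrapped in a function; mounted_on_data is a small, testable helper.

mounted_on_data() {
    # Reads `mount` output on stdin; succeeds if /data is mounted
    grep -q ' on /data '
}

recover_nfs() {
    # Make sure portmap is running (do this on both the NFS server
    # and the mid-range client)
    /etc/init.d/portmap status >/dev/null 2>&1 || /etc/init.d/portmap start
    # If /data is already mounted, unmount it first, then remount it
    # from the NFS server (placeholder host name: nfsserver)
    if mount | mounted_on_data; then
        sudo umount /data
    fi
    sudo mount nfsserver:/data /data
}
```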
Export the environment variables before executing the clients:
In .bashrc:
export PATH=/data/Cool/sw/lcg/app/spi/scram:$PATH
export SCRAM_ARCH=slc4_ia32_gcc34
export SITENAME=RAC4
eval `cd /data/Cool/COOL_2_1_1;scram runtime -sh`
export PYTHONPATH=/data/Cool/sw/lcg/app/releases/Persistency/CORAL_1_8_0/slc4_ia32_gcc34/lib:/data/Cool/sw/lcg/app/releases/Persistency/CORAL_1_8_0/slc4_ia32_gcc34/python:$PYTHONPATH
Performance test
The tests have several goals:
- stress-test the COOL software
- stress-test an Oracle database
- generate 1 year of data in order to test the effectiveness of the COOL queries on a filled database
- test different use cases that will appear during production
First results
While trying to insert half a year of data into the database, we observed a memory problem on the clients.
To insert data at a higher rate we used the bulk insertion method.
Unfortunately, there appears to be a memory leak when using this method from PyCool:
http://cool.cvs.cern.ch/cgi-bin/cool.cgi/cool/PyCool/examples/memoryLeak.py?rev=1.1&content-type=text/vnd.viewcvs-markup
https://savannah.cern.ch/task/?4912
Results of a test with a C++ client show no memory leak.
Use the C++ COOL bulk-storage client:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:../lib
export CORAL_AUTH_PATH=${HOME}/private
Data samples
ATLAS DCS data sets for performance tests on CMSONR under the ST_CLIENT schema:
* PDBST001 : 7 days of data
* PDBST002 : 25 days of data
* PDBST003 : 50 days of data
* PDBST004 : 100 days of data
* PDBST005 : 150 days of data
* PDBST006 : 200 days of data
The data is inserted with validity keys starting from 0 (1 January 1970); consequently, we can run the insertion tests with IOVs starting from the date of the test, and the retrieval tests with IOVs starting from 0.
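For reference, the validity keys count from the Unix epoch; a sketch of converting a UTC date to a key, assuming nanosecond resolution for the key (an assumption of this sketch; GNU date required):

```shell
#!/bin/sh
# Sketch: convert a UTC date to a COOL-style validity key, assuming the
# key counts nanoseconds since 1 January 1970. Requires GNU date (-d).
date_to_key() {
    secs=$(date -u -d "$1" +%s)
    echo $((secs * 1000000000))
}
date_to_key '1970-01-01 00:00:00'   # start of the retrieval-test IOVs
```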
With the results from OLDCoolDataOverhead, we can estimate fairly accurately that one year of data will require around 1 terabyte of space.
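The estimate is a simple linear extrapolation; a sketch with an illustrative per-day figure (the value below is a placeholder, not the measured number, which comes from OLDCoolDataOverhead):

```shell
#!/bin/sh
# Sketch: linear extrapolation of the yearly storage need. The per-day
# size is an illustrative placeholder, not the measured value.
per_day_mb=2800                            # hypothetical MB per day
days=365
total_gb=$(( per_day_mb * days / 1024 ))   # integer GB
echo "estimated: ${total_gb} GB/year"      # on the order of 1 TB
```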
Creation of the Oracle account
SQL> CREATE BIGFILE TABLESPACE "ST_TABLESPACE" DATAFILE '+TEST2_DATADG1' SIZE 1G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
Tablespace created.
SQL> CREATE USER "ST_CLIENT" PROFILE "CERN_DEV_PROFILE" IDENTIFIED BY "*********" DEFAULT TABLESPACE "ST_TABLESPACE" ACCOUNT UNLOCK ;
User created.
SQL> GRANT "CONNECT" TO "ST_CLIENT" ;
Grant succeeded.
SQL> GRANT FLASHBACK ANY TABLE TO "ST_CLIENT";
Grant succeeded.
--
RomainBasset - 31-Jul-2007