For internal discussion

  1. Development installation of FTS

The main idea is to improve on the current turnaround time of 6 months from development to first usage by the experiment (allowing us to find bugs, improve code quality on both sides, and improve/understand usage ...).

Considered useful by everyone. There might be issues with sharing the network bandwidth, but as we do not plan to run the real Tier-0 exercise on this infrastructure, it should not really be a concern.

  2. What are the interesting new features of the next version of FTS?

Evidently stability of operation is the main feature. But there is a list of new features that might be important for ATLAS, and it would be important to study them in particular:

  • Delegation (more technical, but important in the future)
  • VOMS roles - to what extent is FTS VOMS-aware? Does VOMS cause problems, etc.? (see the sketch after this list)
  • New state(s) in the state machine (useful for monitoring?)
  • Separate queues for storage interaction and transfers
  • Prestaging
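
A minimal sketch of how one could probe the VOMS question against the development FTS instance: make a proxy with an explicit role, submit one transfer, and poll it. The endpoint and SURLs below are placeholders (assumptions); voms-proxy-init, glite-transfer-submit and glite-transfer-status are the standard gLite clients.

import subprocess

# Placeholders (assumptions) -- point these at the development FTS instance.
FTS = "https://fts-dev.example.cern.ch:8443/glite-data-transfer-fts/services/FileTransfer"
SRC = "srm://source-se.example.org/dpm/example.org/home/atlas/testfile"
DST = "srm://dest-se.example.org/dpm/example.org/home/atlas/testfile"

def run(cmd):
    # Run one client command and return its stdout; raise on failure.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    if p.returncode != 0:
        raise RuntimeError(err)
    return out

# 1. A proxy with an explicit VOMS role: does FTS honour/require it?
run(["voms-proxy-init", "-voms", "atlas:/atlas/Role=production"])

# 2. Submit one transfer and remember the job id.
job = run(["glite-transfer-submit", "-s", FTS, SRC, DST]).strip()

# 3. Poll the job: any new states of the next FTS version show up here.
print(run(["glite-transfer-status", "-s", FTS, job]))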

  3. Questions to be answered:

What happens in the time between a subscription to DQ2 and the time when the data arrives at a site? Is the subscription stuck in DQ2 or in FTS? We have to be able to monitor these times.

It would also be interesting to query these queues.
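
As a strawman for the monitoring we are missing, here is a sketch that measures the end-to-end latency by polling the catalog until the replica shows up at the destination SE. It cannot yet tell "stuck in DQ2" apart from "stuck in FTS" - which is exactly why we also want to query the queues. The LFN and SE host are placeholders.

import subprocess, time

LFN = "lfn:/grid/atlas/dq2/somedataset/somefile"   # placeholder
DEST_SE = "dest-se.example.org"                    # placeholder
t_subscribed = time.time()  # in reality: the DQ2 subscription timestamp

def replicas(lfn):
    # All SURLs the LFC currently knows for this file.
    p = subprocess.Popen(["lcg-lr", "--vo", "atlas", lfn],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    return out.split()

# Poll once a minute until a replica appears on the destination SE.
while not [s for s in replicas(LFN) if DEST_SE in s]:
    time.sleep(60)
print("data arrived %.0f s after subscription" % (time.time() - t_subscribed))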

  4. Testing/pinging or similar of services.

Today it can happen that someone makes a subscription and it is not executed because some service, cron job or similar is not running on some VOBOX. Do we have user tools to assess whether my subscription would work in principle, if I am only patient enough?
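
We do not have such a tool today; at best one can check the client-visible prerequisites. A sketch of that follows (the LFC host, SE and path are placeholders). Note that it still cannot see whether the DQ2 agents on the VOBOX are alive, which is the real gap.

import os, subprocess

def check(name, cmd, env=None):
    # Run one probe and report OK/FAIL by exit code.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
    p.communicate()
    ok = (p.returncode == 0)
    print("%-25s %s" % (name, ok and "OK" or "FAIL"))

lfc_env = dict(os.environ)
lfc_env["LFC_HOST"] = "lfc-atlas.example.org"   # placeholder

check("VOMS proxy exists", ["voms-proxy-info", "-exists"])
check("LFC answers", ["lfc-ls", "/grid/atlas"], lfc_env)
check("SE answers (gridftp)",
      ["edg-gridftp-exists", "gsiftp://se.example.org/dpm/example.org/home/atlas"])
# Missing: a probe for the DQ2 site services on the VOBOX itself.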

  5. SRM 2.2

Again, what are the interesting features for ATLAS:

  • Space reservation
  • Prestaging
  • Other things?
  • Does srm-ls work? (a quick probe sketch follows this list)
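
To answer the last point empirically: assuming the SRM v2.2 client tools are installed (client names differ between implementations; "srmls" is the dCache client's spelling), one could simply probe a known file. The endpoint and path below are placeholders.

import subprocess

# Placeholder SURL on an SRM 2.2 test endpoint.
SURL = "srm://se.example.org:8443/srm/managerv2?SFN=/dpm/example.org/home/atlas/testfile"

p = subprocess.Popen(["srmls", SURL], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print("srmls exit code: %d" % p.returncode)
print(out)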

  6. There was also discussion on SSL-based security (no Globus). It was on Maarten's list. Is that something we could push?

It is only required for srmcopy, which is not implemented for DPM and not used by FTS. So using Globus rather than SSL is not a real problem for us.

  7. Local consistency between catalog and SE

So the question here is: are the catalog and the SE consistent? I know that there are some scripts around, some even accessing the MySQL DB of the LFC ...

In any case consistency is an important issue and it will be a more important issue in the future.

I guess we should ask Stephane for details.
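
For reference, the core of such a check does not need the MySQL backdoor. A sketch that walks catalogued replicas and verifies them on the SE through gridftp (the SURL-to-gsiftp scheme swap is an assumption that holds for plain DPM-style SURLs; the input file is a placeholder). The opposite direction - files on the SE but not in the catalog ("dark" data) - needs a walk of the SE namespace instead.

import subprocess

def run(cmd):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    return p.returncode, out

# Placeholder input: one LFN per line, e.g. from a dataset listing.
for line in open("lfns_to_check.txt"):
    lfn = line.strip()
    rc, out = run(["lcg-lr", "--vo", "atlas", lfn])
    for surl in out.split():
        # Assumption: the scheme swap gives a valid gridftp URL for this SE.
        rc, _ = run(["edg-gridftp-exists", surl.replace("srm://", "gsiftp://", 1)])
        if rc != 0:
            print("in catalog but not on SE: %s (%s)" % (surl, lfn))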

  8. Global catalog consistency - dataset location catalog - is it useless?

As I have already expressed several times, I have my reservations about the Dataset Location Catalog. We have to see what to do with that.

  9. Push Jean-Philippe Baud's modification to deployment on the servers as fast as possible

Let's find out what the status is and whether we can smooth out the path. Are tests still necessary?

  10. Push the new LFC UI to all sites - important for DQ2 end-user tools

If we have the new server we would also like to use the new UI. Can that be done in time for the new DQ2 end-user routines being written by Tadashi?

  11. Pages by Stephane for all Tier-1s + adaptation to new LFC features

I understand Ricardo will have better access to the information in the future.

But I think for the current distribution of data to all Tier-1s, as asked for by ATLAS physics management, the pages of Stephane are good enough. Can we get them and run them as a service at CERN and at other relevant Tier-1s?

  12. Are datasets really correct? We should compare the datasets as available in DQ2 with the prodsys database (GUIDs, sizes, checksums); see the comparison sketch below.

Have a look at the BNL pages ... one could do even more ...

Currently I have a problem with the link, because it seems the link has changed ...

http://www.usatlas.bnl.gov/~dial/atprod/validation/html/bnl_datasets.html
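
The comparison itself is simple once both sides can be dumped. A sketch assuming hypothetical text dumps with one "guid size checksum" line per file from DQ2 and from the prodsys DB (the dump step is the part still to be defined; the comparison logic is the point):

def load(path):
    # guid -> (size, checksum)
    table = {}
    for line in open(path):
        guid, size, checksum = line.split()
        table[guid] = (size, checksum)
    return table

dq2 = load("dq2_dump.txt")          # hypothetical dump files
prodsys = load("prodsys_dump.txt")

for guid in prodsys:
    if guid not in dq2:
        print("missing in DQ2: %s" % guid)
    elif dq2[guid] != prodsys[guid]:
        print("mismatch for %s: DQ2 %s vs prodsys %s" % (guid, dq2[guid], prodsys[guid]))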

  13. Open/Close datasets

I think it's very important. Only closed datasets can conceptually be complete at sites!

  14. File-based brokering for GANGA. This is for the case that datasets on the Tier-1 sites turn out to be all incomplete ...

To do so:

  • requires retrieval of replicas from all LFCs
  • can be done in parallel with different processes
  • better with the new LFC
  • can also be done with the future location index
  • splitting with knowledge of which files are on which sites (see the sketch after this list)
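
A sketch of the retrieval/splitting core: query several LFCs in parallel (LFC_HOST steers lcg-lr, which is standard; the LFC hosts and LFNs are placeholders), then group files by the SE host found in their SURLs.

import os, subprocess, threading

LFCS = ["lfc-atlas.cern.ch", "lfc-atlas.example-t1.org"]        # placeholders
LFNS = ["lfn:/grid/atlas/dq2/somedataset/file1",                # placeholders
        "lfn:/grid/atlas/dq2/somedataset/file2"]

replicas = {}               # lfn -> list of SURLs, filled by the workers
lock = threading.Lock()

def query(lfc_host):
    # Ask one LFC for the replicas of every file of the dataset.
    env = dict(os.environ)
    env["LFC_HOST"] = lfc_host
    for lfn in LFNS:
        p = subprocess.Popen(["lcg-lr", "--vo", "atlas", lfn],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
        out, err = p.communicate()
        lock.acquire()
        replicas.setdefault(lfn, []).extend(out.split())
        lock.release()

threads = [threading.Thread(target=query, args=(h,)) for h in LFCS]
for t in threads: t.start()
for t in threads: t.join()

# Split by site: group files by the SE host in their SURLs, so each job can be
# brokered to a site that really holds its input files.
by_site = {}
for lfn, surls in replicas.items():
    for surl in surls:
        host = surl.split("/")[2].split(":")[0]    # srm://host[:port]/...
        by_site.setdefault(host, []).append(lfn)
print(by_site)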

  15. Do we need SRM for analysis or not?

As of today I still do not know whether SRM is required for analysis or not.

It seems that in all cases a trivial translation between the storage URL and the transport URL is possible.

I also verified that in the case of DPM this includes redirection to the correct server. It's built into rfio_open.

Question 1: What do I gain if I ask SRM to do this translation for me? Is there any benefit?

Question 2: If there is no benefit, what is technically the best way to catalog the trivial translations? While they are trivial, only SRM knows about these translations; they are not published in the IS. Should we add them to the TiersOfATLAS?
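
To make the two options concrete, a sketch contrasting asking SRM for the transport URL (lcg-gt is a standard lcg-utils command) against the trivial string rewriting driven by a ToA-style prefix table. The prefix values are assumptions; only such a per-site table would need to be added to TiersOfATLAS.

import subprocess

# Assumption: a per-site prefix table of the kind we could add to TiersOfATLAS.
# The rfio form below mirrors what rfio_open resolves internally for DPM.
TURL_PREFIX = {
    "srm://dpm.example.org/dpm/example.org/home/atlas":
        "rfio://dpm.example.org//dpm/example.org/home/atlas",
}

def trivial_turl(surl):
    # Option 2: the "trivial translation" -- pure string rewriting, no SRM call.
    for srm_prefix, turl_prefix in TURL_PREFIX.items():
        if surl.startswith(srm_prefix):
            return turl_prefix + surl[len(srm_prefix):]
    return None

def srm_turl(surl, protocol="rfio"):
    # Option 1: ask SRM to do the translation for us.
    p = subprocess.Popen(["lcg-gt", surl, protocol],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    words = out.split()
    if words:
        return words[0]
    return None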

  16. We need an LCG-friendly DQ2 end-user tool for GANGA

  • generation of a PoolFileCatalog for local access of data
  • right now we use tools from Tadashi plus adaptations from Johannes

Required features

  • Create a PoolFileCatalog for the files of a dataset that are available locally

  • Automatic site recognition based on TiersOfATLAS info
  • Support for backdoor access - where possible and where required, e.g. Castor at CERN has such a straightforward translation scheme
  • SRM-based access using LCG tools
    • resolve all storage URLs in one go
  • SRM-based access using GFAL
    • use GFAL to resolve with SRM on demand (a PoolFileCatalog sketch follows this list)
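
Whichever resolution route is used, the last step is the same. A sketch of the PoolFileCatalog writing (the GUID/LFN/TURL values are placeholders; the XML layout is the standard POOL one):

# Placeholder content: guid -> (logical file name, locally usable TURL).
files = {
    "A1B2C3D4-0000-0000-0000-000000000001":
        ("somedataset._00001.pool.root",
         "rfio://dpm.example.org//dpm/example.org/home/atlas/somedataset._00001.pool.root"),
}

out = open("PoolFileCatalog.xml", "w")
out.write('<?xml version="1.0" ?>\n')
out.write('<!DOCTYPE POOLFILECATALOG SYSTEM "InMemory">\n')
out.write('<POOLFILECATALOG>\n')
for guid, (lfn, pfn) in files.items():
    out.write('  <File ID="%s">\n' % guid)
    out.write('    <physical><pfn filetype="ROOT_All" name="%s"/></physical>\n' % pfn)
    out.write('    <logical><lfn name="%s"/></logical>\n' % lfn)
    out.write('  </File>\n')
out.write('</POOLFILECATALOG>\n')
out.close()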

  17. Backport the GFAL plugin for ROOT 5.10.00e