• Will your site be willing to participate in the joint tape test?
    • Yes, for both ATLAS and ALICE

  • What is the maximum available tape throughput of the site?
    • This is really hard to quantify. NDGF-T1 is a distributed site, and there are 5 different tape libraries with fairly large differences in the types and number of available tape drives, maximum aggregated speed, amount of stored data and datasets, and amount of available write space for the different VOs (starting at 0). We can calculate a theoretical read speed for a VO (see the sketch below), but since a particular dataset typically isn't spread evenly over all the tape libraries, this would be highly theoretical. Several of the sites are also in the middle of an upgrade, which will affect the numbers greatly.
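    For illustration only, a minimal sketch of what such a theoretical (best-case) read-speed calculation could look like, assuming per-library drive counts and nominal LTO drive rates. The library names, drive counts, and rates below are hypothetical examples, not NDGF-T1 figures or the site's actual method:

```python
# Hypothetical upper bound on aggregate read speed for one VO:
# sum of nominal native drive rates over the libraries holding that VO's data.

NOMINAL_MB_S = {"LTO5": 140, "LTO6": 160, "LTO7": 300, "LTO8": 360}  # approx. native rates

# Hypothetical example: drives per library that the VO could use if nothing else runs.
libraries = {
    "site-A": {"LTO7": 4, "LTO8": 2},
    "site-B": {"LTO6": 6},
    "site-C": {"LTO8": 8},
}

def theoretical_read_speed(libs):
    """Sum of drive_count * nominal_rate over all libraries, in MB/s."""
    return sum(count * NOMINAL_MB_S[model]
               for drives in libs.values()
               for model, count in drives.items())

print(f"Theoretical aggregate read speed: {theoretical_read_speed(libraries) / 1000:.1f} GB/s")

# In practice the achievable rate is far lower: a given dataset sits on only
# some of the libraries, drives are shared with other users, and mount/seek
# overheads dominate for scattered files.
```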

  • Number of tape drives and drive types
    • Same as for the last question. There are 4 (soon 5) tape libraries with a mix of LTO5/6/7/8 drives and some IBM Jaguar models. Several of these libraries are also shared with other, non-WLCG organizations, which makes the "number of tape drives" hard to define: a library might offer 8 drives if no one else is using them at the moment and none are busy with tape reclamation.

  • Does the site have hard-partition between read and write?
    • No. The disk-based frontends (dCache pools) are mostly dedicated to one experiment and to either read or write, so uploads and downloads of data should not conflict once the data has been staged from tape to disk. Of course, these pools share the same tape drives in the library, so there is no hard partition on that side.

  • For multi-VOs, does the site have hard-partition between VOs?
    • No. Only two of the 5 libraries hold both ATLAS and ALICE data, and those share tape drives (but not individual tapes).

  • How the tape drives are allocated among VOs and for write and read?
    • No particular allocation. First come, first served.

  • Tape disk buffer size
    • For read: ALICE: 42TiB, ATLAS: 62TiB
    • For write: ALICE: 50TiB, ATLAS: 54TiB

  • Any other info the site finds relevant and wants to share. For example, how much of the tape infrastructure is shared between experiments, what is the impact of other communities, etc.
    • Some libraries are out of free tapes and thus not available for writes, which also affects all of the above.

Please see the additional questions to answer below.

  • According to the LHC experiments' expected throughputs YY for site XX, how much could you offer for the 2nd week of October, and what percentage of your hardware will you have available by that date?
    • We should be able to cover the numbers in the table. The percentage of hardware needed is hard to say; it depends a lot on which datasets are staged.

  • When do you expect to have required hardware in place?
    • 2 of the 5 libraries are currently being upgraded. Data migration will take most of the fall and will not be done by the 2nd week of October, so whether the right datasets have been migrated by then will affect the outcome. We will spend September making sure that everything that is available functions as well as possible.