• Will your site be willing to participate in the tape test?
    • For a multi-VO site, for which VOs?
      • CNAF is available to participate in the tape test for all WLCG VOs.

  • What is the maximum available tape throughput of the site?
    • 11.6 GB/s

  • Number of tape drives and drive types
    • 16 Oracle T10000D and 19 IBM TS1160

  • Does the site have a hard partition between read and write?
    • No. All tape drives can be used to read (max 11.6 GB/s). WLCG writing is performed only by the 19 IBM TS1160 tape drives (max 7.6 GB/s). WLCG data written after April 2020 can be read only by the 19 IBM TS1160 tape drives (max 7.6 GB/s). See the rough arithmetic check after this item.
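      A rough arithmetic check (Python) of the quoted aggregate throughputs. The per-drive figures are assumptions based on nominal native drive rates (roughly 252 MB/s for the Oracle T10000D and 400 MB/s for the IBM TS1160), not CNAF measurements:

        # Assumed nominal native rates per drive, not CNAF measurements
        T10000D_DRIVES, T10000D_RATE_GBPS = 16, 0.252
        TS1160_DRIVES, TS1160_RATE_GBPS = 19, 0.400

        write_max = TS1160_DRIVES * TS1160_RATE_GBPS                  # WLCG writes use TS1160 only
        read_max = write_max + T10000D_DRIVES * T10000D_RATE_GBPS     # reads can use all drives

        print(f"max write ~{write_max:.1f} GB/s, max read ~{read_max:.1f} GB/s")
        # -> max write ~7.6 GB/s, max read ~11.6 GB/s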

  • For a multi-VO site, does the site have a hard partition between VOs?
    • No. All tape drives are shared between VOs (WLCG + 25 other VOs).

  • How are the tape drives allocated among VOs and between write and read?
    • Writing: each WLCG VO has a configurable maximum number of tape drives usable at the same time (usually up to 4). The majority of non-LHC VOs use a single GPFS file system, which has a single configurable maximum number of tape drives usable at the same time (usually up to 4).
    • Reading: a software orchestrator dynamically allocates drives to VOs on the basis of free drives and queued recall processes. Each VO can use up to 8-10 tape drives. If all tape drives are occupied, recalls remain queued, so writing activities take precedence (see the illustrative sketch after this item).
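      A minimal sketch (Python) of the allocation policy described above. It is an illustrative model only, not the actual CNAF orchestrator; the class name, default caps and queueing structure are assumptions made purely for illustration:

        # Illustrative model of the drive-allocation policy, not the CNAF orchestrator.
        from collections import deque

        class DriveAllocator:
            def __init__(self, total_drives=35, write_cap_per_vo=4, read_cap_per_vo=8):
                self.total = total_drives            # 16 T10000D + 19 TS1160, shared by all VOs
                self.write_cap = write_cap_per_vo    # "usually up to 4" concurrent write drives per VO
                self.read_cap = read_cap_per_vo      # orchestrator allows roughly 8-10 read drives per VO
                self.writes = {}                     # active write drives per VO
                self.reads = {}                      # active read (recall) drives per VO
                self.recall_queue = deque()          # recalls waiting for a free drive

            def _busy(self):
                return sum(self.writes.values()) + sum(self.reads.values())

            def request_write(self, vo):
                # Writes are bounded only by the per-VO cap and the free drives,
                # so in practice they take precedence over queued recalls.
                if self._busy() < self.total and self.writes.get(vo, 0) < self.write_cap:
                    self.writes[vo] = self.writes.get(vo, 0) + 1
                    return True
                return False

            def request_recall(self, vo):
                # A recall gets a free drive up to the per-VO read cap;
                # otherwise it remains queued until a drive is released.
                if self._busy() < self.total and self.reads.get(vo, 0) < self.read_cap:
                    self.reads[vo] = self.reads.get(vo, 0) + 1
                    return True
                self.recall_queue.append(vo)
                return False

            def release_read(self, vo):
                self.reads[vo] -= 1
                if self.recall_queue:                # re-dispatch the oldest queued recall
                    self.request_recall(self.recall_queue.popleft())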

  • Tape disk buffer size
    • For both read and write
      • ALICE: 573 TB (GPFS file system dedicated to buffer)
      • ATLAS: 573 TB (GPFS file system dedicated to buffer)
      • CMS: up to 8.1 PB (GPFS file system shared between pure disk and buffer). The minimum size guaranteed for the buffer is 594 TB.
      • LHCb: up to 6.9 PB (GPFS file system shared between pure disk and buffer). The minimum size guaranteed for the buffer is 485 TB.

  • Any other info the site finds relevant and wants to share. For example, how much of the tape infrastructure is shared between experiments, what is the impact of other communities, etc.
    • All tape drives are shared between all VOs running at CNAF. Nevertheless, the system should be able to support the data rates expected by the WLCG VOs for 2022.

Please see some additional questions to answer

  • According to the expected LHC experiment throughput YY at site XX, how much could you offer for the 2nd week of October, and what percentage of the hardware will you have in place by that date?
    • We should be able to satisfy the writing and reading requirements by the 2nd week of October.

  • When do you expect to have required hardware in place?
    • We already have sufficient servers and tape drives to satisfy the throughput requirements.

-- JuliaAndreeva - 2021-06-25
