Will your site be willing to participate in the tape test?
Yes
For a multi-VO site, which VOs?
Single VO site - ATLAS
What is the maximum available tape throughput of the site?
The tape system at BNL is designed to provide the input and output performance specified by requirements negotiated with each VO, based on the characteristics of the VO's I/O patterns.
Note that the characteristics of the VO's data output and input patterns, i.e., how the VO reads and writes data, have a significant impact on the resources needed for the VO to achieve the negotiated I/O performance requirements.
The system is designed to meet the requirements for each VO, independent of the usage of the system by other VOs.
Number of tape drives and drive types
The number of tape drives for each VO is determined by the I/O requirements for the VO and, as previously stated, is tied to the characteristics of the VO’s I/O patterns.
ATLAS utilizes LTO drive technology, with LTO-8 and LTO-7 drives in the system.
ATLAS is currently writing LTO-7 tapes, but will likely switch to LTO-8 in the near future.
ATLAS has 12 LTO-7 and 20 LTO-8 tape drives.
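As a rough illustration, the nominal aggregate drive throughput implied by these counts can be estimated from the published native LTO transfer rates (300 MB/s for LTO-7, 360 MB/s for LTO-8); actual sustained rates will be lower due to mount/seek overhead and file-size effects, so this is an upper bound, not a measured figure:

```python
# Rough upper-bound estimate of aggregate tape drive throughput.
# Native speeds are nominal LTO specifications; sustained rates depend
# on file sizes, mount/seek overhead, and scheduling.
DRIVES = {
    "LTO-7": {"count": 12, "native_mb_s": 300},
    "LTO-8": {"count": 20, "native_mb_s": 360},
}

total_mb_s = sum(d["count"] * d["native_mb_s"] for d in DRIVES.values())
print(f"Nominal aggregate throughput: {total_mb_s / 1000:.1f} GB/s")
# -> Nominal aggregate throughput: 10.8 GB/s
```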
Does the site have a hard partition between reads and writes?
The resources ATLAS uses to access tape run the gamut from hard partitioning (dCache disk resources) to shared read/write (the tape system disk cache). Tape drive resources are "soft" partitioned in the sense that drives can be moved between reads and writes depending on medium-term demand.
For multi-VO sites, does the site have a hard partition between VOs?
BNL hard-partitions resources between VOs.
How are the tape drives allocated among VOs, and between writes and reads?
Allocation of tape drive resources, in all cases, is determined by the VO’s read and write requirements.
For ATLAS, write resources are allocated based on a combination of recent history and the bandwidth requested by ATLAS. (Verification of this is pending from the HPSS team.)
Tape disk buffer size
For read
Tape disk buffer size is currently dictated by VO I/O bandwidth requirements. The latter dictate the required disk "spindle" count, which in turn determines the disk buffer size.
570 TB, spread across many dCache pools and disk servers. We are increasing this amount.
For write
Tape disk buffer size for writes is currently dictated by VO I/O bandwidth requirements. The latter dictate the required disk "spindle" count, which in turn determines the disk buffer size.
385 TB, spread across many dCache pools and disk servers.
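The bandwidth-to-spindle-count reasoning above can be sketched as follows. The per-disk streaming rate and capacity used here are illustrative assumptions, not BNL's actual hardware figures:

```python
import math

# Hypothetical per-disk figures (assumptions for illustration only;
# not BNL's actual hardware specifications).
DISK_STREAM_MB_S = 100   # sustained streaming rate per spindle
DISK_CAPACITY_TB = 4     # usable capacity per spindle

def buffer_size_tb(required_gb_s: float) -> float:
    """Minimum buffer size: spindles needed to sustain the bandwidth,
    times the per-spindle capacity."""
    spindles = math.ceil(required_gb_s * 1000 / DISK_STREAM_MB_S)
    return spindles * DISK_CAPACITY_TB

# e.g. a 2.4 GB/s requirement needs 24 spindles -> 96 TB minimum
print(buffer_size_tb(2.4))
```

Sites typically deploy well beyond this minimum (as the 570 TB and 385 TB figures show) to absorb bursts and to keep data resident while tape migration catches up.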
Any other info the site finds relevant and wants to share. For example, how much of the tape infrastructure is shared between experiments, what is the impact of other communities, etc.
Please see some additional questions to answer
Given the LHC experiment's expected throughput YY at site XX, how much could you offer for the 2nd week of October, and what percentage of your hardware will be in place by that date?
During data taking, ATLAS expects to read at 0.5 GB/s and write at 2.4 GB/s. After data taking, reads and writes are expected to be 1.9 GB/s and 1.1 GB/s, respectively. We will be able to meet ATLAS' needs, as we already often exceed those rates. From a site perspective, it is important to understand what the data volume will be at those rates.
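The data volume implied by those rates over a one-week challenge window can be estimated directly; this is a simple back-of-the-envelope calculation, not an official ATLAS figure:

```python
# Total data moved over one week at a sustained rate.
SECONDS_PER_WEEK = 7 * 24 * 3600  # 604800 s

def weekly_volume_pb(rate_gb_s: float) -> float:
    """Volume in PB (1 PB = 10^6 GB) for one week at a sustained rate."""
    return rate_gb_s * SECONDS_PER_WEEK / 1e6

print(f"writes during data taking: {weekly_volume_pb(2.4):.2f} PB")
print(f"reads during data taking:  {weekly_volume_pb(0.5):.2f} PB")
# -> writes during data taking: 1.45 PB
# -> reads during data taking:  0.30 PB
```

At the stated data-taking rates, the site would absorb roughly 1.45 PB of writes and serve about 0.3 PB of reads over the week.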
When do you expect to have required hardware in place?
The hardware needed to achieve these goals is already in place, although it should be noted that additional hardware will be added after the tape challenge.