Pictures | |
---|---|
Overview of TDAQ system | |
Functional diagram of the ATLAS Trigger and Data Acquisition system in Run 3 | |
Functional diagram of the ATLAS Trigger and Data Acquisition system in Run 2 showing expected peak rates and bandwidths through each component. | |
Schematic of the ATLAS Trigger and Data Acquisition system in Run 2, with specific focus on the components of the L1 Trigger system. | |
TDAQ Networking | |
Schematic of the Run 2 data acquisition network showing link utilisation. | |
Performance Plots with 2016 data | Run 311287, started on Sun Oct 23 2016, 04:42 UTC. Peak Stable Luminosity: 1.3×10³⁴ cm⁻² s⁻¹. Peak Stable Delivered Physics Luminosity: 416.8 pb⁻¹ |
Average HLT bandwidth as a function of instantaneous luminosity. The largest instantaneous luminosity delivered in Run 1 is shown by the vertical dashed line. | |
Evolution of SFO output bandwidth during a run. The maximum output bandwidth observed in Run 1 is shown by the horizontal dashed line. | |
Evolution of the average buffer occupancy for a typical single readout channel during a run. The per-channel limit in Run 1 of 64 MB is shown by the horizontal dashed line. | |
Average ROS buffer occupancy for a single readout channel plotted against the number of processing instances in use in the HLT farm. The per-channel limit in Run 1 of 64 MB is shown by the horizontal dashed line. A back-of-envelope queueing sketch of this trend is given below the table. | |
Performance Plots with 2018 data | |
Cumulative data recording efficiency versus total integrated luminosity collected by the ATLAS experiment for proton-proton collisions in each data-taking year between 2011 and 2018. | |
Relative frequency of selected-event sizes for a typical ATLAS run during 2018 operations, including the pre-stable-beams period. The different event-size populations visible in the plot arise from fully and partially built streams, as well as from different detector readout configurations. | |
Relative frequency of selected-event sizes for a typical ATLAS run during 2018 operations, including the pre-stable-beams period. The stream classification shown distinguishes the streams used for physics analyses; the streams used for data quality monitoring; and other calibration streams containing either physics events recorded with partial event building or detector calibration data. | |
Fraction of HLT output throughput used by each HLT stream for a typical ATLAS run during 2018 operations, including the pre-stable-beams period. All streams are shown, including physics, calibration, and monitoring streams. | |
Rates of ERS events handled and archived by the MTS service during operations in September-October 2019. Shown are event rates per second, with the maximum and average for each day. | |
Data archived in PBeast per hour, over a few days of ATLAS operations in October 2019. Data provided by PBeast metrics. | |
Data archived in PBeast per day, over a few days of ATLAS operations in October 2019. Data provided by PBeast metrics. | |
FTK AMB plots on spy dumps and cooling. Details in ATL-COM-DAQ-2018-172.pdf | |
L1Topo HW-SW mismatch. Details in ATL-COM-DAQ-2018-170.pdf | |
Rate of monitoring data updates in the ATLAS partition during a few minutes of a typical data-taking period, with HLT publication spikes every 30 s. | |
Rate of monitoring data updates in user partitions during the second milestone week (M2) in 2018. | |
Average rate of PBeast REST interface requests during several months of Run 2. Most requests come from the auto-refresh of Grafana dashboards (a polling sketch is given below the table). | |
High rate of PBeast queries during a few minutes of one of the nightly data-taking runs (18 May 2018). | |
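
The trend in the ROS buffer occupancy plot above, where occupancy falls as HLT processing instances are added, can be reproduced with a back-of-envelope queueing model. The sketch below applies Little's law with an M/M/1-style residency time; every rate and size in it is an assumed, illustrative number (only the 64 MB per-channel limit comes from the caption), and none of it is ATLAS software.

```python
# Back-of-envelope model only; not ATLAS software. All rates and sizes are
# assumed illustrative values, except the 64 MB Run 1 per-channel limit
# quoted in the caption above.

L1_RATE_HZ = 100_000.0          # assumed L1 accept rate (events/s)
FRAGMENT_BYTES = 1_000.0        # assumed fragment size per readout channel
CLEAR_RATE_PER_INSTANCE = 8.0   # assumed events/s retired per HLT instance (hypothetical)
BUFFER_LIMIT_MB = 64.0          # Run 1 per-channel limit from the caption

def occupancy_mb(n_instances: int) -> float:
    """Little's law: occupancy = arrival rate x fragment size x mean residency.

    Residency is modelled M/M/1-style as 1/(mu - lambda), where mu is the
    aggregate rate at which the HLT farm retires events (assumed to scale
    linearly with the number of processing instances).
    """
    lam = L1_RATE_HZ
    mu = n_instances * CLEAR_RATE_PER_INSTANCE
    if mu <= lam:
        return float("inf")     # unstable regime: the buffer only grows
    return lam * FRAGMENT_BYTES / (mu - lam) / 1e6

for n in (12_600, 13_000, 15_000, 20_000, 30_000):
    occ = occupancy_mb(n)
    status = "over 64 MB limit" if occ > BUFFER_LIMIT_MB else "ok"
    print(f"{n:>6} HLT instances -> {occ:8.2f} MB ({status})")
```

With these assumed numbers the occupancy falls roughly as 1/(mu - lambda), which is the qualitative shape one would expect in the plotted data; the absolute values carry no meaning.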
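The PBeast request-rate plots above are dominated by periodic dashboard refreshes. As a rough illustration of that access pattern, here is a minimal polling loop against a hypothetical PBeast-style REST endpoint; the URL, query parameter names, and refresh interval are all placeholders, not the documented PBeast API.

```python
# A minimal sketch of the dashboard-style polling that dominates the PBeast
# request rate. The endpoint URL, query parameters, and interval below are
# hypothetical placeholders, not the documented PBeast REST API.

import time

import requests  # third-party HTTP client: pip install requests

PBEAST_URL = "https://pbeast.example.cern.ch/api/data"  # placeholder, not a real host
QUERY = {
    "partition": "ATLAS",     # hypothetical parameter names
    "class": "RunParams",
    "attribute": "UpdateRate",
    "since": "-5m",           # e.g. the last five minutes
}

def refresh_once(session: requests.Session) -> None:
    """Issue one read-only request, as a Grafana panel refresh would."""
    try:
        resp = session.get(PBEAST_URL, params=QUERY, timeout=10)
        print(f"{time.strftime('%H:%M:%S')} HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{time.strftime('%H:%M:%S')} request failed: {exc}")

def main(interval_s: float = 30.0, refreshes: int = 10) -> None:
    # Many dashboards, each refreshing every interval_s seconds,
    # add up to the average request rate shown in the plot.
    with requests.Session() as session:
        for _ in range(refreshes):
            refresh_once(session)
            time.sleep(interval_s)

if __name__ == "__main__":
    main()
```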