Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

2017 International Conference on High Performance Computing & Simulation (HPCS)

Abstract
Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes of data per second, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments, and a substantial amount of over-provisioning. In this paper, we introduce a discrete-event simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal of being accurate with respect to its main operational characteristics. We measure buffer occupancy by counting the number of elements in buffers; resource utilization by measuring output bandwidth and counting the number of active processing units; and their time evolution by comparing data over many consecutive, small periods of time. We study the simulation error by comparing the results to a large amount of real-world operational data, show which efforts are required to minimize the error for such a configuration, and explain possible reasons for the most important outliers we observe. Furthermore, we use this tool to derive an operational envelope of the system, which describes the minimal amount of resources required to fulfill certain real-time guarantees.
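As an illustration of the kind of discrete-event model described above (not the paper's OMNeT++ implementation), the following standalone C++ sketch simulates a single buffer fed by random arrivals and drained by one processing unit, and records buffer occupancy over consecutive small time windows. The arrival rate, service rate, simulation horizon, and window size are assumed values chosen only for the example.

```cpp
// Minimal discrete-event sketch of one DAQ buffer: arrival and
// service-completion events are ordered by time, and occupancy is
// sampled per fixed-size window for later comparison with measurements.
#include <cstdio>
#include <queue>
#include <random>
#include <vector>

struct Event { double time; bool isArrival; };
struct Later { bool operator()(const Event& a, const Event& b) const { return a.time > b.time; } };

int main() {
    std::mt19937 rng(42);
    std::exponential_distribution<double> interArrival(1000.0); // ~1000 fragments/s (assumed)
    std::exponential_distribution<double> service(1200.0);      // ~1200 fragments/s (assumed)

    const double horizon = 10.0;   // simulated seconds (assumed)
    const double window  = 0.1;    // occupancy sampling window in seconds (assumed)
    std::priority_queue<Event, std::vector<Event>, Later> events;
    events.push({interArrival(rng), true});

    long occupancy = 0;            // fragments currently buffered
    bool busy = false;             // is the single processing unit active?
    double nextSample = window;
    std::vector<long> samples;

    while (!events.empty()) {
        Event ev = events.top();
        events.pop();
        if (ev.time > horizon) break;
        while (ev.time >= nextSample) {   // record occupancy once per window
            samples.push_back(occupancy);
            nextSample += window;
        }
        if (ev.isArrival) {
            ++occupancy;
            events.push({ev.time + interArrival(rng), true});  // schedule next arrival
            if (!busy) { busy = true; events.push({ev.time + service(rng), false}); }
        } else {
            --occupancy;                                       // one fragment processed
            if (occupancy > 0) events.push({ev.time + service(rng), false});
            else busy = false;
        }
    }
    for (std::size_t i = 0; i < samples.size(); ++i)
        std::printf("window %zu: occupancy %ld\n", i, samples[i]);
    return 0;
}
```

Per-window occupancy traces like these can be compared window by window against recorded operational data, which is the spirit of the time-binned validation the abstract describes; the real model covers many buffers, links, and processing units rather than a single queue.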
Keywords
CERN, ATLAS, Data Acquisition System, Simulation, Modeling, OMNeT++