AMBER Experiment’s Online Filter System for Virtualised IT Infrastructure
IEEE Transactions on Nuclear Science (2024)
Czech Technical University in Prague
Abstract
The operation of high-level trigger systems in high-energy physics experiments requires substantial computing resources. Typically, these systems are built as dedicated computing farms with expensive cutting-edge hardware to provide sufficient computing power; they are usually situated on-site and process detector data in real time to minimize latency. This paper presents an alternative high-level filter system designed for the AMBER experiment at CERN. The novel aspect of our approach is its high efficiency, which removes the need for a dedicated on-site computer farm. Instead, it makes use of existing shared resources located in the CERN data centre. The proposed system efficiently handles the data generated by this medium-sized experiment and performs numerous parallel filtering tasks in real time. All system components, including databases, storage, and processing units, operate within a shared, fully virtualized environment. This flexible environment scales effectively, allowing allocated resources to be adjusted in accordance with agreements with service managers. We present the architectural design and implementation of the system. To demonstrate its capabilities, we conducted a series of measurements assessing its performance, latencies, and stability under the maximum expected loads. The results demonstrate the resilience and reliability of the filtering system while keeping overall costs to a minimum.
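The abstract describes many parallel filtering tasks running over detector data in a shared environment. As an illustration only (not the AMBER implementation; the event structure, threshold, and chunking scheme below are assumptions), the sketch shows the general pattern of independent filter workers each deciding which events to keep:

```python
# Illustrative sketch of parallel event filtering (hypothetical, not the
# authors' code): chunks of events are filtered independently by worker
# processes, mirroring the "numerous parallel filtering tasks" idea.
from multiprocessing import Pool
from typing import Dict, List


def passes_filter(event: Dict) -> bool:
    # Hypothetical selection: keep events whose summed hit energy exceeds a threshold.
    return sum(event.get("hit_energies", [])) > 10.0


def filter_chunk(events: List[Dict]) -> List[Dict]:
    # Each worker handles one chunk on its own, so chunks scale across allocated cores.
    return [e for e in events if passes_filter(e)]


if __name__ == "__main__":
    # Toy input standing in for detector readout data: 4 chunks of 8 events each.
    chunks = [[{"hit_energies": [float(i), float(j)]} for j in range(8)]
              for i in range(4)]
    with Pool(processes=4) as pool:
        accepted = [e for chunk in pool.map(filter_chunk, chunks) for e in chunk]
    print(f"Accepted {len(accepted)} events")
```

In the actual system, the per-event decision and the distribution of work across virtualized processing units would of course be far more involved; the sketch only conveys the embarrassingly parallel structure that lets such a filter scale with allocated resources.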
Key words
Data acquisition, Data handling, High energy physics computing, Software performance, Readout systems, Parallel processing