US ATLAS and US CMS HPC and Cloud Blueprint

Fernando Barreiro Megino, Lincoln Bryant, Dirk Hufnagel, Kenyi Hurtado Anampa

arXiv (Cornell University), 2023

Abstract
The Large Hadron Collider (LHC) at CERN houses two general purpose detectors - ATLAS and CMS - which conduct physics programs over multi-year runs to generate increasingly precise and extensive datasets. The efforts of the CMS and ATLAS collaborations led to the discovery of the Higgs boson, a fundamental particle that gives mass to other particles, a monumental achievement in particle physics that was recognized with the 2013 Nobel Prize in Physics awarded to François Englert and Peter Higgs. These collaborations continue to analyze data from the LHC and are preparing for the high-luminosity data-taking phase at the end of the decade. The computing models of these detectors rely on a distributed processing grid hosted by more than 150 associated universities and laboratories worldwide. However, the high-luminosity data will require a significant expansion of the existing computing infrastructure. To address this, both collaborations have been working for years on integrating High Performance Computers (HPC) and commercial cloud resources into their infrastructure, and they continue to assess the potential role of such resources in coping with the demands of the new high-luminosity era. US ATLAS and US CMS computing management have charged the authors with providing a blueprint document examining current and possible future use of HPC and Cloud resources, outlining integration models, possibilities, challenges, and costs. The document addresses key questions such as the optimal use of resources for the experiments and funding agencies, the main obstacles that need to be overcome for resource adoption, and the areas that require more attention.
Keywords
US CMS HPC, ATLAS, cloud