HST in the Clouds: 25 Years of HST Processing

Astronomical Society of the Pacific Conference Series (2017)

Abstract
The Hubble Space Telescope (HST) archive system at the CADC, ESAC, and STScI has been evolving constantly since these centers started archiving HST data in 1990. After successive upgrades to the associated storage systems (optical disks, CDs, DVDs, magnetic disks) and the implementation of multiple processing systems (On-the-Fly calibration, CACHE), the HST archive system at CADC now runs on a cloud-based processing system. After clearing multiple hurdles, mostly caused by the way the HST calibration system was designed many years ago, we report a working system under the CANFAR cloud (Gaudet et al. 2009), designed and operated by CADC and hosted on Compute Canada cloud infrastructure. Although not very large, the HST collection needs constant recalibration to take advantage of new software and calibration files. Here we describe the unique challenges of bringing legacy pipeline software to a massive cloud computing system. The HST processing system can, in principle, be scaled easily: more than 200 cores are currently available to process HST images, and this could potentially grow to thousands of cores, allowing a very uniformly calibrated archive, since any perturbation to the system could be dealt with within a few hours. We discuss why this might not be possible and propose solutions.
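The abstract does not include code, but as a rough illustration of the fan-out pattern it describes, below is a minimal sketch of dispatching independent recalibration jobs across local cores. It assumes the HSTCAL command-line pipeline calwf3 is installed and that raw exposures are staged in a local raw/ directory as *_raw.fits; both the directory layout and the choice of instrument pipeline are assumptions for illustration, not details from the paper. On a cloud deployment such as CANFAR, each worker VM would instead pull dataset names from a shared job queue.

    # Sketch: recalibrate HST exposures in parallel, one job per core.
    # Assumptions (not from the paper): HSTCAL's `calwf3` is on PATH,
    # and raw WFC3 exposures sit in ./raw as *_raw.fits.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    def calibrate(raw_file: Path) -> tuple:
        """Run one calibration job; return (dataset name, exit code)."""
        result = subprocess.run(["calwf3", str(raw_file)],
                                capture_output=True, text=True)
        return raw_file.stem, result.returncode

    if __name__ == "__main__":
        raw_files = sorted(Path("raw").glob("*_raw.fits"))
        # Default pool size is the local core count; a cloud worker
        # fleet generalizes this to hundreds or thousands of cores.
        with ProcessPoolExecutor() as pool:
            for dataset, code in pool.map(calibrate, raw_files):
                status = "ok" if code == 0 else f"failed (exit {code})"
                print(f"{dataset}: {status}")

Because each exposure is calibrated independently, throughput scales roughly linearly with the number of workers, which is what makes a full-archive recalibration within hours plausible once enough cores are available.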