Spatial and Temporal Variations of Thermal Contrast in the Planetary Boundary Layer
JOURNAL OF REMOTE SENSING (2024)
Université Libre de Bruxelles (ULB)
Abstract
High-spectral-resolution infrared sounders on board satellites can measure atmospheric trace gases confined to the planetary boundary layer (PBL). However, their sensitivity to the PBL depends on the temperature difference between the surface and the atmosphere, the so-called thermal contrast (TC). After reviewing the physical aspects of TC and how it drives measurement sensitivity, we characterize the global and temporal behavior of TC in clear-sky conditions. Combining land surface temperatures from the Copernicus Global Land Service dataset with air temperatures from the European Centre for Medium-Range Weather Forecasts reanalysis v5, we obtain global monthly averages of TC at high spatial (31 km) and temporal (1 h) resolution. TCs are analyzed as a function of time of day, time of year, location, and land cover. Daytime maxima are observed from 1130 to 1330 local time, ranging from 5-10 K in winter to 10-30 K in summer. A large dependency on land cover type is observed, both in the magnitude of the daily variations and in the seasonality. For bare soils, shrublands, and sparse and herbaceous vegetation, a maximum is seen in summer, with daily TC amplitudes over 30 K. In contrast, for forests, wetlands, and croplands, the seasonal maximum occurs in spring, with daily variations below 15 K. Nighttime TCs typically range between -5 and -10 K. Occasionally, very favorable nighttime measurement conditions occur during winter and autumn due to large temperature inversions. Throughout the paper, we illustrate important concepts by means of satellite observations of NH3 over the Po Valley (Italy).
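The quantity the abstract defines can be sketched in a few lines. The following is a minimal illustration, not the paper's processing chain: it assumes TC is simply the land surface temperature minus the near-surface air temperature, both in kelvin, with the example values chosen hypothetically to match the daytime and nighttime ranges quoted above.

```python
import numpy as np

def thermal_contrast(lst_k, air_temp_k):
    """Thermal contrast (TC): land surface temperature minus
    near-surface air temperature, in kelvin. Positive TC
    (surface warmer than air) favors daytime sounding
    sensitivity; negative TC is typical at night."""
    return np.asarray(lst_k) - np.asarray(air_temp_k)

# Hypothetical midday summer case: LST 315 K, air 290 K gives TC = +25 K
tc_day = thermal_contrast(315.0, 290.0)

# Hypothetical nighttime case: LST 280 K, air 287 K gives TC = -7 K
tc_night = thermal_contrast(280.0, 287.0)

print(tc_day, tc_night)
```

Because the function accepts arrays, the same expression applies gridded LST and air-temperature fields cell by cell, which is how per-pixel TC maps like those described in the abstract would be formed.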
Key words
Land Surface Temperature, Air Quality