Understanding the Way Machines Simulate Hydrological Processes - A Case Study of Predicting Fine-Scale Watershed Response on a Distributed Framework.

IEEE Trans. Geosci. Remote Sens. (2023)

Abstract
This study developed a deep neural network (DNN)-based distributed hydrologic model for an urban watershed in the Republic of Korea. The developed model is composed of multiple long short-term memory (LSTM) hidden units connected by a fully connected layer. To examine the study area using the developed model, time series of 10-min radar-gauge composite precipitation data and 10-min temperature data at 239 model grid cells with 1-km resolution are used as inputs to simulate 10-min watershed flow discharge as the output. The model performed well for the calibration period (2013-2016) and the validation period (2017-2019), with Nash-Sutcliffe efficiency coefficient values of 0.99 and 0.67, respectively. Further in-depth analyses were performed to derive the following conclusions: 1) the map of runoff-precipitation ratios produced using the developed DNN model resembled the imperviousness ratio map of the study area derived from land cover data, revealing that the DNN successfully deep-learned the precipitation partitioning processes using only the input and output data, without depending on any a priori information about hydrology; 2) the model successfully reproduced the soil moisture-dependent runoff process, an essential prerequisite of continuous hydrologic models; and 3) each LSTM unit has a different temporal sensitivity to the precipitation stimulus, with fast-response LSTM units having greater output weight factors near the watershed outlet, which implies that the developed model has a mechanism to separately consider hydrological components with distinct response times, such as direct runoff and groundwater-driven baseflow.
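
The abstract describes the architecture only at a high level: LSTM hidden units feeding a fully connected layer that maps gridded 10-min precipitation and temperature at 239 cells to a single watershed discharge series. The Python (PyTorch) sketch below is a rough illustration of that structure under stated assumptions, not the authors' implementation; the hidden size, forcing layout, and sequence length are hypothetical choices made only for the example.

# Minimal sketch (assumptions, not the paper's code): one LSTM layer whose
# hidden units are combined by a fully connected layer into discharge.
import torch
import torch.nn as nn

class DistributedLSTMModel(nn.Module):
    def __init__(self, n_cells: int = 239, n_forcings: int = 2, hidden_size: int = 64):
        super().__init__()
        # Each time step carries precipitation and temperature for every 1-km grid cell.
        self.lstm = nn.LSTM(input_size=n_cells * n_forcings,
                            hidden_size=hidden_size,
                            batch_first=True)
        # Fully connected layer maps the LSTM hidden units to one discharge value per step.
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_cells * n_forcings)
        h, _ = self.lstm(x)            # (batch, time_steps, hidden_size)
        return self.fc(h).squeeze(-1)  # (batch, time_steps) discharge series

# Example: one week of 10-min steps (7 * 24 * 6 = 1008) with random forcing.
model = DistributedLSTMModel()
forcing = torch.randn(1, 1008, 239 * 2)
discharge = model(forcing)

In this layout, the per-unit output weights of the final linear layer are what the abstract's third conclusion inspects: units with fast temporal responses to precipitation receive larger weights near the watershed outlet, while slower units can represent baseflow-like behavior.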
Keywords
Deep learning, distributed hydrologic model, hydrology, long short-term memory (LSTM), machine learning, radar precipitation