GPU Parallel Computing and Fast Simulation of Distributed Hydrological Models
DOI:
Author:
Affiliation:

Author biography:

LIU Yonghe (1976-), male, born in Zhuozi, Inner Mongolia; Ph.D., associate professor, and master's supervisor. His main research interests are statistical downscaling of meteorological data, distributed hydrological models, and geo-information science. E-mail: sucksis@163.com

Corresponding author:

CLC number:

P338

Fund projects:

National Natural Science Foundation of China (41105074, 40975048); Open Fund of the Key Laboratory of Digital Earth, Chinese Academy of Sciences (2011LDE010); Doctoral Fund of Henan Polytechnic University (B2011-038)


Abstract:

Distributed hydrological models are being applied to watershed hydrological processes in ever greater depth and breadth, and they are often coupled with numerical weather and climate prediction, which places an enormous computational burden on them. In recent years, advances in GPU technology have enabled ordinary computers to perform efficient and inexpensive parallel computing. This paper proposes carrying out data interpolation, grid-cell runoff generation, and grid-cell flow concentration (unit hydrograph calculation) in parallel on the GPU, while channel routing with the Muskingum method uses a non-parallel recursive scheme. Based on a common notebook computer with an NVIDIA GPU/CUDA and the C# language, the simulation of the rainfall-runoff process in the Yihe River Basin with a distributed hydrological model built on the Xinanjiang model shows that the parallel execution of precipitation spatial interpolation and Xinanjiang runoff generation is 8 to 9 times as fast as the corresponding C# implementation on an ordinary CPU. The direct recursive implementation of the Muskingum routing is also 0.5 to 0.9 times faster than the conventional implementation that uses a routing-order table.
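As a rough illustration of the GPU parallelism described in the abstract, the sketch below interpolates precipitation for every grid cell in a separate CUDA thread. The paper does not publish its code on this page, and the interpolation method is assumed here to be inverse-distance weighting; the kernel name and all parameters (idwKernel, stX, cellP, and so on) are hypothetical.

    // Hypothetical sketch: one CUDA thread interpolates the precipitation of one
    // grid cell from the gauge stations by inverse-distance weighting (power 2).
    __global__ void idwKernel(const float *stX, const float *stY, const float *stP,
                              int nStations,
                              const float *cellX, const float *cellY,
                              float *cellP, int nCells)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nCells) return;                    // guard for the last block

        float wSum = 0.0f, pSum = 0.0f;
        for (int s = 0; s < nStations; ++s) {
            float dx = cellX[i] - stX[s];
            float dy = cellY[i] - stY[s];
            float d2 = dx * dx + dy * dy;
            if (d2 < 1.0e-6f) {                     // cell coincides with a gauge
                pSum = stP[s];
                wSum = 1.0f;
                break;
            }
            float w = 1.0f / d2;                    // weight = 1 / distance^2
            wSum += w;
            pSum += w * stP[s];
        }
        cellP[i] = pSum / wSum;                     // interpolated precipitation
    }

A host launch such as idwKernel<<<(nCells + 255) / 256, 256>>>(...) assigns one thread per grid cell, which matches the per-cell independence of interpolation and runoff generation.

The recursive, non-parallel Muskingum channel routing mentioned in the abstract can likewise be pictured as a depth-first traversal of the river network: a reach is routed only after all of its upstream reaches have been routed, so no precomputed routing-order table is required. The host-side sketch below uses the standard Muskingum relation O2 = C0*I2 + C1*I1 + C2*O1; the Reach structure and routeReach function are assumptions for illustration, not the authors' implementation.

    // Hypothetical sketch of recursive Muskingum routing over a channel network.
    // Each reach is routed only after its upstream reaches, via depth-first
    // recursion, so no routing-order table has to be prepared in advance.
    #include <vector>

    struct Reach {
        std::vector<int> upstream;     // indices of reaches draining into this one
        float c0, c1, c2;              // Muskingum coefficients from K, x and dt
        float inflowPrev, outflowPrev; // I1 and O1 of the previous time step
        float lateral;                 // runoff generated locally along this reach
        float outflow;                 // O2, result of the current time step
        bool routed;                   // guard so a reach is routed only once
    };

    // Route one reach for the current time step and return its outflow O2.
    float routeReach(std::vector<Reach> &net, int id)
    {
        Reach &r = net[id];
        if (r.routed) return r.outflow;

        // Current inflow I2: outflows of all upstream reaches plus local runoff.
        float inflow = r.lateral;
        for (int up : r.upstream)
            inflow += routeReach(net, up);

        // Standard Muskingum routing equation: O2 = C0*I2 + C1*I1 + C2*O1.
        r.outflow = r.c0 * inflow + r.c1 * r.inflowPrev + r.c2 * r.outflowPrev;

        r.inflowPrev  = inflow;        // save state for the next time step
        r.outflowPrev = r.outflow;
        r.routed      = true;
        return r.outflow;
    }

Calling routeReach on the basin outlet once per time step (after clearing the routed flags) then routes the whole network in upstream-to-downstream order automatically.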

History
  • Received: 2013-10-12
  • Revised:
  • Accepted:
  • Published online: 2022-06-21
  • Issue date: