Leveraging parallel computation to analyze landscape evolution

March 14, 2019

This new article by BIDS Data Science Fellow Richard Barnes introduces new ways of running hydrological models quickly by leveraging specialized processors called GPUs (graphics processing units). Computers have been getting faster for years, but the way in which new high-performance computers achieve that speed has fundamentally changed, and the change brings challenges: GPUs can be difficult to program, and some types of problems do not map well onto them.

Hydrological models can be used over short time scales to assess hazards during flash floods or storm surges. Over long time scales, the flow of water shapes landscapes, but there are many theories as to how it does so.

One way to differentiate such theories is to build models representing them and then run these models on artificial or actual landscapes to determine which theory gives the most realistic results. But this problem can be more challenging than it may first seem. Say, for instance, that you think that the slope of a river, the amount of water flowing in it, the hardness of the river bottom, and the acidity of the water all have important effects on erosion. If each of these four variables can be weighted in five different ways, then there are 5 × 5 × 5 × 5 = 625 models to test. Incorporating other variables or randomness can quickly lead to thousands of models.
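To make the scale of that search concrete, the short Python sketch below enumerates the combinations. The variable names and the five-level weighting scheme are illustrative assumptions for this post, not details from the article itself:

    import itertools

    # Hypothetical erosion-model inputs; the five-level weighting is an
    # assumption for illustration, not a scheme taken from the article.
    variables = ["slope", "discharge", "bed_hardness", "acidity"]
    weights = [0.0, 0.25, 0.5, 0.75, 1.0]  # five possible weights per variable

    # One weight per variable, in every combination: 5**4 = 625 candidate models.
    candidate_models = list(itertools.product(weights, repeat=len(variables)))
    print(len(candidate_models))  # prints 625

Each tuple in candidate_models is one candidate weighting to run against a test landscape, which is why the number of model runs grows so quickly.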

In this new article, Richard shows how to run a new class of problems (hydrologic models) on GPUs and how to transition easily from traditional computing methods, making it easier for any scientist working in this area to take advantage of the latest advances in computing.
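The article itself contains the algorithmic details, but the flavor of the approach can be hinted at with a minimal sketch. The code below is an illustration, not Richard's implementation: it assumes the widely used stream-power incision law, E = K A^m S^n, and updates every grid cell independently, which is the per-cell, data-parallel pattern that GPUs execute efficiently:

    import numpy as np

    def incision_step(z, area, K=1e-5, m=0.5, n=1.0, dx=30.0, dt=100.0):
        """One explicit erosion step under the stream-power law E = K * A**m * S**n.

        z is an elevation grid (metres); area is the upslope drainage area per
        cell. All parameter values are illustrative defaults, not values from
        the paper.
        """
        # Approximate local slope as the gradient magnitude of the surface.
        dzdy, dzdx = np.gradient(z, dx)
        slope = np.hypot(dzdx, dzdy)

        # Every cell is updated independently -- the data-parallel pattern
        # that maps naturally onto a GPU (for example, by swapping numpy for
        # a GPU-backed drop-in such as cupy or jax.numpy).
        erosion = K * area**m * slope**n
        return z - dt * erosion

    # Toy usage on a random 256 x 256 surface with a placeholder drainage area.
    rng = np.random.default_rng(0)
    z = rng.random((256, 256)) * 100.0
    area = np.full_like(z, 900.0)
    z = incision_step(z, area)

Because each cell's update depends only on its immediate neighborhood, thousands of GPU cores can work on different cells at once; steps like computing the drainage area itself, which follow flow paths downstream and look inherently sequential, are the kind of challenge the article addresses.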

Accelerating a fluvial incision and landscape evolution model with parallelism
January 10, 2019  |  Richard Barnes  |  Geomorphology
Open access through arXiv.org: https://arxiv.org/abs/1803.02977



Featured Fellows

Richard Barnes

Energy & Resources Group, EECS
BIDS Alum – Data Science Fellow