Accelerating a fluvial incision and landscape evolution model with parallelism

Richard Barnes

Geomorphology
January 10, 2019

Solving inverse problems, performing sensitivity analyses, and achieving statistical rigour in landscape evolution models require running many model realizations. Parallel computation is necessary to do this in a reasonable time. However, no previous landscape evolution algorithm is able to leverage modern parallelism. Here, I describe an algorithm that can utilize the parallel potential of GPUs and many-core processors, in addition to working well in serial. The new algorithm runs 43x faster than the previous state of the art (70 s vs. 3,000 s on a 10,000 x 10,000 input) and exhibits sublinear scaling with input size. I also identify key techniques for multiple flow direction routing and for quickly eliminating landscape depressions and local minima. Complete, well-commented, easily adaptable source code for all versions of the algorithm is available on GitHub and Zenodo.
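For readers unfamiliar with the underlying model: fluvial incision in landscape evolution models is commonly expressed with the stream power law, dz/dt = U - K A^m S^n, where z is elevation, U is uplift rate, A is upstream drainage area, and S is local slope. The sketch below is a minimal, serial, explicit-Euler illustration of that law on a synthetic one-dimensional river profile; it is not the paper's implicit, multiple-flow-direction, GPU-parallel algorithm, and all parameter values and the drainage-area profile are hypothetical.

import numpy as np

def stream_power_step(z, area, dt, dx, K=1e-5, m=0.5, n=1.0, uplift=1e-3):
    """One explicit stream-power update, dz/dt = U - K * A**m * S**n, on a 1-D profile.

    Illustration only: the published algorithm is implicit, works on 2-D grids,
    routes flow with multiple flow directions, and runs on GPUs and many-core
    processors; none of that is reproduced here.
    """
    # Downstream slope by simple differencing; flow is assumed to run toward index 0.
    slope = np.maximum(np.diff(z, prepend=z[0]) / dx, 0.0)
    erosion = K * area**m * slope**n
    z_new = z + dt * (uplift - erosion)
    z_new[0] = z[0]  # hold the outlet at a fixed base level
    return z_new

# Usage on a synthetic 100-node river profile (all values made up for illustration).
x = np.arange(100) * 100.0          # node positions, 100 m spacing
z = np.linspace(0.0, 500.0, 100)    # initial linear ramp rising upstream
area = (x[::-1] + 100.0) * 1e4      # drainage area shrinking upstream (hypothetical)
for _ in range(1000):
    z = stream_power_step(z, area, dt=100.0, dx=100.0)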

Open access through arXiv.org: https://arxiv.org/abs/1803.02977.



Featured Fellows

Richard Barnes

Energy & Resources Group, EECS
BIDS Alum – Data Science Fellow