Overcoming long Bayesian run times in integrated fisheries stock assessments

Disclaimer: The code links provided for this paper are external links. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by the author(s) of the code.

Please contact us if any link here is broken.

Authors Cole C. Monnahan, Trevor A. Branch, James T. Thorson, Ian J. Stewart, Cody S. Szuwalski
Journal/Conference Name ICES Journal of Marine Science
Paper Abstract Bayesian inference is an appealing alternative to maximum likelihood estimation, but estimation can be prohibitively long for integrated fisheries stock assessments. Here, we investigated potential causes of long run times, including high dimensionality, complex model structure, and inefficient Bayesian algorithms, for four US assessments written in AD Model Builder (ADMB), both custom-built and Stock Synthesis models. The biggest culprit for long run times was overparameterization, and run times were reduced from months to days by adding priors and turning off estimation for poorly informed parameters (i.e. regularization), especially for selectivity parameters. Thus, regularization is a necessary step in converting assessments from frequentist to Bayesian frameworks. We also tested the usefulness of the no-U-turn sampler (NUTS), a Bayesian algorithm recently added to ADMB, and the R package adnuts, which allows for easy implementation of NUTS and parallel computation. These additions further reduced run times and sampled posterior distributions better than the existing Bayesian algorithms in ADMB, and for both of these reasons we recommend using NUTS for inference. Between regularization, a faster algorithm, and parallel computation, we expect models to run 50–50 000 times faster for most current stock assessment models, opening the door to routine use of Bayesian methods for the management of fish stocks.
Date of publication 2019
Code Programming Language R

Copyright Researcher 2022