To determine model skill, results are compared to appropriate observations using a variety of statistical methods, ranging from well-known metrics (e.g. r², root-mean-square error [RMSE]) to more esoteric analyses (e.g. Willmott skill score, principal component analysis). Some of the simpler statistics may not be appropriate for determining model skill (e.g. RMSE for non-normally-distributed variables), while some of the more complex statistics may be difficult to communicate to a wider audience. As a result, there is rarely consensus amongst modellers on the appropriate metrics to include in reports and/or papers for a given set of variables. In this session on communicating model uncertainty, participants will discuss appropriate methods for quantifying a model’s skill, and the techniques used to communicate those methods to the wider stakeholder audience. It is hoped that this will encourage more widespread reporting of model skill metrics, and contribute towards an industry standard for comparing model results against observations.
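As a concrete illustration of the metrics mentioned above, the following is a minimal sketch, assuming Python with NumPy and hypothetical paired arrays of model output and observations, of how RMSE, r², and the Willmott (1981) index of agreement might be computed; it is not prescriptive of any particular modelling workflow.

```python
import numpy as np

def rmse(model, obs):
    """Root-mean-square error between model output and observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.sqrt(np.mean((model - obs) ** 2))

def r_squared(model, obs):
    """Squared Pearson correlation (r²) between model output and observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.corrcoef(model, obs)[0, 1] ** 2

def willmott_d(model, obs):
    """Willmott (1981) index of agreement, d; d = 1 indicates perfect agreement."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    obs_mean = obs.mean()
    denom = np.sum((np.abs(model - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - np.sum((model - obs) ** 2) / denom

# Illustrative (made-up) paired model and observation values
obs = np.array([1.0, 2.1, 2.9, 4.2, 5.1])
model = np.array([1.2, 1.9, 3.1, 3.8, 5.4])
print(f"RMSE = {rmse(model, obs):.3f}, "
      f"r2 = {r_squared(model, obs):.3f}, "
      f"d = {willmott_d(model, obs):.3f}")
```

Reporting several complementary metrics of this kind, rather than a single statistic, is one way the comparison against observations can be made more transparent to a wider audience.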