Understanding model behaviour is a key step in building better models. Model identifiability investigates how well the available data inform the values of the parameters (and, more broadly, the model structure), indicating how confident we can be in the adopted values and the resulting model predictions. This overlaps with uncertainty assessment, which largely examines the impacts of uncertainty in model parameter values, structure and applicability on model predictions. Tools which assist in evaluating model behaviour include a range of sensitivity analysis methods (e.g. Sobol', Morris, FAST, VARS), response surface modelling, methods for developing surrogate models (e.g. Active Subspaces, Polynomial Chaos Expansion) and methods for investigating parameter uncertainty (e.g. Bayesian inference). Although a broad range of tools is currently available for undertaking such studies, these tools tend not to be used in the majority of modelling applications. We invite papers which present new approaches to addressing these problems, compare different techniques, or apply such tools to understanding the behaviour of a particular model. Contributions which focus more on the impact of uncertainty on decision making may prefer to submit to the session on "Handling uncertainty, trust and model accuracy to resolve contested decisions".
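To illustrate the kind of analysis the session covers, the following is a minimal sketch of variance-based (Sobol') first-order sensitivity indices computed with the Saltelli estimator in plain NumPy. The model `f(x) = 2*x1 + x2` is a hypothetical toy example chosen because its first-order indices are known analytically (S1 = 0.8, S2 = 0.2); a real application would substitute its own model and input distributions, and would typically use a dedicated package rather than this hand-rolled estimator.

```python
import numpy as np

def f(x):
    # Toy additive model: analytical first-order indices are 0.8 and 0.2
    return 2.0 * x[:, 0] + x[:, 1]

def sobol_first_order(model, d, n, rng):
    """Estimate first-order Sobol' indices for `model` with `d`
    independent U(0,1) inputs, using n base samples per matrix."""
    A = rng.random((n, d))          # two independent sample matrices
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]         # A with column i taken from B
        # Saltelli-style estimator of V_i = Var(E[Y | X_i])
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

rng = np.random.default_rng(0)
S = sobol_first_order(f, d=2, n=200_000, rng=rng)
print(S)  # close to [0.8, 0.2]
```

For an additive model like this one the first-order indices sum to one; a shortfall in that sum for a real model signals interaction effects, which total-order indices would then quantify.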