The increasing number of publications on environmental applications of artificial intelligence and machine learning (ML) techniques clearly demonstrates the wide acceptance of these modelling tools by research communities. However, the black-box nature of the resulting models hides how results were generated and allows for very limited, if any, explanation and generalization of those results. To bridge this gap, explanation components can be integrated into ML techniques. The session invites original contributions on pre-modelling explainability, explainable modelling, and post-modelling explainability approaches and techniques for the advanced analysis of environmental data. These techniques include, but are not limited to: exploratory data analysis, dataset summarization, explainable feature engineering, hybrid models, joint prediction and explanation, backward propagation and proxy models, intelligent data analysis and its combination with process-based simulations, and exploratory and confirmatory analysis. Hybrid frameworks and techniques, success stories of their application, and lessons learned are also welcome.
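To illustrate the kind of post-modelling explainability the session targets, the sketch below computes permutation feature importance for a stand-in black-box model. The model, data, and function names are all hypothetical assumptions for illustration, not part of the session description; this is a minimal stdlib-only Python sketch, not a prescribed method.

```python
import random

def black_box(x):
    # Hypothetical fitted model standing in for an opaque ML model:
    # depends strongly on feature 0, weakly on feature 1, not at all on feature 2.
    return 3.0 * x[0] + 0.5 * x[1]

def mse(model, X, y):
    # Mean squared error of the model's predictions against targets.
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, seed=0):
    # Post-hoc explanation: shuffle one feature column at a time and
    # report how much the model's error grows relative to the baseline.
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    importances = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        rng.shuffle(col)
        X_perm = [list(x) for x in X]
        for i, v in enumerate(col):
            X_perm[i][j] = v
        importances.append(mse(model, X_perm, y) - baseline)
    return importances

# Synthetic data: 3 features, target generated by the model plus small noise.
rng = random.Random(1)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [black_box(x) + 0.01 * rng.gauss(0, 1) for x in X]

imp = permutation_importance(black_box, X, y)
```

Here a large importance for feature 0 and a near-zero importance for the unused feature 2 expose the model's internal reliance on each input, without inspecting its internals.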
Key topics: Explainable modelling, Hybrid data analysis, Machine learning, Environmental sustainability