Abstract
As machine learning systems increasingly inform critical decisions in domains such as healthcare, finance, and risk management, understanding not only their predictive reliability but also the uncertainty surrounding their explanations becomes essential. This thesis investigates how different forms of uncertainty affect the interpretability and use of machine learning models in decision-making contexts. The research integrates analytical and behavioral perspectives across three studies. The first study develops a methodological framework based on rank aggregation to reconcile divergent feature-importance explanations arising from model multiplicity. The second proposes a decision framework to assist analysts in managing disagreement among models, translating analytical scenarios into practical strategies for model selection and interpretation. The third examines, through two behavioral experiments, how predictive uncertainty influences human reliance on algorithmic and hybrid advice. Overall, the thesis contributes to bridging the gap between explainable artificial intelligence and decision-making under uncertainty.