
An Inherently Interpretable Explainable Artificial Intelligence Model for Project Risk Assessment

October 31, 01:10 pm - 01:50 pm AEST

Project risk management has become an essential aspect of project management, involving risk prediction, analysis, planning, and control to mitigate potential adverse impacts on project outcomes. Effective project risk management systems can significantly reduce the costs associated with risk management by identifying and mitigating risks early, allocating resources more efficiently, supporting better decision-making, and preventing potential pitfalls. Despite significant research in this field, most traditional risk management systems focus on managing large-scale data for risk prediction and manual statistical analysis, requiring human expertise in risk assessment and planning. In recent years, advances in computing power and the need for real-time prediction and decision-making over large-scale data have led to a surge in the use of artificial intelligence (AI)-based models. As decisions produced by AI models can substantially influence our socioeconomic lives, recent legislation and ethical concerns have made it essential to ensure their credibility and rationality. However, the AI models employed in project risk management are often too complex for a project's general stakeholders to understand. Providing a rational explanation of the decision-making process of these models can not only verify their ethical compliance but also offer stakeholders valuable, learnable insights. The development of suitable Explainable AI (XAI) models has therefore become a prevalent area of study across applications of AI. XAI models aim to increase transparency in the decision-making process of AI models by providing human-understandable explanations for the results produced. By making AI decisions more transparent, XAI models can help build trust in these systems and ensure they are used responsibly, fairly, and ethically.

This study emphasizes the importance of explainability and interpretability in AI models for project risk analysis. It conducts a systematic literature review to identify risk factors and uses the Interpretive Structural Modeling (ISM) method to establish a hierarchical network structure whose risk paths illustrate the interdependencies among risks. Next, to improve the model's accuracy, redundant correlations between factors are eliminated using the maximal information coefficient (MIC) while retaining their significant information. Finally, the study introduces a modified DECMSA (a differential evolution variant with covariance matrix self-adaptation) to determine the optimal combination of rule weights for the Belief Rule-Based (BRB) model. The outcomes of this study offer a deeper understanding of risk interrelationships and crucial risk factors in high-rise building construction projects.
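To illustrate the ISM step described above, the minimal Python sketch below builds the final reachability matrix from a direct-influence matrix and partitions the factors into hierarchy levels. The factor names and adjacency values are hypothetical placeholders for illustration only and are not taken from the study.

```python
import numpy as np

# Hypothetical risk factors and direct-influence matrix (illustrative
# only). Entry [i, j] = 1 means factor i directly influences factor j.
factors = ["design change", "schedule pressure", "material delay",
           "labour shortage", "cost overrun"]
A = np.array([
    [0, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
])

# Step 1: final reachability matrix = transitive closure of (A + I),
# obtained by repeated Boolean matrix multiplication until stable.
M = ((A + np.eye(len(A), dtype=int)) > 0).astype(int)
while True:
    M_next = ((M @ M) > 0).astype(int)
    if np.array_equal(M_next, M):
        break
    M = M_next

# Step 2: ISM level partitioning. A factor sits at the current top
# level when its reachability set equals the intersection of its
# reachability and antecedent sets; levelled factors are removed
# and the pass repeats until all factors are assigned.
remaining = set(range(len(factors)))
level_num = 0
while remaining:
    level_num += 1
    reach = {i: {j for j in remaining if M[i][j]} for i in remaining}
    ante = {i: {j for j in remaining if M[j][i]} for i in remaining}
    level = [i for i in remaining if reach[i] == reach[i] & ante[i]]
    print(f"Level {level_num}:", [factors[i] for i in level])
    remaining -= set(level)
```

Running the sketch on this toy matrix places "cost overrun" at the top level (an effect of the other factors) and "design change" and "material delay" at the bottom (root drivers), which is the hierarchical risk-path structure ISM is used to reveal.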

Lecturer and Program Coordinator, Master of Project Management, University of New South Wales
