Primary PhD Research

Designing Responsible AI Decision Tools for Uncertain Managerial Contexts

Author: Mohammad Ali Akeel
Institution: University of Portsmouth, Faculty of Business and Law
Version: 3.0
Year: 2024

Abstract

This research addresses a critical gap: the lack of validated Responsible AI Decision Tools (RAID-T) that support organizations in making high-impact managerial decisions under uncertainty while meeting governance standards and ethical expectations. Using a Design Science Research (DSR) methodology, the work develops an operational framework that integrates explainability, uncertainty communication, and ethical controls into AI-mediated decision support systems.

The RAID-T framework operationalizes responsible AI principles across five critical dimensions: Responsibility, Auditability, Interpretability, Dependability, and Traceability. This comprehensive approach ensures that AI systems not only deliver technical performance but also align with organizational values, regulatory requirements, and societal expectations.
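As a minimal illustration (not part of the framework itself), the five dimensions could be represented as a simple assessment record that yields an aggregate score and flags the weakest dimension for governance review. The class name, the 0-1 scoring scale, and the unweighted mean are all assumptions made for this sketch.

```python
from dataclasses import dataclass, fields

@dataclass
class RaidTAssessment:
    """Hypothetical scores (0.0-1.0) for each RAID-T dimension of a candidate tool."""
    responsibility: float
    auditability: float
    interpretability: float
    dependability: float
    traceability: float

    def overall(self) -> float:
        """Unweighted mean across the five dimensions (illustrative aggregation)."""
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

    def weakest(self) -> str:
        """Dimension with the lowest score -- a candidate focus for review."""
        return min(fields(self), key=lambda f: getattr(self, f.name)).name

assessment = RaidTAssessment(
    responsibility=0.8, auditability=0.6,
    interpretability=0.7, dependability=0.9, traceability=0.5)
print(round(assessment.overall(), 2))  # 0.7
print(assessment.weakest())            # traceability
```

In practice the framework would likely weight dimensions by context; the equal weighting here is only to keep the sketch self-contained.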

Research Aims

  • Develop and validate RAID-T for high-stakes decisions under uncertainty
  • Operationalize responsible AI principles in practical decision tools
  • Integrate explainability, uncertainty communication, and ethical controls
  • Bridge the gap between AI capabilities and governance requirements
  • Provide actionable guidance for organizations deploying AI in critical contexts

Theoretical Contributions

1. Operational Framework

Development of an operational framework for Responsible AI in uncertain contexts, moving beyond abstract principles to concrete implementation guidance.

2. XAI Extension

Extension of Explainable AI (XAI) and Uncertainty theories in decision support, providing new approaches to communicating AI reasoning and confidence levels.
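One way to make uncertainty communication concrete: instead of reporting a single point estimate, a decision tool can present an interval derived from an ensemble of model predictions. The function below is an illustrative sketch only (a deployed RAID-T tool would use properly calibrated intervals); the normal-approximation interval and the sample values are assumptions.

```python
import statistics

def communicate_prediction(samples, z=1.96):
    """Turn an ensemble of model predictions into a point estimate plus an
    approximate 95% interval for display alongside the recommendation.
    Illustrative only: assumes a roughly normal sampling distribution."""
    mean = statistics.fmean(samples)
    half = z * statistics.stdev(samples) / len(samples) ** 0.5
    return {"estimate": mean, "low": mean - half, "high": mean + half}

# e.g. demand forecasts from ten hypothetical ensemble members
report = communicate_prediction([102, 98, 105, 99, 101, 103, 97, 100, 104, 96])
print(f"Forecast {report['estimate']:.1f} "
      f"(95% interval {report['low']:.1f} to {report['high']:.1f})")
```

Presenting the interval rather than the bare estimate is the point: it lets a manager judge whether the model's confidence is adequate for the stakes of the decision.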

3. Human-AI Hybrids

Empirical advancement of Human-AI hybrid decision models, demonstrating how humans and AI can collaborate effectively in complex decision contexts.

4. Digital Governance

Applied integration of digital governance principles, showing how AI systems can be aligned with organizational and regulatory governance frameworks.
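To illustrate how traceability and auditability might be operationalized in such a governance alignment, the sketch below logs each AI-mediated decision as a tamper-evident record. The field names, model identifier, and hashing scheme are assumptions for this example, not a specification from the framework.

```python
import json
import hashlib
import datetime

def log_decision(model_id, inputs, output, operator):
    """Build one audit record for an AI-mediated decision.
    The SHA-256 digest covers the record contents, so later
    modification of a stored record is detectable."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "operator": operator,
    }
    payload = json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

# Hypothetical usage: a credit model refers a borderline case to a human
entry = log_decision("credit-risk-v3", {"score": 612},
                     "refer to human review", "analyst-07")
print(entry["digest"][:12])
```

A real deployment would also chain each digest to the previous record and store the log append-only; the point here is simply that traceability can be built into the decision path rather than bolted on afterwards.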

Methodology

Design Science Research (DSR) Approach

This research employs a rigorous Design Science Research methodology, following the six-stage development process established by Peffers et al. (2007). The DSR approach enables the creation of an innovative artifact (the RAID-T framework) while ensuring scientific rigor through systematic evaluation and refinement.

Six-Stage Development Process

  1. Problem Identification: Analysis of governance gaps in current AI systems
  2. Objectives Definition: Establishing requirements for responsible AI tools
  3. Design & Development: Creating the RAID-T framework and evaluation metrics
  4. Demonstration: Applying framework across 14 domain use cases
  5. Evaluation: Mixed-methods validation with quantitative and qualitative assessment
  6. Communication: Dissemination through academic publications and tools

Validation Approach

The framework undergoes comprehensive validation through:

  • Mixed-methods evaluation combining quantitative metrics and qualitative expert feedback
  • Simulation-based testing in business environments across multiple sectors
  • Comparative analysis with existing governance frameworks
  • Iterative refinement based on stakeholder input

Key Literature

Mikalef et al. (2022)

Responsible AI and the 'dark side' of AI - foundational work on AI risks

Vaia et al. (2022)

Digital governance mechanisms - framework for organizational AI governance

Rai et al. (2019)

Human-AI hybrids - theoretical foundation for collaborative decision-making

Barredo Arrieta et al. (2020)

Explainable AI taxonomy - comprehensive XAI classification

Bhatt et al. (2020)

Uncertainty as transparency - novel approach to AI confidence communication

Petratos (2021)

Misinformation and decision fragility - risks in AI-mediated decisions

Research Timeline

Year 1

Foundations & Requirements

Literature review, stakeholder analysis, framework conceptualization, initial requirement definition

Year 2

Development & Testing

RAID-T framework development, prototype implementation, pilot testing across initial domains

Year 3

Evaluation & Dissemination

Comprehensive evaluation, refinement based on feedback, publication preparation, tool release

Citation

Akeel, M. A. (2024). Designing Responsible AI Decision Tools for Uncertain Managerial Contexts (Version 3). PhD Research, University of Portsmouth, Faculty of Business and Law. Supervisors: Prof. Mark Xu, Dr. Salem Chakhar, Dr. M.A.S. Goraya.

Related Publications