Designing Responsible AI Decision Tools for Uncertain Managerial Contexts
Abstract
This research addresses a critical gap: the development and validation of Responsible AI Decision Tools (RAID-T) that support organizations in making high-impact managerial decisions under uncertainty while meeting governance standards and ethical expectations. Using a Design Science Research (DSR) methodology, the work develops an operational framework that integrates explainability, uncertainty communication, and ethical controls into AI-mediated decision support systems.
The RAID-T framework operationalizes responsible AI principles across five critical dimensions: Responsibility, Auditability, Interpretability, Dependability, and Traceability. This comprehensive approach ensures that AI systems not only deliver technical performance but also align with organizational values, regulatory requirements, and societal expectations.
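One way the five RAID-T dimensions could be operationalized is as a structured assessment record with a conservative aggregate score. The sketch below is illustrative only; the names (`RaidTAssessment`, `RAID_T_DIMENSIONS`) and the min-based aggregation rule are assumptions for this example, not drawn from the actual RAID-T toolkit.

```python
from dataclasses import dataclass

# Hypothetical constant: the five RAID-T dimensions from the framework.
RAID_T_DIMENSIONS = ("responsibility", "auditability", "interpretability",
                     "dependability", "traceability")


@dataclass
class RaidTAssessment:
    """Scores in [0, 1] for each RAID-T dimension of an AI decision tool."""
    scores: dict

    def __post_init__(self):
        missing = set(RAID_T_DIMENSIONS) - set(self.scores)
        if missing:
            raise ValueError(f"missing dimensions: {sorted(missing)}")
        if any(not 0.0 <= v <= 1.0 for v in self.scores.values()):
            raise ValueError("scores must lie in [0, 1]")

    def overall(self) -> float:
        # Conservative aggregate (an assumed design choice): a tool is
        # only as responsible as its weakest dimension.
        return min(self.scores[d] for d in RAID_T_DIMENSIONS)


assessment = RaidTAssessment(scores={
    "responsibility": 0.80, "auditability": 0.90, "interpretability": 0.70,
    "dependability": 0.85, "traceability": 0.90,
})
print(assessment.overall())  # → 0.7
```

A min-based aggregate (rather than a mean) reflects the idea that strong auditability cannot compensate for weak interpretability; a real implementation might instead use weighted or context-dependent aggregation.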
Research Aims
- Develop and validate RAID-T for high-stakes decisions under uncertainty
- Operationalize responsible AI principles in practical decision tools
- Integrate explainability, uncertainty communication, and ethical controls
- Bridge the gap between AI capabilities and governance requirements
- Provide actionable guidance for organizations deploying AI in critical contexts
Theoretical Contributions
1. Operational Framework
Development of an operational framework for Responsible AI in uncertain contexts, moving beyond abstract principles to concrete implementation guidance.
2. XAI Extension
Extension of Explainable AI (XAI) and Uncertainty theories in decision support, providing new approaches to communicating AI reasoning and confidence levels.
3. Human-AI Hybrids
Empirical advancement of Human-AI hybrid decision models, demonstrating how humans and AI can collaborate effectively in complex decision contexts.
4. Digital Governance
Applied integration of digital governance principles, showing how AI systems can be aligned with organizational and regulatory governance frameworks.
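To make contribution 2 concrete: communicating uncertainty typically means reporting a predictive range alongside the point estimate rather than a bare number. The function below is a hypothetical sketch of such a summary; the name `communicate_estimate` and the sample-quantile approach are assumptions for illustration, not the thesis's actual method.

```python
import statistics


def communicate_estimate(samples, level=0.9):
    """Summarise a set of simulated outcomes as a hedged statement
    (median plus an empirical interval) instead of a point estimate."""
    samples = sorted(samples)
    n = len(samples)
    # Empirical quantile indices for the requested coverage level.
    lo_idx = int((1 - level) / 2 * (n - 1))
    hi_idx = int((1 + level) / 2 * (n - 1))
    point = statistics.median(samples)
    return (f"Estimated value {point:.1f}; "
            f"{int(level * 100)}% of simulated outcomes fall between "
            f"{samples[lo_idx]:.1f} and {samples[hi_idx]:.1f}.")


print(communicate_estimate([10, 12, 11, 13, 9, 14, 10, 12, 11, 15]))
```

Phrasing outputs this way surfaces the model's confidence to the decision maker, which is the transparency role that the framework assigns to uncertainty communication.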
Methodology
Design Science Research (DSR) Approach
This research employs a rigorous Design Science Research methodology, following the six-stage development process established by Peffers et al. (2007) and grounded in the DSR guidelines of Hevner et al. (2004). The DSR approach enables the creation of innovative artifacts (the RAID-T framework) while ensuring scientific rigor through systematic evaluation and refinement.
Six-Stage Development Process
- Problem Identification: Analysis of governance gaps in current AI systems
- Objectives Definition: Establishing requirements for responsible AI tools
- Design & Development: Creating the RAID-T framework and evaluation metrics
- Demonstration: Applying framework across 14 domain use cases
- Evaluation: Mixed-methods validation with quantitative and qualitative assessment
- Communication: Dissemination through academic publications and tools
Validation Approach
The framework undergoes comprehensive validation through:
- Mixed-methods evaluation combining quantitative metrics and qualitative expert feedback
- Simulation-based testing in business environments across multiple sectors
- Comparative analysis with existing governance frameworks
- Iterative refinement based on stakeholder input
Key Literature
- Responsible AI and the 'dark side' of AI - foundational work on AI risks
- Digital governance mechanisms - frameworks for organizational AI governance
- Human-AI hybrids - theoretical foundation for collaborative decision-making
- Explainable AI taxonomy - comprehensive classification of XAI methods
- Uncertainty as transparency - novel approach to communicating AI confidence
- Misinformation and decision fragility - risks in AI-mediated decisions
Research Timeline
Foundations & Requirements
Literature review, stakeholder analysis, framework conceptualization, initial requirement definition
Development & Testing
RAID-T framework development, prototype implementation, pilot testing across initial domains
Evaluation & Dissemination
Comprehensive evaluation, refinement based on feedback, publication preparation, tool release
Access Resources
Full Paper PDF
Download the complete research paper with all appendices
Presentation Slides
Academic presentation slides for conferences and seminars
Research Data
Access validation datasets and experimental results
RAID-T Toolkit
Implementation tools and evaluation templates
Source Code
Python implementation of RAID-T evaluation framework
Supplementary Materials
Additional documentation and supporting materials
Citation
Akeel, M. A. (2024). Designing Responsible AI Decision Tools for Uncertain Managerial Contexts (Version 3). PhD Research, University of Portsmouth, Faculty of Business and Law. Supervisors: Prof. Mark Xu, Dr. Salem Chakhar, Dr. M.A.S. Goraya.