The RAID-T Framework
A comprehensive evaluation system for responsible AI governance across five critical dimensions
Responsibility
Ensuring ethical alignment, bias mitigation, and accountability in AI decision-making
Auditability
Comprehensive logging and documentation for regulatory compliance and system review
Interpretability
Making AI decisions understandable to stakeholders at all levels
RAID-T Performance Visualization
Understanding the Radar Chart
The RAID-T radar chart provides a visual assessment of AI system performance across all five governance dimensions. Each axis represents one dimension, with scores from 0 (center) to 5 (outer edge).
Our research indicates that systems scoring above 4.0 on every dimension demonstrate strong governance readiness and close alignment with regulatory compliance requirements.
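The readiness rule above can be sketched as a simple check over per-dimension scores. This is an illustrative helper, not part of the published framework: the function name and the example scores are hypothetical, and only three of the five dimension names appear in this page, so the example uses those three; the 4.0 threshold and the 0-5 scale follow the text above.

```python
def governance_ready(scores: dict[str, float], threshold: float = 4.0) -> bool:
    """Return True when every RAID-T dimension scores above the threshold.

    `scores` maps dimension names to scores on the 0-5 radar-chart scale;
    the 4.0 readiness threshold follows the research finding quoted above.
    """
    if any(not 0.0 <= s <= 5.0 for s in scores.values()):
        raise ValueError("RAID-T scores must lie on the 0-5 scale")
    return all(s > threshold for s in scores.values())

# Hypothetical scores for three of the five dimensions.
example = {
    "Responsibility": 4.3,
    "Auditability": 4.1,
    "Interpretability": 3.8,  # one dimension below 4.0 -> not governance-ready
}
print(governance_ready(example))
```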
Validated Research Domains
The RAID-T framework is domain-agnostic but has been extensively tested across 14 critical sectors. Each domain presents unique challenges and requirements for responsible AI governance.
Healthcare
Clinical note summarization, diagnostic support, treatment recommendations
Finance
Credit scoring, risk assessment, fraud detection, regulatory compliance
Education
Adaptive learning, automated grading, personalized feedback systems
Law
Case analysis, legal reasoning, precedent matching, contract review
Environment
Climate modeling, sustainability assessment, resource optimization
Public Policy
Policy analysis, impact assessment, stakeholder engagement
Crisis Management
Emergency response, resource allocation, risk prediction
Supply Chain
Logistics optimization, demand forecasting, inventory management
Cybersecurity
Threat detection, vulnerability assessment, incident response
Knowledge Generation
Research synthesis, hypothesis generation, literature mining
Productivity
Task automation, workflow optimization, decision support
Creativity
Content generation, design assistance, creative collaboration
Research & Development
Scientific discovery, experimental design, innovation support
Planning
Strategic planning, resource allocation, scenario modeling
Evaluation Methods
Four primary methods tested across multiple models and deployment scenarios
Prompt Engineering
Baseline Method: Zero-shot, few-shot, and chain-of-thought prompting strategies for influencing model behavior without model modification
- No training required
- Immediate deployment
- High transparency
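The three strategies named above differ only in how the prompt is assembled. The sketch below shows minimal templates for each; the task, the example Q/A pairs, and the template wording are hypothetical, and a real deployment would tune them for the target model.

```python
# Minimal prompt templates for the three strategies named above.
# All wording here is illustrative, not a prescribed format.

def zero_shot(question: str) -> str:
    """No examples: the model answers from instructions alone."""
    return f"Answer the question.\nQ: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    """Prepend worked Q/A pairs so the model imitates their pattern."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def chain_of_thought(question: str) -> str:
    """Elicit intermediate reasoning before the final answer."""
    return f"Q: {question}\nA: Let's think step by step."

# Hypothetical fraud-screening examples (see the Finance domain above).
prompt = few_shot(
    "Is this transaction fraudulent?",
    [("Card used in two countries within an hour?", "Likely fraud"),
     ("Regular monthly utility payment?", "Likely legitimate")],
)
print(prompt)
```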
LoRA/PEFT
Fine-tuning: Parameter-efficient fine-tuning for domain-specific adaptation with minimal computational overhead
- Domain specialization
- Resource efficient
- Preserves base capabilities
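The efficiency claim rests on LoRA's low-rank structure: the frozen base weight W gets an additive update B @ A where B and A are small, so only r*(d+k) parameters train instead of d*k. A toy NumPy sketch of that arithmetic (shapes and the alpha/r scaling follow the LoRA paper; this is a forward pass only, not a training loop):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, alpha = 64, 64, 4, 8   # layer dims, LoRA rank, scaling

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # zero init: update starts at 0

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass with the low-rank update merged on the fly."""
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((2, k))
# With B = 0 the adapted layer matches the frozen base layer exactly,
# which is why base capabilities are preserved at the start of tuning.
assert np.allclose(lora_forward(x), x @ W.T)

trainable = A.size + B.size
print(f"trainable params: {trainable} vs full fine-tuning: {W.size}")
```

Here rank 4 trains 512 parameters against 4,096 for full fine-tuning of the same layer; the gap widens sharply at transformer scale.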
RAG
Augmentation: Retrieval-Augmented Generation for grounding outputs in verified knowledge sources
- Source attribution
- Reduced hallucination
- Dynamic knowledge updates
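The grounding step can be illustrated with a toy retriever: embed the query and the documents, rank by cosine similarity, and prepend the best passage to the prompt. The bag-of-words "embedding" below stands in for a real encoder, and the document snippets are hypothetical:

```python
import numpy as np

# Hypothetical knowledge snippets standing in for a verified source store.
docs = [
    "ISO/IEC 42001 specifies an AI management system standard.",
    "The EU AI Act classifies AI systems by risk level.",
    "Radar charts display multivariate data on radial axes.",
]

vocab = {w: i for i, w in enumerate(
    sorted({w.strip(".,?") for d in docs for w in d.lower().split()}))}

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words vector; a real system would use a neural encoder."""
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        w = w.strip(".,?")
        if w in vocab:
            v[vocab[w]] += 1
    return v

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by cosine similarity to the query."""
    q = embed(query)
    sims = []
    for d in docs:
        v = embed(d)
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        sims.append(float(q @ v / denom) if denom else 0.0)
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

context = retrieve("Which risk level does the EU AI Act assign?")[0]
prompt = f"Context: {context}\nAnswer using only the context above."
```

Because the retrieved passage travels with the prompt, the generator can cite its source (attribution) and the store can be refreshed without retraining (dynamic updates).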
RLHF
Alignment: Reinforcement Learning from Human Feedback for preference alignment and safety
- Human preference alignment
- Safety optimization
- Iterative improvement
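At the heart of RLHF is a reward model trained on human comparisons; the Bradley-Terry model turns the gap between two responses' reward scores into the probability that a human prefers the first. A minimal sketch of that computation (the reward values are plugged-in numbers, not outputs of a trained model):

```python
import math

def preference_prob(reward_a: float, reward_b: float) -> float:
    """P(human prefers A over B) under the Bradley-Terry model:
    a logistic function of the reward gap."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

# Equal rewards -> no preference either way.
assert preference_prob(0.0, 0.0) == 0.5

# A response scored 2.0 vs one scored 0.5: preferred ~82% of the time.
p = preference_prob(reward_a=2.0, reward_b=0.5)
print(round(p, 3))  # 0.818
```

Training maximizes the likelihood of observed human choices under this model, and the resulting reward signal then drives the policy-optimization loop, which is where the iterative improvement listed above comes from.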
Method Effectiveness Across RAID-T Dimensions
Research Publications
Academic papers, frameworks, and documentation from the TrustGenAI research project
Designing Responsible AI Decision Tools for Uncertain Managerial Contexts
The main PhD research paper, presenting the RAID-T framework for developing and validating Responsible AI Decision Tools that support organizations in making high-impact managerial decisions under uncertainty while meeting governance standards and ethical expectations.
Taxonomy of AI for Decision-Support and Governance
A comprehensive taxonomy for classifying and assessing AI systems across five foundational dimensions, developed through Design Science Research methodology.
Generative AI: A Responsible AI Framework
Structured evaluation of prominent methods to influence generative AI systems including prompt engineering, fine-tuning, RLHF, and RAG.
Research Collaboration
Transform Your AI Systems with Research-Backed Governance
Partner with the TrustGenAI Research Centre to validate and enhance your AI deployments through our academically rigorous RAID-T framework
Public Sector
AI Accountability in Government
Collaborate on research to ensure public AI systems meet transparency requirements and serve citizens ethically
Academic Institutions
Advance Research Together
Join our research network to advance responsible AI through collaborative studies and shared insights
Enterprise
De-risk AI Investments
Participate in research studies to validate enterprise AI systems against comprehensive governance criteria
Healthcare
Deploy Trustworthy Clinical AI
Contribute to research ensuring AI in healthcare meets the highest standards of safety and ethics
About the Research
Lead Researcher
Mohammad Ali Akeel
PhD Researcher
University of Portsmouth
Faculty of Business and Law
Research Focus
Developing operational frameworks for responsible AI in uncertain decision contexts
- AI Governance
- Explainable AI
- Decision Support Systems
Supervisory Team
Professor Mark Xu
Director, CORL
Dr Salem Chakhar
Senior Research Fellow
Dr M.A.S. Goraya
Senior Lecturer
Compliance Standards
- EU AI Act
- ISO/IEC 42001
- NIST AI RMF
- GDPR & HIPAA
Research Impact
Cite This Research
Akeel, M. A. (2025). Responsible AI Taxonomy Research Centre. University of Portsmouth, Faculty of Business and Law. Available at: https://trustgenai.org