"Ensuring outputs are clinically and ethically appropriate for ageing populations, reflecting societal values of equity, inclusivity, and wellbeing."— 50-Responsible_Generative_AI_RAIDT_V10.docx, Section 3.2
Evaluation Criteria
- Domain Appropriateness: Outputs must reflect professional standards and domain-specific requirements
- Ethical Alignment: Adherence to ethical norms, professional codes, and societal values
- Vulnerable Population Protection: Special attention to older adults, minorities, and at-risk groups
- Equity and Fairness: Non-discriminatory recommendations and unbiased decision-making
- Clinical/Technical Accuracy: Outputs must be factually correct and professionally sound
- Cultural Sensitivity: Respect for diverse contexts and cultural norms
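The six criteria above can be operationalised as a simple scoring rubric. The sketch below is a minimal, hypothetical illustration: the criterion names mirror the list, but the 1-5 scale and unweighted averaging are assumptions, not part of the published RAID-T framework.

```python
from dataclasses import dataclass

# Criterion names mirror the evaluation list above.
CRITERIA = [
    "domain_appropriateness",
    "ethical_alignment",
    "vulnerable_population_protection",
    "equity_and_fairness",
    "clinical_technical_accuracy",
    "cultural_sensitivity",
]

@dataclass
class RubricScore:
    scores: dict  # criterion -> score in [1, 5]

    def validate(self):
        missing = [c for c in CRITERIA if c not in self.scores]
        if missing:
            raise ValueError(f"unscored criteria: {missing}")
        for c, s in self.scores.items():
            if not 1 <= s <= 5:
                raise ValueError(f"{c}: score {s} out of range")

    def overall(self) -> float:
        """Unweighted mean across criteria (an assumption; a real
        deployment might weight safety-critical criteria more heavily)."""
        self.validate()
        return sum(self.scores.values()) / len(self.scores)

score = RubricScore({c: 4.5 for c in CRITERIA})
print(round(score.overall(), 2))  # 4.5
```

Requiring every criterion to be scored (rather than averaging over whatever was provided) prevents a system from silently skipping the protection-oriented dimensions.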
Research Findings
Method Performance Analysis
RAG: High thematic fidelity with trusted retrieval sources
- Best for policy and legal domains
- Strong source attribution
- Paired with CCRA3 and ECtHR corpora

PEFT: Domain-specific training reduces drift by 34%
- Reduced domain drift in finance/healthcare
- Adapter versioning support
- Strong RAID-T compliance

RLHF: Tone and value alignment
- Risk surfacing in clinical settings
- Reward-linked summaries
- Effective for retail and finance tone

Prompting: Lightweight but limited
- Effective for constrained tasks
- Fails in high-risk settings
- Needs moderation/scaffolding
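The moderation/scaffolding that prompt-only pipelines need can be as simple as constraining the task up front and screening the output afterwards. The sketch below is illustrative only: the risk-term patterns and refusal message are placeholder assumptions, not taken from the study.

```python
import re

# Placeholder patterns for topics a prompt-only pipeline should not
# handle unsupervised (assumption; a real deployment would use a
# curated taxonomy and a trained moderation model).
HIGH_RISK_PATTERNS = [r"\bdosage\b", r"\bdiagnos\w+\b", r"\bself-harm\b"]

def scaffold_prompt(task: str) -> str:
    """Wrap the user task in explicit constraints (the 'scaffolding')."""
    return (
        "You are restricted to the constrained task below.\n"
        "Do not give clinical advice; defer to a professional.\n"
        f"Task: {task}"
    )

def moderate(output: str) -> str:
    """Post-hoc moderation: withhold outputs touching high-risk topics."""
    for pat in HIGH_RISK_PATTERNS:
        if re.search(pat, output, re.IGNORECASE):
            return "[withheld: output touched a high-risk topic]"
    return output

print(moderate("Summary: the policy covers routine claims."))
print(moderate("Take a double dosage tomorrow."))
```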
"Influence methods such as RLHF and PEFT consistently supported domain-aligned outputs, especially when trained on domain-specific datasets. In finance and healthcare, PEFT helped reduce domain drift by 34% compared to prompt engineering alone." — 02-Generative_AI_V10.docx, Section 6.2.1
Governance Standards Alignment
EU AI Act Article 9
Requires a risk management system for high-risk AI systems, including healthcare; human oversight obligations are set out in Article 14
European Commission, 2024
ISO/IEC 42001
Emphasizes organizational accountability and ethical governance structures
ISO/IEC, 2025
Philosophical Foundations
Scholars such as Floridi et al. (2018) and Mittelstadt et al. (2019) have laid a philosophical foundation for RAI, articulating the need for principles such as explicability, fairness, and respect for human dignity to be embedded throughout the AI lifecycle.
AI Governance Mechanisms
- Ethics review boards and cross-disciplinary advisory panels
- Model documentation protocols (e.g., model cards, data statements)
- Algorithmic risk assessments and socio-technical impact evaluations
- Technical standards and certifications (e.g., ISO/IEC 42001, NIST AI RMF)
- Public regulations, such as the EU Artificial Intelligence Act and UK regulatory proposals
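Model documentation protocols such as model cards can be kept as structured, machine-readable records. The sketch below shows one possible shape; the field names and values are illustrative assumptions, not a formal schema.

```python
import json

# Minimal model-card sketch (in the spirit of "model cards" and "data
# statements" mentioned above). All names and values are hypothetical.
model_card = {
    "model": "clinical-summariser-v1",  # hypothetical model name
    "intended_use": "drafting discharge summaries for clinician review",
    "out_of_scope": ["autonomous diagnosis", "triage without oversight"],
    "training_data": "de-identified clinical notes (data statement linked)",
    "evaluation": {"framework": "RAID-T", "clinical_accuracy": "4.6-4.8/5.0"},
    "risks": ["hallucination", "bias against under-represented groups"],
    "oversight": "ethics review board sign-off required before deployment",
}

print(json.dumps(model_card, indent=2))
```

Serialising the card alongside each model version makes the documentation auditable by the review boards and risk assessments listed above.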
"Responsible AI demands a shift from techno-solutionism to socio-technical systems thinking, recognising that algorithmic impacts are always contextual and political." — AI Now Institute, 2021
AI Lifecycle Management Framework
- Data phase: Ethical data collection, anonymisation, fairness in representation
- Model phase: Bias detection, robustness checks, interpretable architecture design
- Deployment phase: Monitoring, red-teaming, feedback loops, retraining protocols
- Decommissioning phase: Sunset procedures, data retention audits, traceability
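The four phases above imply a gating discipline: a phase should not begin until the previous phase's obligations are discharged. A minimal sketch, assuming illustrative checklist items and a strictly sequential ordering (both assumptions, not prescribed by the framework):

```python
# Phase names follow the lifecycle framework above; checklist items
# and the gating logic are illustrative assumptions.
LIFECYCLE = {
    "data": ["ethical collection", "anonymisation", "representation audit"],
    "model": ["bias detection", "robustness checks", "interpretability review"],
    "deployment": ["monitoring", "red-teaming", "feedback loop", "retraining plan"],
    "decommissioning": ["sunset procedure", "retention audit", "traceability log"],
}
ORDER = ["data", "model", "deployment", "decommissioning"]

def may_enter(phase: str, completed: dict) -> bool:
    """A phase is reachable once every item in all earlier phases is done."""
    idx = ORDER.index(phase)
    return all(
        item in completed.get(p, set())
        for p in ORDER[:idx]
        for item in LIFECYCLE[p]
    )

done = {"data": set(LIFECYCLE["data"])}
print(may_enter("model", done))       # True
print(may_enter("deployment", done))  # False
```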
A Responsible AI ecosystem encompasses a set of principles and practices designed to ensure that AI systems are aligned with ethical values, social norms, and legal expectations.
"Healthcare was chosen as a critical test domain because clinical decisions are time-sensitive, data-dense, and often life-critical. Generative AI used in this space must be evaluated not only on fluency, but on factual consistency, medical relevance, and accountability readiness." — 02-Generative_AI_V10.docx, Section 6.3.1
Key Healthcare Results
- Clinical accuracy scores: 4.6-4.8/5.0 across methods
- Red flag surfacing significantly improved with RLHF
- Domain-specific medical alignment strong with PEFT
- RAG + PEFT combination best for hallucination reduction
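The hallucination-reduction role of RAG can be illustrated with a toy grounding check: flag output sentences with little lexical overlap against the retrieved sources. Real systems use entailment or claim-verification models; the token-overlap metric, the 0.5 threshold, and the example texts below are all illustrative assumptions.

```python
def tokens(text: str) -> set:
    """Crude tokenisation: lowercase words, punctuation stripped."""
    return {w.strip(".,").lower() for w in text.split()}

def ungrounded_sentences(output: str, sources: list, threshold: float = 0.5):
    """Return output sentences whose token overlap with the retrieved
    sources falls below the threshold (candidate hallucinations)."""
    src = set().union(*(tokens(s) for s in sources))
    flagged = []
    for sent in output.split(". "):
        toks = tokens(sent)
        if toks and len(toks & src) / len(toks) < threshold:
            flagged.append(sent)
    return flagged

sources = ["Metformin is first-line therapy for type 2 diabetes"]
out = ("Metformin is first-line therapy for type 2 diabetes. "
       "It also cures insomnia overnight")
print(ungrounded_sentences(out, sources))
```

The first sentence is fully covered by the source and passes; the second has no source support and is flagged for review.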
"By embedding RAID-T into empirical evaluation, we operationalise abstract governance principles into measurable criteria. Whereas frameworks such as the EU AI Act and ISO/IEC 42001 articulate requirements at regulatory or organisational levels, RAID-T translates these into task-level dimensions that can be applied directly to AI outputs." — 50-Responsible_Generative_AI_RAIDT_V10.docx, Section 6.1
The study advances theorisation of influence methods by demonstrating that stacking them delivers superior RAID-T alignment, suggesting that multi-method orchestration offers a stronger model of responsible AI. This supports emerging debates on "AI controllability" and "steerability," positioning influence methods as governance mechanisms rather than mere optimisation techniques.
Research Proposition
"P1: Responsibility. Stacked influence methods (e.g., LoRA + RAG + prompting) will generate outputs with higher clinical and ethical appropriateness than prompt-only baselines." — 50-Responsible_Generative_AI_RAIDT_V10.docx, Section 3.8