🏥 The world's first comprehensive regulatory compliance framework for autonomous AI agents in healthcare environments
The Task Force for AI Agents in Healthcare presents a unified, lifecycle-centric regulatory framework designed specifically for autonomous AI agents in medical environments. The framework synthesizes the most effective elements from global regulatory bodies (FDA, EU AI Act, Health Canada, UK MHRA, NIST, and WHO) into a comprehensive standard that addresses the distinct risks and opportunities of agentic AI systems.
Unlike traditional AI/ML models that provide predictions, AI agents take autonomous actions. This fundamental difference requires a new regulatory paradigm focused on continuous governance, traceability, and human oversight.
AI agents are autonomous, goal-directed systems that:
- 🔄 Autonomous Decision-Making: make independent choices within defined parameters
- 🎯 Goal-Oriented Behavior: plan and execute sequences of actions to achieve specific objectives
- 🔌 Active Integration: interact directly with healthcare systems (EHRs, diagnostic tools, medical devices)
- 📈 Adaptive Learning: learn and evolve continuously from real-world clinical data
- 🤝 Human-AI Collaboration: augment human capabilities while maintaining human oversight
Traditional AI: Input → Prediction → Human Decision
AI Agents: Input → Autonomous Planning → Direct Action → Continuous Learning
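The contrast can be sketched in code. This is an illustrative toy, not a HAARF artifact; the class and method names (`TraditionalModel`, `Agent`, `EHRStub`, `plan`, `learn`) are assumptions made for the example:

```python
class TraditionalModel:
    """Traditional AI/ML: stops at a prediction; a human decides what to do."""
    def predict(self, features):
        return "high-risk" if features["score"] > 0.7 else "low-risk"

class Agent:
    """AI agent: plans a sequence of actions, executes them, learns from outcomes."""
    def __init__(self):
        self.memory = []

    def plan(self, observation):
        # Autonomous planning toward a goal, within defined parameters.
        if observation["score"] > 0.7:
            return ["order_follow_up_labs", "flag_for_clinician_review"]
        return ["log_routine_result"]

    def learn(self, action, outcome):
        # Continuous learning from real-world results.
        self.memory.append((action, outcome))

class EHRStub:
    """Stand-in for a clinical system the agent acts on directly."""
    def execute(self, action):
        return f"{action}:done"

def agent_step(observation, agent, ehr):
    """One agent cycle: plan, act on the system, learn from each outcome."""
    for action in agent.plan(observation):
        agent.learn(action, ehr.execute(action))
    return agent.memory
```

The traditional model returns a label and stops; the agent's loop touches the (stubbed) EHR directly and records feedback, which is exactly the behavior the framework's continuous-governance requirements target.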
Based on comprehensive analysis of global regulatory approaches, and on insights from our Healthcare AI Agents Working Group meetings with FDA industry committee stakeholders, the framework establishes eight verification categories for AI agent governance in healthcare. Recent FDA priority enhancements address regulatory pathways for tool-enabled agents.
C1: Risk & Lifecycle Assessment
Enhanced FDA-EU Model with SaMD Classification & 510(k) Pathway
- SaMD Risk Mapping: Four-tier classification system (Class I-IV) based on healthcare decision state and situation severity
- FDA 510(k) Predicate Analysis: Comprehensive substantial equivalence determination for clearance pathway
- Enhanced PCCP: FDA-style Predetermined Change Control Plans with clinical evidence requirements
- Clinical Evidence Scaling: Risk-based evidence requirements from analytical validation to randomized controlled trials
- Continuous Monitoring: Real-time surveillance for model drift and performance degradation
- Regulatory Sandboxes: UK MHRA-style testing environments for real-world evidence generation
C2: Model Passport & Traceability
Complete Lifecycle Documentation & Accountability
- Data Lineage: Full provenance tracking with Health Canada SGBA+ diversity analysis
- Model Lineage: Comprehensive documentation of algorithms, training, and updates
- Decision Lineage: Explainable AI audit trails for every agent output
- Machine-Interpretable: Automated compliance verification and regulatory reporting
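A machine-interpretable passport record might look like the sketch below. The field names and the validation rule are illustrative assumptions for this example, not a HAARF schema:

```python
import json

# Hypothetical passport record covering the three lineage dimensions above.
passport = {
    "agent_id": "sepsis-triage-agent",
    "data_lineage": {
        "sources": ["icu_ehr_2019_2023"],
        "sgba_plus_reviewed": True,   # Health Canada SGBA+ diversity analysis
    },
    "model_lineage": {
        "base_model": "gbm-v4",
        "training_run": "2024-03-01",
        "updates": [],
    },
    "decision_lineage": {
        "audit_log": "s3://audit-bucket/decisions/",
        "explainability": "shap",
    },
}

def validate_passport(record):
    """Minimal automated compliance check: all lineage sections must exist."""
    required = {"agent_id", "data_lineage", "model_lineage", "decision_lineage"}
    missing = required - record.keys()
    return (len(missing) == 0, sorted(missing))

print(json.dumps(passport, indent=2))
```

Because the record is plain JSON, compliance checks like `validate_passport` can run automatically in CI or as part of regulatory reporting pipelines.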
C3: Cybersecurity Framework
OWASP AISVS-Aligned AI-Specific Threat Protection
- AISVS Primary Reference: organizations align with the OWASP Artificial Intelligence Security Verification Standard (AISVS) as their primary cybersecurity reference
- Adversarial Robustness: Mandatory testing against prompt injection and evasion attacks
- Supply Chain Integrity: Full vetting of pre-trained models and dependencies
- Real-Time Threat Monitoring: Continuous anomaly detection for agent behavior
- Healthcare-Specific Adaptations: AISVS requirements adapted for clinical environments
C4: Human Oversight
Clinical Augmentation with Accountability
- Clear Accountability: Defined human responsibility chains throughout agent lifecycle
- Clinical Oversight: Human validation required for all critical healthcare decisions
- Human Factors Engineering: Mandatory usability testing to prevent automation bias
- Transparency Requirements: Agent capabilities and limitations clearly communicated to users
C5: Agent Registration & Identity
Comprehensive Agent Visibility & Control
- Healthcare Agent Cataloging: Centralized registries for all AI agents in clinical environments
- Identity Verification: Cryptographic authentication and healthcare IAM integration
- Lifecycle Management: Complete tracking from deployment to decommissioning
- Behavioral Monitoring: Anomaly detection and security event correlation
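A minimal registry sketch, assuming identity is anchored to a hash of the deployed artifact (the class and method names are invented for this example; production systems would integrate with healthcare IAM and signed artifacts):

```python
import hashlib

class AgentRegistry:
    """Centralized catalog sketch: identity = SHA-256 of the deployed artifact."""
    def __init__(self):
        self._agents = {}

    def register(self, agent_id, artifact_bytes):
        # Deployment: record a cryptographic fingerprint and lifecycle status.
        fingerprint = hashlib.sha256(artifact_bytes).hexdigest()
        self._agents[agent_id] = {"fingerprint": fingerprint, "status": "deployed"}
        return fingerprint

    def verify(self, agent_id, artifact_bytes):
        # Identity verification: the running artifact must match the registered one.
        rec = self._agents.get(agent_id)
        if rec is None:
            return False
        return rec["fingerprint"] == hashlib.sha256(artifact_bytes).hexdigest()

    def decommission(self, agent_id):
        # Lifecycle management: retired agents stay cataloged, flagged as such.
        self._agents[agent_id]["status"] = "decommissioned"
```

Any drift between the registered fingerprint and the running artifact (for example, an unreviewed model update) fails verification, which is the hook behavioral-monitoring and alerting systems would key off.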
C6: Autonomy Governance
Progressive Autonomy with Safety Boundaries
- Autonomy Classification: Standardized levels from co-pilot to full autonomous operation
- Progressive Implementation: Safety gates and clinical validation for autonomy advancement
- Human-Agent Authority: Clear decision boundaries and emergency override capabilities
- Multi-Agent Coordination: Governance for coordinated autonomous agent systems
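The autonomy ladder and its safety gates can be sketched as follows. The level names, gate conditions, and override behavior are illustrative assumptions, not HAARF's normative classification:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative autonomy ladder; the framework's actual levels may differ."""
    COPILOT = 1      # human initiates and approves every action
    SUPERVISED = 2   # agent acts, human reviews before commit
    AUTONOMOUS = 3   # agent acts directly within safety boundaries

def advance_autonomy(current, safety_gate_passed, clinical_validation_passed):
    """Progressive implementation: advance one level only when both gates pass."""
    if (safety_gate_passed and clinical_validation_passed
            and current < AutonomyLevel.AUTONOMOUS):
        return AutonomyLevel(current + 1)
    return current

def execute(action, level, human_override=False):
    """Human-agent authority: emergency override always wins; otherwise the
    level decides whether the action commits or waits for approval."""
    if human_override:
        return "halted-by-human"
    if level < AutonomyLevel.AUTONOMOUS:
        return "queued-for-human-approval"
    return "executed"
```

The point of the sketch is the ordering: autonomy is earned one validated step at a time, and the override path is unconditional regardless of level.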
C7: Bias, Equity & Population
Fairness Across All Patient Populations
- Demographic Representativeness: Comprehensive training data diversity requirements
- Algorithmic Bias Detection: Multi-metric fairness assessment and intersectional analysis
- Vulnerable Population Protection: Enhanced safeguards for at-risk patient groups
- Global Health Equity: Considerations for resource-limited and diverse healthcare settings
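One metric in a multi-metric fairness assessment is the demographic parity gap: the spread in positive-decision rates across groups. A minimal sketch (the function names and the choice of metric are this example's assumptions, not a framework mandate):

```python
from collections import defaultdict

def selection_rates(records):
    """Rate of positive agent decisions per demographic group.

    `records` is an iterable of (group, decision) pairs with decision in {0, 1}.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Max difference in selection rate across groups; 0.0 means exact parity."""
    rates = selection_rates(records).values()
    return max(rates) - min(rates)
```

In practice this would be one of several metrics (equalized odds, calibration, intersectional slices) evaluated per deployment site, since a single aggregate number can hide subgroup harm.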
🔧 C8: Tool Use & Integration Security ⭐ FDA Priority Enhancement
Regulatory Compliance for Tool-Enabled Agents
- Tool Authorization Controls: Role-based access with clinical appropriateness validation
- Medical Device Integration: FDA-compliant integration without altering device classification
- Clinical Tool Selection: Evidence-based tool selection with contraindication checking
- Cascading Failure Prevention: Circuit breakers and isolation mechanisms
- Regulatory Pathway Analysis: Tool combination impact on FDA submission requirements
- Quality System Integration: Manufacturing controls for tool-enabled agent development
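Cascading failure prevention typically leans on the circuit-breaker pattern: after repeated failures, calls to a tool are short-circuited so one bad integration cannot drag down the agent's other tools. A minimal sketch, with thresholds that are illustrative rather than framework-mandated values:

```python
import time

class CircuitBreaker:
    """Isolates a failing tool integration (circuit-breaker pattern)."""

    def __init__(self, failure_threshold=3, reset_after=30.0):
        self.failure_threshold = failure_threshold  # failures before tripping
        self.reset_after = reset_after              # seconds before a retry
        self.failures = 0
        self.opened_at = None

    def call(self, tool, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Open: fail fast without touching the unhealthy tool.
                raise RuntimeError("circuit open: tool isolated")
            # Half-open: allow one trial call after the cool-down.
            self.opened_at = None
            self.failures = 0
        try:
            result = tool(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip: isolate the tool
            raise
        self.failures = 0  # healthy call resets the count
        return result
```

Wrapping each external tool (lab-order API, imaging system, formulary lookup) in its own breaker keeps a fault local, which is the isolation property the C8 requirements call for.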
HAARF provides comprehensive mappings to major regulatory frameworks:
| Framework | Coverage | Key Benefits |
|---|---|---|
| FDA Digital Health | 84% | 510(k) pathway, PCCP templates, breakthrough device guidance |
| EU AI Act | 71% | High-risk classification, human oversight, transparency |
| Health Canada SGBA+ | 67% | Comprehensive equity analysis, vulnerable populations |
| UK MHRA | 60% | AI Airlock integration, real-world evidence |
| NIST AI RMF | 88% | Complete 4-function mapping (Govern/Map/Measure/Manage) |
| OWASP AISVS | 56% | Primary cybersecurity reference with healthcare adaptations |
| WHO GI-AI4H | 48% | Global health equity, ethics alignment |
| ISO/IEC 42001 | 71% | AI management system certification |
| IMDRF GMLP | 72% | International harmonization foundation |
Multi-Jurisdictional Value: Single HAARF compliance supports regulatory submissions across multiple jurisdictions, reducing regulatory burden and enabling global deployment.
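Because the mappings ship as JSON, coverage can be computed mechanically. The record shape below is a hypothetical sketch; the real schema in the `mappings/` files may differ:

```python
import json

# Hypothetical mapping-file shape: HAARF requirement -> target framework control.
sample = """{
  "framework": "NIST AI RMF",
  "entries": [
    {"haarf": "C1.1", "target": "MAP-1.1", "covered": true},
    {"haarf": "C8.2", "target": "MANAGE-2.4", "covered": true},
    {"haarf": "C7.3", "target": "GOVERN-5.1", "covered": false}
  ]
}"""

def coverage(mapping_json):
    """Fraction of mapped HAARF requirements marked as covered."""
    entries = json.loads(mapping_json)["entries"]
    return sum(e["covered"] for e in entries) / len(entries)
```

A script like this is how coverage figures such as those in the table above could be regenerated and kept in sync as mappings evolve.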
├── HAARF/
│ └── 1.0/
│ ├── en/ # Complete Framework (English)
│ │ ├── 0x10-C01-Risk-Lifecycle-Assessment.md
│ │ ├── 0x10-C02-Model-Passport-Traceability.md
│ │ ├── 0x10-C03-Cybersecurity-Framework.md
│ │ ├── 0x10-C04-Human-Oversight.md
│ │ ├── 0x10-C05-Agent-Registration-Identity.md
│ │ ├── 0x10-C06-Autonomy-Governance.md
│ │ ├── 0x10-C07-Bias-Equity-Population.md
│ │ ├── 0x10-C08-Tool-Use-Integration.md # FDA Priority ⭐
│ │ ├── 0x90-Appendix-A_Glossary.md
│ │ └── Categories.md
│ ├── mappings/ # Regulatory Framework Mappings
│ │ ├── fda-mapping.json
│ │ ├── eu-ai-act-mapping.json
│ │ ├── health-canada-mapping.json
│ │ ├── uk-mhra-mapping.json
│ │ ├── nist-ai-rmf-mapping.json
│ │ ├── owasp-aisvs-mapping.json
│ │ ├── who-gi-ai4h-mapping.json
│ │ ├── iso-iec-42001-mapping.json
│ │ ├── imdrf-gmlp-mapping.json
│ │ └── README.md
│ └── tools/ # Assessment and Validation Tools
- Risk Assessment: Start with C1 Risk Classification
- Security Foundation: Implement C3 Cybersecurity Framework
- Human Oversight: Establish C4 Clinical Integration
- Tool-Enabled Agents: Apply C8 Tool Use Security for autonomous systems
- Framework Overview: Read Categories.md
- Implementation Guide: Follow Using HAARF
- Regulatory Mapping: Review appropriate jurisdiction mapping
- Compliance Verification: Implement verification requirements by risk level
- Regulatory Mappings: Explore comprehensive framework mappings
- Submission Guidance: Use mapping evidence for regulatory submissions
- Multi-Jurisdictional Strategy: Leverage harmonized compliance approach
- FDA Priority: Focus on C8 Tool Use for tool-enabled agents
C8: Tool Use & Integration Security addresses FDA's highest priority feedback:
"The enhanced HAARF framework with C8 (Tool Use) represents the most comprehensive guidance available for medical device companies developing AI agents. It directly addresses the unique regulatory challenges posed by autonomous systems that can access and control multiple healthcare tools."
Regulatory Value:
- Breakthrough Device Pathway: Guidance for novel tool-enabled capabilities
- 510(k) Submissions: Tool integration without device classification alteration
- Quality Systems: Manufacturing controls for tool-enabled agents
- Post-Market Surveillance: Tool usage monitoring and adverse event reporting
HAARF provides three implementation levels scaled to risk:
- Level 1 (85 requirements): Foundational controls for basic healthcare AI agents
- Level 2 (144 requirements): Standard implementation for moderate-risk agents
- Level 3 (50 requirements): Advanced controls for high-risk autonomous agents
Risk-Based Approach: Higher risk agents require higher verification levels, optimizing regulatory burden while ensuring safety.
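The risk-to-level scaling can be expressed as a simple lookup. The mapping from SaMD class to verification level shown here is an assumption for illustration, not a normative HAARF table:

```python
# Hypothetical mapping from SaMD risk class (Class I-IV, per C1) to HAARF level.
LEVEL_FOR_RISK = {"I": 1, "II": 2, "III": 3, "IV": 3}

def required_level(samd_class):
    """Risk-based scaling: higher-risk agents require a higher verification level."""
    return LEVEL_FOR_RISK[samd_class]

def is_compliant(samd_class, implemented_level):
    """An implementation at or above the required level satisfies the scaling rule."""
    return implemented_level >= required_level(samd_class)
```

For example, a Class I administrative-triage agent would target Level 1, while a Class IV autonomous treatment agent would need the full Level 3 control set.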
We welcome contributions from healthcare professionals, AI developers, regulatory experts, and researchers:
- Issues: Report bugs, suggest improvements, request clarifications
- Pull Requests: Submit enhancements, corrections, and new content
- Discussions: Engage in framework development and implementation discussions
- Regulatory Feedback: Provide insights from regulatory submissions and approvals
All Inquiries: [email protected]
Technical support, regulatory guidance, security issues, and partnership inquiries
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
HAARF is designed for regulatory submissions and compliance across multiple jurisdictions while maintaining open source principles for global healthcare benefit.
HAARF is built upon the excellent foundational work of the OWASP Artificial Intelligence Security Verification Standard (AISVS) project, led by Jim Manico and Russ Memisyazici.
The OWASP AISVS team created a comprehensive 13-category verification structure covering:
- Training Data Governance & Bias Management
- Model Lifecycle Management & Change Control
- Infrastructure & Deployment Security
- Autonomous Orchestration & Agentic Action Security
- Adversarial Robustness & Attack Resistance
- Privacy Protection & Personal Data Management
- And 7 additional critical AI security categories
HAARF's Specialized Adaptation: We've adapted and extended the OWASP AISVS framework specifically for healthcare environments, adding regulatory compliance requirements, clinical safety protocols, and international harmonization mappings while maintaining the rigorous verification approach that makes AISVS so effective.
Credit Where Due: The structured documentation format, categorized verification requirements, and comprehensive security focus that define HAARF all stem from the excellent foundation provided by the OWASP AISVS community. We encourage healthcare AI teams to implement both frameworks: OWASP AISVS for general AI security and HAARF for healthcare-specific regulatory compliance.
HAARF represents unprecedented collaboration between 40+ international experts from FDA, EMA, Health Canada, UK MHRA, WHO GI-AI4H, NIST AI RMF, OWASP AISVS, and ISO/IEC 42001 communities.
⭐ Star this repository to support the world's first comprehensive healthcare AI agent regulatory framework!
Building trust in the AI-augmented future of healthcare through rigorous regulatory compliance and global harmonization.