Use this checklist before deploying any AI system and as part of regular governance reviews of existing deployments. Each item is a concrete, verifiable governance criterion.
Before Deployment
- We have defined what decision or process this AI system is intended to support
- We have identified a named individual accountable for this system's performance
- We have assessed the quality, representativeness and recency of the data the system uses
- We have conducted a pre-deployment risk assessment identifying potential harms
- We have defined what human review process exists for high-stakes AI-assisted decisions
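The human-review item above can be made concrete as a simple routing gate. This is a minimal sketch under stated assumptions: the category names, the `0.90` confidence threshold, and the function `needs_human_review` are all illustrative, not part of any mandated framework.

```python
# Hypothetical human-review gate for AI-assisted decisions.
# HIGH_STAKES_CATEGORIES and CONFIDENCE_THRESHOLD are illustrative
# assumptions; each organisation would set its own.

HIGH_STAKES_CATEGORIES = {"credit", "hiring", "medical"}
CONFIDENCE_THRESHOLD = 0.90  # below this, a human reviews the output

def needs_human_review(category: str, model_confidence: float) -> bool:
    """Return True when an AI-assisted decision must be routed to a person."""
    if category in HIGH_STAKES_CATEGORIES:
        return True  # high-stakes decisions always get human review
    return model_confidence < CONFIDENCE_THRESHOLD
```

The design choice here is that high-stakes categories always route to a person regardless of model confidence; confidence alone gates only routine decisions.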
Transparency and Explainability
- People affected by AI-assisted decisions are informed that AI was involved
- The system can provide a meaningful explanation for specific outputs when required
- We have documented what the system does and does not do in plain language
- Staff who use the system have received training on its capabilities and limitations
- We maintain an audit log of decisions made with AI assistance in high-stakes contexts
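An audit log like the one described above can be as simple as an append-only JSON-lines file. The sketch below is illustrative: the field names (`decision_id`, `human_reviewer`, etc.) and the functions `make_record` / `append_record` are assumptions, not a required schema.

```python
# Illustrative append-only audit log for AI-assisted decisions.
# Field names are assumptions, not a mandated schema.
import json
from datetime import datetime, timezone

def make_record(decision_id, model_version, inputs_summary, output, reviewer=None):
    """Build one audit record capturing who/what/when for a decision."""
    return {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs_summary": inputs_summary,
        "output": output,
        "human_reviewer": reviewer,  # None if no human was in the loop
    }

def append_record(path, record):
    """Append the record as one JSON line; never overwrite prior entries."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Keeping the log append-only, with the model version and any human reviewer recorded per entry, is what makes after-the-fact investigation of a specific output possible.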
Ongoing Monitoring
- We have defined performance metrics and thresholds that trigger a review
- We monitor model performance on an ongoing basis (not just at deployment)
- We have a process for investigating when the system produces unexpected or harmful outputs
- We review performance disaggregated by relevant subgroups (not just overall accuracy)
- We have a defined schedule for model review and re-validation
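The monitoring items above (defined thresholds, disaggregated subgroup review) can be sketched in a few lines. The thresholds (`0.80` accuracy floor, `0.05` subgroup gap) and function names below are illustrative assumptions; real thresholds should come from the risk assessment.

```python
# Minimal sketch of disaggregated performance monitoring with review
# triggers. ACCURACY_FLOOR and MAX_SUBGROUP_GAP are assumed values.
from collections import defaultdict

ACCURACY_FLOOR = 0.80    # overall accuracy below this triggers review
MAX_SUBGROUP_GAP = 0.05  # best-minus-worst subgroup gap above this triggers review

def subgroup_accuracy(records):
    """records: list of (subgroup, correct: bool) pairs."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

def review_triggers(records):
    """Return the list of threshold breaches that should trigger a review."""
    accs = subgroup_accuracy(records)
    overall = sum(correct for _, correct in records) / len(records)
    gap = max(accs.values()) - min(accs.values())
    triggers = []
    if overall < ACCURACY_FLOOR:
        triggers.append("overall accuracy below floor")
    if gap > MAX_SUBGROUP_GAP:
        triggers.append("subgroup performance gap exceeds threshold")
    return triggers
```

The point of the subgroup gap check is the item above on disaggregated review: a system can pass its overall accuracy threshold while performing markedly worse for one subgroup, and only the disaggregated view surfaces that.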
Vendor and Procurement
- We have asked vendors how their models are trained, validated and monitored
- We have confirmed what data the vendor uses to train models and whether it includes our data
- Our contract with the vendor includes provisions for performance accountability
- We know how the vendor handles model incidents and what remedies exist
- We have assessed whether switching vendors would result in loss of model history or data
Related: See the full AI Governance Framework guide and the Responsible AI Procurement Checklist.