Before deploying any AI system in an organizational context, a structured risk assessment should be completed. The purpose is not to produce paperwork — it is to identify harms that might occur, estimate their likelihood and severity, assign accountability for monitoring them, and design mitigation strategies before deployment rather than after incidents.
Why Pre-Deployment Assessment Matters
Post-deployment risk management is almost always more expensive than pre-deployment risk management. Once an AI system is embedded in operational processes, modifying or stopping it creates disruption, cost, and political difficulty. The time to identify "this system could produce discriminatory outputs in approximately 15% of cases" is before deployment, not after those outputs have affected real people.
The Assessment Template
Section 1 — System Description: What does this system do? What decisions does it inform or automate? Who are the users? Who is affected by its outputs?
Section 2 — Data Assessment: What data does this system use? What is the quality and representativeness of that data? Are there known gaps or biases? How current is the data?
Section 3 — Harm Identification: What are the possible harms if the system produces incorrect outputs? Who would be affected? Are certain populations more likely to be harmed by errors?
Section 4 — Probability Assessment: What is the estimated error rate under realistic conditions? How were those estimates validated? Are there conditions under which error rates would be higher?
Section 5 — Accountability Assignment: Who is responsible for monitoring this system's performance? What is the escalation path when problems are detected? Who has authority to suspend the system?
Section 6 — Mitigation Plan: What human review processes exist for high-stakes decisions? What monitoring will detect performance degradation? What triggers a full system review?
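To make the six sections above concrete, they can be captured as a lightweight structured record so that assessments are comparable across systems and can be triaged by a simple likelihood × severity score. The sketch below is illustrative only, assuming Python 3.9+; every class name, field name, and the scoring formula are hypothetical choices, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    """Illustrative harm-severity scale; calibrate to your organization."""
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RiskAssessment:
    # Section 1 — System Description
    system_name: str
    decisions_informed: list[str]
    affected_populations: list[str]
    # Section 2 — Data Assessment
    data_sources: list[str]
    known_data_gaps: list[str]
    # Section 3 — Harm Identification
    identified_harms: list[str]
    # Section 4 — Probability Assessment
    estimated_error_rate: float  # validated estimate in [0.0, 1.0]
    # Section 5 — Accountability Assignment
    monitoring_owner: str
    suspension_authority: str
    # Section 6 — Mitigation Plan
    human_review_required: bool
    review_triggers: list[str]

    def risk_score(self, severity: Severity) -> float:
        """Likelihood x severity, usable only for ranking assessments."""
        return self.estimated_error_rate * severity.value
```

A record like this does not replace the narrative answers the template asks for; it simply forces each section to be filled in before deployment and gives reviewers a consistent ordering when many systems await sign-off.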
See the AI governance checklist and AI governance framework for complementary guidance.