-- AI decision gate
AI moves faster than accountability. The gate exists to close that gap.
Before deploying any AI system, every team must be able to answer all of the following with specificity:
-> Why are we using AI here, and not a simpler approach?
-> What decisions does this AI system make on behalf of users?
-> Is the AI's decision visible and explainable to the person it affects?
-> Can the outcome be corrected if the AI is wrong or biased?
-> Who — by name — is responsible when this AI causes harm?
-> Has the system been tested against adversarial and edge inputs?
-> Is there a human override at every critical decision point?
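The checklist above can be encoded as a pre-deployment review record, so an incomplete answer blocks the gate mechanically rather than by convention. A minimal sketch, assuming answers are collected as free text keyed by question (all names here are hypothetical, not part of any prescribed tooling):

```python
# Hypothetical gate-review helper: every question must have a
# specific (non-empty) answer before the gate can pass.
GATE_QUESTIONS = [
    "Why are we using AI here, and not a simpler approach?",
    "What decisions does this AI system make on behalf of users?",
    "Is the AI's decision visible and explainable to the person it affects?",
    "Can the outcome be corrected if the AI is wrong or biased?",
    "Who, by name, is responsible when this AI causes harm?",
    "Has the system been tested against adversarial and edge inputs?",
    "Is there a human override at every critical decision point?",
]

def unanswered_questions(answers: dict[str, str]) -> list[str]:
    """Return the questions that still lack a specific answer."""
    return [q for q in GATE_QUESTIONS if not answers.get(q, "").strip()]
```

A review then passes only when `unanswered_questions(answers)` comes back empty; anything else is a list of exactly what the team still owes the gate.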
Deployment is blocked when a review finds any of the following conditions:
-> No named human decision owner: CRITICAL
-> Decision outcome is non-reversible: CRITICAL
-> Decision invisible to the user it affects: CRITICAL
-> No human override path exists: HIGH
-> No adversarial or bias testing: HIGH
-> Training data undocumented or unreviewed: HIGH
-> Scope creep expected post-deployment: MEDIUM
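The severity labels above imply an escalation policy. A sketch of one such policy, under the assumption (not stated in the document) that CRITICAL and HIGH findings block deployment outright while MEDIUM findings pass with a recorded warning; the condition keys are hypothetical identifiers:

```python
from enum import Enum

class Severity(Enum):
    CRITICAL = 3
    HIGH = 2
    MEDIUM = 1

# Hypothetical mapping of the blocking conditions to their severities.
BLOCKING_CONDITIONS = {
    "no_named_owner": Severity.CRITICAL,
    "non_reversible_outcome": Severity.CRITICAL,
    "decision_invisible_to_user": Severity.CRITICAL,
    "no_human_override": Severity.HIGH,
    "no_adversarial_or_bias_testing": Severity.HIGH,
    "training_data_undocumented": Severity.HIGH,
    "scope_creep_expected": Severity.MEDIUM,
}

def evaluate_gate(findings: set[str]) -> str:
    """Assumed policy: any CRITICAL or HIGH finding blocks deployment;
    MEDIUM findings pass but are recorded as warnings."""
    severities = {BLOCKING_CONDITIONS[f] for f in findings}
    if Severity.CRITICAL in severities or Severity.HIGH in severities:
        return "BLOCKED"
    if Severity.MEDIUM in severities:
        return "PASS_WITH_WARNING"
    return "PASS"
```

The point of making the policy explicit is that a MEDIUM finding cannot silently absorb a CRITICAL one: the worst finding present decides the outcome.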
"Responsibility must remain with a human — always. Regardless of how much the AI contributes to the decision, the gate does not pass to the model."