The Risk-Based Framework Emerging Across Jurisdictions
Regulators worldwide are converging on a risk-based approach: the requirements placed on an AI system depend on the risk it poses. High-risk applications (credit decisions, employment screening, medical diagnosis, law enforcement) face stricter requirements than low-risk applications (spam filters, AI assistants, content recommendation).
The practical implication: before building an AI system, classify its risk level. A high-risk classification triggers requirements for transparency, human oversight, accuracy documentation, bias testing, data governance, and incident reporting. These are not compliance burdens to be minimised — they are architectural requirements to be designed in.
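The classify-then-trigger pattern above can be sketched in code. This is a minimal illustration, not any statute's actual taxonomy: the use-case names, the two-tier split, and the conservative default to high risk are all assumptions made for the example.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"
    LOW = "low"

# Hypothetical mapping of use cases to risk tiers, following the
# examples in the text; a real classification must follow the
# applicable regulation's own categories.
USE_CASE_TIERS = {
    "credit_decisions": RiskTier.HIGH,
    "employment_screening": RiskTier.HIGH,
    "medical_diagnosis": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "spam_filtering": RiskTier.LOW,
    "content_recommendation": RiskTier.LOW,
}

# Controls triggered by a high-risk classification, as listed in the text.
HIGH_RISK_REQUIREMENTS = [
    "transparency",
    "human_oversight",
    "accuracy_documentation",
    "bias_testing",
    "data_governance",
    "incident_reporting",
]

def required_controls(use_case: str) -> list[str]:
    """Return the controls a system must design in for its use case."""
    # Unlisted use cases default to HIGH: a conservative assumption
    # for this sketch, not a rule from any regulation.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return list(HIGH_RISK_REQUIREMENTS) if tier is RiskTier.HIGH else []
```

The point of running classification first is that the returned controls feed system design from day one, rather than being bolted on before launch.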