AI Governance at the Speed of Change
Why Literacy Still Matters
This is an expanded version of my response to a LinkedIn post by Oliver Patel on the challenges facing AI governance specialists.
The Main Event
AI governance is entering a phase where the difficulty is no longer primarily legal or technical, but organisational. Capabilities are evolving faster than governance frameworks can be revised, and small design or deployment decisions can now have outsized, non‑linear consequences.
This challenge is not unique to AI. More than two decades ago, similar dynamics were observed in operational and strategic intelligence. In "The Wiki and the Blog: Toward a Complex Adaptive Intelligence Community" (2005), D. Calvin Andrus argued that large, centrally planned reorganisations consistently fail in fast‑moving environments. When change accelerates, the organisation itself must become adaptive.
The key insight from that work was simple but powerful: autonomy only works when it is supported by strong tradecraft. Shared standards, common methods, and a baseline level of competence create trust and allow decisions to be made locally without constant central intervention.
AI governance faces the same structural tension today. Organisations are deliberately decentralising AI through no‑code and low‑code platforms, internal assistants, and increasingly autonomous agents. This enables scale and productivity, but it also means that many risk‑relevant decisions are now made far from central governance teams.
Against this backdrop, proposals to downgrade Article 4 of the EU AI Act by reframing AI literacy as a recommendation rather than a requirement are understandable from a regulatory simplification perspective. However, they do not materially change the operational dependency on human competence.
Many obligations within the AI Act, including risk classification, human oversight, incident response, and post‑deployment monitoring, cannot be meaningfully fulfilled unless staff understand how AI systems behave, where they fail, and how risk manifests in real workflows. These are not abstract policy concerns; they arise in procurement decisions, HR processes, legal assessments, and day‑to‑day operational use.
This does not imply that every employee needs deep technical expertise. A more realistic model is layered AI literacy: a shared baseline of awareness for all staff; deeper competence for senior leadership and control functions such as HR and legal; and specialist tradecraft for technical and governance roles. Without this foundation, tools and policies are forced to compensate for gaps they were never designed to cover.
Training alone is not sufficient. Effective AI governance also depends on guardrails, monitoring, assurance, and clear decision rights. But without sufficient literacy, these mechanisms fail to operate as intended. Autonomy without competence does not create adaptive governance; it amplifies risk.
Regulatory simplification has value. But simplifying legal text does not remove the need for organisations to invest in the human capability required to operate safely in a high‑velocity, non‑linear AI environment. AI literacy, therefore, is not an aspirational extra. It is an operational prerequisite for governance that can keep up.
Stay informed, stay critical, and wherever possible, stay ahead.
Regards,
Tom Carter