Somewhere in your organisation — or in an organisation you advise — there is a slide deck titled something like AI Governance Framework: Proposed Committee Structure. It has a proposed membership of eleven to fifteen people. It has a proposed meeting cadence of monthly. It has a proposed mandate that includes "oversight of AI strategy," "ethical review of AI deployments," and "alignment with regulatory requirements." It was presented to the board. The board approved it. The committee was formed.
The committee has met four times. It has produced two sets of minutes and one draft policy. The AI program, meanwhile, has shipped six new models, three vendor integrations, and one significant incident that the committee did not know about until three weeks after it happened.
This is not a hypothetical. It is a composite of every AI governance committee we have reviewed in the past two years. The details vary. The pattern does not.
We are not against governance. We are against the committee as the primary instrument of governance. The distinction matters, because the committee is not governance — it is the appearance of governance. And the appearance of governance, in a regulated environment, is more dangerous than no governance at all, because it creates the impression that someone is watching when no one is.
The anatomy of a governance committee.
The AI ethics committee, in its standard form, is an advisory body. It has no budget authority. It has no veto power over deployments. It has no mechanism to enforce its recommendations. When it produces a finding, the finding goes to an executive who may or may not act on it, on a timeline that is not specified, with no accountability for the outcome.
This is not an accident. It is the design. The committee was not built to make decisions — it was built to provide cover. When the regulator asks "what oversight did you have over this AI system?", the answer is "we had a committee." The committee is a liability management instrument dressed as a governance instrument.
The problem is that liability management and governance are not the same thing. Governance is the set of rules, accountabilities, and decision rights that determine how AI systems are built, deployed, and monitored. Liability management is the set of artefacts that demonstrate, after the fact, that someone was paying attention. A committee that meets monthly, produces minutes, and has no enforcement authority is a liability management instrument. It is not governance.
The failure modes are structural, not personal.
The people on AI governance committees are, in our experience, thoughtful and well-intentioned. The failure is not personal — it is structural. The committee fails in four predictable ways.
First, the committee can recommend but cannot decide. When the engineering team wants to deploy a new model, they do not need the committee's approval; they need their manager's approval. The committee is not in the deployment chain. It is adjacent to it. Adjacency is not oversight.
Second, the cadence is wrong. A twelve-person committee that meets monthly operates on a thirty-day decision cycle, while the AI deployment cycle at most mid-market organisations is two to four weeks. The committee is structurally incapable of reviewing deployments before they happen. It can only review them after the fact, which is not oversight but post-hoc documentation.
"Oversight of AI strategy" and "ethical review of AI deployments" are not actionable mandates. They are categories of concern. A committee with a broad mandate and no decision authority will spend its meetings discussing the categories of concern without resolving any of them. The minutes will be full of "the committee noted" and "the committee recommends." The AI will keep shipping.
Fourth, and most dangerous, the committee provides false assurance. Its existence allows the organisation to believe that AI governance is happening. The board has been told that a committee exists. The board stops asking questions. The committee, meanwhile, is meeting monthly to discuss matters it cannot act on. The gap between the appearance of governance and the reality of it is invisible until something goes wrong.
What actually happened, and what it means.
In 2018, Amazon reportedly shut down an experimental AI hiring tool after discovering that it systematically downgraded resumes from women. The tool had been trained on historical hiring data that reflected a decade of male-dominated hiring patterns. The model learned to penalise resumes that included the word "women's" — as in "women's chess club" — and to downgrade graduates of all-women's colleges.
The lesson usually drawn from this case is about bias in training data. That lesson is correct. But there is a second lesson that is less often drawn: the tool was reportedly in use for roughly two years before the bias was identified and the tool shut down. It was not identified by an oversight committee. It was identified by the team using it, who noticed the pattern in the outputs.
The Amazon case is not an argument for committees. It is an argument for the kind of continuous, operational monitoring that committees are not designed to do. The monitoring that would have caught this problem earlier was not a monthly meeting — it was a regular review of model outputs against demographic benchmarks, conducted by the people closest to the system, with a clear escalation path when anomalies appeared.
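To make that concrete: here is a minimal sketch of the kind of check such a monitoring cadence runs, assuming outcomes arrive as (group, was_selected) pairs from a review window. The data shape, the function names, and the 0.8 threshold (an echo of the common four-fifths rule) are illustrative assumptions, not a reconstruction of any specific tool.

```python
# A minimal sketch of continuous output monitoring: compare selection
# rates across demographic groups and flag any group that falls below
# a parity threshold. Data shape and threshold are illustrative.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, was_selected) pairs from one review window."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def parity_anomalies(outcomes, threshold=0.8):
    """Return groups whose selection rate is below threshold * the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values(), default=0)
    return [g for g, r in rates.items() if best and r / best < threshold]

# Run on a schedule; any flagged group goes straight to the named owner.
flagged = parity_anomalies([("men", True), ("men", True), ("women", False), ("women", True)])
if flagged:
    print(f"ANOMALY: {flagged} below parity threshold; escalate to named owner")
```

The point is not the particular statistic. The point is that the check runs on a schedule, in code, with a named person at the end of the escalation path.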
The committee is not the instrument for catching model drift, output bias, or deployment errors. The instrument for catching those things is operational monitoring — which is a cadence, not a meeting. — Field note, governance review
Three instruments that work.
Governance is not a committee. Governance is a set of rules, accountabilities, and decision rights. For AI systems, those rules, accountabilities, and decision rights need to be embedded in the deployment process — not adjacent to it.
The three instruments that work are: a named owner, a policy with teeth, and a cadence.
A named owner is a specific person — not a team, not a committee — who is accountable for a specific AI system. The named owner has the authority to approve changes to the system, the responsibility to review its outputs, and the accountability for its compliance with applicable policies. When something goes wrong, the named owner is the first call. This is not a burden; it is clarity. Most people who operate AI systems want to know what they are responsible for. The named owner model gives them that clarity.
A policy with teeth is a policy that is enforced at the deployment gate. Before a new AI system can go into production, it must have a named owner, a registry entry, a risk tier, and a model card. These are not recommendations — they are requirements. The deployment does not happen without them. This is the difference between a policy that describes an operating model and a policy that is the operating model.
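What that gate can look like in practice, as a minimal sketch: a CI step that fails, and therefore blocks the deployment, unless the system's registry entry carries every required field. The file layout and field names here are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of a deployment gate run in CI: the pipeline fails,
# and the deployment does not happen, unless the registry entry carries
# every required governance field. File and field names are illustrative.
import sys
import yaml  # PyYAML

REQUIRED = ("named_owner", "registry_id", "risk_tier", "model_card")

def missing_requirements(entry):
    return [field for field in REQUIRED if not entry.get(field)]

if __name__ == "__main__":
    with open(sys.argv[1]) as fh:            # e.g. registry/resume-screener.yaml
        entry = yaml.safe_load(fh) or {}
    missing = missing_requirements(entry)
    if missing:
        print(f"DEPLOYMENT BLOCKED: missing {', '.join(missing)}")
        sys.exit(1)                          # non-zero exit stops the pipeline
    print("Governance gate passed.")
```

The enforcement lives in the exit code. The policy is not a document the committee approved; it is a check the pipeline runs.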
A cadence is a regular meeting — weekly, forty-five minutes, four people — that reviews what changed in the registry, what incidents occurred, and what decisions need to be made. The cadence is not a strategy meeting. It is not a vendor review. It is an operational review of the AI estate, conducted by the people who operate it, with the authority to act on what they find.
Accountability without bureaucracy.
The committee's appeal is that it distributes accountability across a group. The problem is that distributed accountability is, in practice, no accountability. When eleven people are responsible for something, no one is responsible for it.
The alternative is a RACI — a Responsible, Accountable, Consulted, Informed matrix — that assigns clear roles for each AI system and each governance function. For a mid-market organisation, the RACI for AI governance is simple.
For each AI system: one person is Responsible (the named owner), one person is Accountable (the executive sponsor), and a small number of people are Consulted (legal, risk, the relevant business unit head) and Informed (the board observer, the CRO). The committee, in this model, does not exist as a standing body. The Consulted parties are engaged when a decision requires their input — not on a monthly schedule, but on a decision-triggered schedule.
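Expressed as data, the per-system RACI is small enough to live in the registry entry itself. A minimal sketch, with illustrative names and no claim to a standard schema:

```python
# A minimal sketch of a per-system RACI record. Exactly one Responsible
# and one Accountable person per system; Consulted parties are engaged
# per decision, Informed parties are notified. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class SystemRaci:
    system: str
    responsible: str                                  # the named owner
    accountable: str                                  # the executive sponsor
    consulted: list = field(default_factory=list)     # engaged when a decision needs them
    informed: list = field(default_factory=list)      # notified, never convened

raci = SystemRaci(
    system="resume-screener-v2",
    responsible="j.lee",
    accountable="cto",
    consulted=["legal", "risk", "hr-lead"],
    informed=["board-observer", "cro"],
)
```

Note what is absent: there is no standing-committee field. The Consulted list is activated by decisions, not by the calendar.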
This is not a radical idea. It is how every other operational decision in the organisation is made. The finance team does not have a monthly committee to review every invoice. It has a named approver, a policy, and a process. AI governance should work the same way.
The restructuring, not the abolition.
We are not recommending that you dissolve your AI governance committee. We are recommending that you restructure it.
The restructuring has three elements. First, reduce the membership to five people maximum. The committee should include the executive sponsor, the legal or compliance lead, the technical lead, the risk lead, and one rotating business unit representative. Eleven people cannot make decisions. Five people can.
Second, change the mandate from advisory to decision-making. The committee should have explicit authority over three things: approval of new high-risk AI deployments, review and sign-off on the quarterly registry audit, and escalation decisions when the operating meeting cannot resolve an incident. Everything else is handled by the named owner and the operating meeting.
Third, change the cadence from monthly to as-needed. The committee should not meet on a schedule — it should meet when there is a decision to make. In a well-run governance program, that will be roughly once per quarter. If it is meeting more often than that, the operating meeting is not working. If it is meeting less often than that, the program is healthy.
The committee, restructured this way, is not a governance instrument. It is an escalation instrument. It handles the decisions that are too significant or too complex for the named owner and the operating meeting to resolve. That is a legitimate function. It is just not the primary function of governance.
One test for your current structure.
There is a test we run at the start of every governance engagement. We ask the executive sponsor: if one of your AI systems produced a discriminatory output today, who would know about it by tomorrow morning, and what would they do?
In organisations with committees and no cadence, the answer is usually: it depends on whether someone noticed, and if they noticed, they would probably email the committee chair, who would put it on the agenda for next month's meeting.
In organisations with named owners and operating meetings, the answer is: the named owner would know because the monitoring dashboard would have flagged it, and they would escalate to the executive sponsor within four hours, and the system would be suspended pending review.
The difference between those two answers is not a matter of committee size or meeting frequency. It is a matter of whether governance is embedded in the operation or adjacent to it. The committee is adjacent. The named owner, the cadence, and the policy are embedded.
That is the whole argument against the AI committee. Not that oversight is wrong — oversight is essential. But that the committee, in its standard form, is not oversight. It is the appearance of oversight. And in a world where AI systems are making consequential decisions every day, the appearance of oversight is not enough.
