I Spent Six Months Studying AI Regulations. Here Is What Matters.
In early 2025, I took six months off. Not a holiday. I sat down and read AI regulations. The actual text. EU AI Act, all 180-odd pages. DORA. NIS2. The UK AI Cyber Security Code of Practice. I sat the ISACA exams: CISM, CGEIT, CRISC. I did the ISO 27001 Lead Auditor qualification. It was not fun. But I had a suspicion that the regulatory frame was getting concrete enough to build against, and I wanted to know for myself.
Most CTOs I talk to know "AI regulations are coming." Very few have read the text. They've read summaries, LinkedIn posts, vendor slide decks. The summaries are not wrong, exactly. They just miss the parts that matter for people who build things.
After six months, here is where I landed. Three things actually matter for engineering leaders right now. The rest is noise, or at least noise for now.
1. EU AI Act, Article 14: human oversight
Everyone talks about the EU AI Act's risk classification. High risk, limited risk, unacceptable risk. That bit is straightforward. The part most people skip is Article 14, which deals with human oversight of automated decisions.
Article 14 requires that high-risk AI systems be designed so a human can effectively oversee them. Not rubber-stamp them. Effectively oversee. The distinction that matters here is between "human-in-the-loop" and "human-on-the-loop." In-the-loop means a human approves every decision before it executes. On-the-loop means the system can act, but a human can intervene, and the human must have enough information to know when to intervene.
That second bit is the hard part. It means your system needs to surface why it made a decision, in terms a non-technical person can understand, fast enough that intervention is still possible. If your model makes a credit decision and the explanation arrives three days later in a JSON log file, you have not met Article 14. You have built a filing cabinet.
For engineers this means explainability is not a nice-to-have research project. It is a production requirement with a legal deadline. August 2026 for most high-risk systems. If your AI makes decisions about people (hiring, lending, insurance, medical triage), you need a human oversight interface that actually works. I have seen teams build beautiful explainability dashboards that nobody ever opened. That does not count.
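To make that concrete, here is a minimal sketch of the kind of record an oversight interface would surface. The field names are mine, not the Act's; the point is that the plain-language reason and the intervention deadline sit next to each other, and that unreviewed decisions get escalated before the window closes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative only: field names are assumptions, not terms from the AI Act.
@dataclass
class OversightItem:
    decision_id: str
    outcome: str            # what the system decided, in plain words
    reason: str             # why, written for a non-technical reviewer
    model_version: str
    decided_at: datetime
    intervene_by: datetime  # after this point, intervention is no longer practical
    reviewed_by: str | None = None

def closing_unreviewed(items: list[OversightItem], now: datetime,
                       warning: timedelta = timedelta(hours=4)) -> list[OversightItem]:
    """Surface decisions whose intervention window is about to close with no review."""
    return [i for i in items if i.reviewed_by is None and now >= i.intervene_by - warning]
```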
2. DORA: it is not just for banks
DORA is the Digital Operational Resilience Act. It applies to financial services in the EU. Most tech leaders outside FS have ignored it. This is a mistake.
DORA requires financial institutions to manage ICT third-party risk. If your company provides any technology service to a bank, an insurer, a payment processor, or a fund manager, you are in scope as an ICT third-party service provider, and if enough of the sector depends on you, the European supervisory authorities can designate you a "critical" one and oversee you directly. That includes your AI infrastructure. Your ML platform. Your data pipeline. If a bank uses your model or your data to make decisions, DORA applies to you whether you think of yourself as a financial services company or not.
What does that mean in practice? Incident reporting within four hours. Regular resilience testing, including your AI systems. Contractual obligations around audit access, which means the bank's regulators can audit your systems. Exit strategies documented in advance so the bank can replace you if you fail.
I have talked to CTOs who discovered they were in scope for DORA because one financial services client made up 8% of their revenue. Nobody in their organisation had flagged it. They found out when the client's compliance team sent a 40-page questionnaire with a two-week deadline.
If any of your customers are in financial services, check your contracts. You may already have DORA obligations you have not started meeting.
3. UK AI Cyber Security Code of Practice 2024: the one nobody mentions
The UK published its AI Cyber Security Code of Practice through DSIT in late 2024. I have been in rooms full of senior tech people in London and nobody had read it. Some had not heard of it.
It is not legally binding yet. But "voluntary code of practice" in the UK has a pattern. It becomes an industry expectation. Then it becomes a procurement requirement. Then it becomes regulation. The UK followed exactly this path with the IoT security code of practice, which became PSTI law in 2024. The timeline from voluntary code to law was about five years.
The AI code covers the full lifecycle: design, development, deployment, maintenance, end of life. It is more specific than most people expect. It talks about supply chain security for model weights. It talks about adversarial testing. It talks about monitoring for model drift in production. If you are building AI systems in the UK, this is the closest thing to a technical specification of what "secure AI" means.
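To give a flavour of what monitoring for model drift means in practice, here is a minimal sketch of one common check: a population stability index over model scores, comparing live traffic against a reference window. The code of practice does not prescribe this; the quantile binning and the usual 0.2 alert threshold are conventions, not requirements.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Compare live model scores against a reference window.

    A common rule of thumb treats PSI above roughly 0.2 as meaningful drift;
    both the threshold and the binning here are conventions, not requirements.
    """
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    ref_frac = np.histogram(reference, edges)[0] / len(reference) + 1e-6
    live_frac = np.histogram(live, edges)[0] / len(live) + 1e-6
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))
```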
The 13 principles are not vague statements about "being responsible." They are actionable: secure your supply chain, monitor your system's behaviour in production, maintain your system, including retraining and patching. Read it. It will take you about forty minutes. It is the best regulatory document I read in six months, because it was clearly written by people who understand what building software actually involves.
What matters less than people think
Two things get a lot of airtime that I think matter less for engineering leaders right now.
General-purpose AI model rules. The EU AI Act has provisions for foundation models and general-purpose AI. Unless you are training and distributing a foundation model, which you almost certainly are not, these rules do not apply to you directly. They apply to OpenAI, Anthropic, Google, Meta. Not to you fine-tuning their models for your use case. Stop worrying about GPAI obligations that are someone else's problem.
Model registration and national AI registers. There is talk in several jurisdictions about mandatory model registration. It may happen. But it is years away from being enforceable, and when it arrives it will be an administrative task, not an architectural one. Do not redesign your systems around a registration scheme that does not exist yet.
What to build now, before August 2026
If I had to pick three things to build into a platform right now, it would be these.
Decision lineage. Every AI decision your system makes should be traceable: what data went in, what model version produced it, what the output was, what confidence score it had, and whether a human reviewed it. This is not the same as logging. Logging records what happened. Decision lineage records why, in a way that can be audited months later by someone who was not there when it happened.
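Here is a rough sketch of what a lineage event might capture. The schema is illustrative, not a standard; the useful property is that everything needed to reconstruct the decision is written at decision time, to an append-only sink, rather than reassembled from logs later.

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative schema: field names are assumptions, not a standard.
@dataclass
class DecisionLineageEvent:
    decision_id: str
    decision_type: str      # e.g. "loan_rejection"
    input_refs: list[str]   # pointers to the exact input records used
    model_version: str
    output: str
    confidence: float
    human_reviewed: bool
    reviewer: str | None
    decided_at: str         # ISO 8601 timestamp

def record_decision(event: DecisionLineageEvent, sink) -> None:
    """Append the event to an immutable, append-only sink (object store, ledger table)."""
    sink.write(json.dumps(asdict(event)) + "\n")

# One event per decision, created at the moment the decision is made.
event = DecisionLineageEvent(
    decision_id=str(uuid.uuid4()),
    decision_type="loan_rejection",
    input_refs=["applications/2026/000123"],
    model_version="credit-risk-4.2.1",
    output="rejected",
    confidence=0.87,
    human_reviewed=False,
    reviewer=None,
    decided_at=datetime.now(timezone.utc).isoformat(),
)
```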
Autonomy budgets. Not every AI decision carries the same risk. A product recommendation and a loan rejection are not the same thing. Define how much autonomy each type of decision gets. Low-risk decisions can be fully automated. Medium-risk decisions get human-on-the-loop oversight. High-risk decisions get human-in-the-loop approval. Write it down. Make it a system configuration, not a policy document that lives in SharePoint.
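As a sketch, an autonomy budget can be as simple as a mapping from decision type to oversight level, checked in code on every request. The decision types and assignments below are made up; the design choice worth copying is that unknown decision types fall back to the most restrictive mode.

```python
from enum import Enum

class Oversight(Enum):
    AUTOMATED = "fully_automated"        # low risk: no human involvement required
    ON_THE_LOOP = "human_on_the_loop"    # medium risk: human can intervene
    IN_THE_LOOP = "human_in_the_loop"    # high risk: human approves before execution

# The budget lives in configuration, not in a policy document.
AUTONOMY_BUDGET: dict[str, Oversight] = {
    "product_recommendation": Oversight.AUTOMATED,
    "account_flag_for_review": Oversight.ON_THE_LOOP,
    "loan_rejection": Oversight.IN_THE_LOOP,
    "medical_triage_priority": Oversight.IN_THE_LOOP,
}

def required_oversight(decision_type: str) -> Oversight:
    # Unknown decision types default to the strictest mode.
    return AUTONOMY_BUDGET.get(decision_type, Oversight.IN_THE_LOOP)
```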
Audit trails that a regulator can follow. Your audit trail needs to make sense to someone who is not an engineer. If your audit log requires a data engineer to interpret, it will not satisfy a regulator. Build the translation layer now. Date, decision type, input summary, output, confidence, human reviewer, outcome. Plain columns. Plain language.
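The translation layer can start as a single function. The one below is illustrative: it takes an internal lineage event (a plain dict here) and produces the flat, plain-language row a reviewer or regulator can read without an engineer sitting next to them.

```python
# Illustrative translation layer: internal lineage event in, plain-language row out.
def to_audit_row(event: dict) -> dict:
    return {
        "Date": event["decided_at"][:10],
        "Decision type": event["decision_type"].replace("_", " "),
        "Input summary": event.get("input_summary", "see linked input records"),
        "Output": event["output"],
        "Confidence": f"{event['confidence']:.0%}",
        "Human reviewer": event.get("reviewer") or "none",
        "Outcome": event.get("outcome", "pending"),
    }
```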
Was six months worth it?
Yes. But not for the reason I expected.
I expected to find a mess. Contradictory frameworks, vague principles, nothing you could actually architect against. That was true two years ago. It is not true now. The regulatory frame, particularly the EU AI Act and the UK code of practice, is concrete enough that an engineering team can read it and start building. The requirements are not ambiguous. They are just long and boring, and most people have decided that is someone else's job to read.
It is not. If you lead an engineering organisation that builds AI systems, the regulatory text is as much your problem as the architecture diagrams. The deadlines are real. August 2026 is seventeen months away. That is not enough time to redesign your platform. It is enough time to add decision lineage, autonomy budgets, and audit trails if you start now.
The regulations are not perfect. Some of the definitions are clumsy. The timelines are tight in places and vague in others. But the direction is clear: if your AI makes decisions about people, you need to be able to explain those decisions, trace them, and let a human override them. None of that is unreasonable. Most of it is just good engineering that we should have been doing anyway.