Technical Leadership · 9 min read · 8 January 2026

How to Build an AI Strategy When You Are Not a Technical Person

A CEO does not need to understand how transformers work to make good AI strategy decisions. But they do need a framework for evaluating AI opportunities without being captured by technical enthusiasm or vendor marketing.


Ajay Prajapat

AI Systems Architect

The most consequential AI decisions in most organisations are made by people who did not study machine learning. CEOs, COOs, and board members set AI strategy, approve AI investments, and decide how much AI risk the organisation is willing to take. Whether they are equipped to make these decisions well determines whether their organisations use AI effectively or spend the next three years recovering from expensive experiments.

Strategic AI Leadership Is About Asking Better Questions

Non-technical leaders do not need to evaluate AI architectures. They need to ask the questions that hold technical teams accountable for the things that matter: business value, reliability, risk, and cost.

The most valuable AI strategy questions are not technical — they are business questions that happen to be about AI systems.

  • "What metric does this AI project move, and how will we measure that before and after?"
  • "What happens when the AI is wrong? Who catches it, and how quickly?"
  • "What does this cost at the scale we expect to run it at?"
  • "How long will it take to see a result we can evaluate?"
  • "What does success look like in 6 months, and what does failure look like?"

A Framework for Evaluating AI Opportunities

Three filters applied in sequence eliminate most of the AI opportunities that sound exciting but do not deliver value.

Filter 1: Is this a real business problem?

AI should solve problems with a measurable business cost: lost time, wasted money, quality defects, or degraded customer experience. "We should use AI for X" is not a business problem. "We spend 800 staff hours per month on X, and it still has a 12% error rate" is a business problem that AI may be able to address.

Filter 2: Can AI actually address the root cause?

Many business problems are data problems, process problems, or incentive problems — and AI does not fix any of those. AI cannot extract structured data from unstructured documents if the documents are not being created consistently. AI cannot improve decision quality if the decisions are made without the necessary information. Before approving an AI solution, ask whether the root cause is something AI can address.

Filter 3: Is the risk manageable?

What is the worst case if the AI is wrong, and how often might it be wrong? For customer-facing outputs, high-stakes decisions, or regulated processes, the risk profile of AI errors needs careful evaluation. Not all risk is disqualifying — but it needs to be understood, quantified, and managed before deployment, not after the first incident.

Avoiding Vendor Capture

AI vendors are sophisticated at creating urgency and FOMO. "Your competitors are already using this." "You will be left behind." "This is a one-time founding rate." Non-technical leaders without a framework for evaluation are the most vulnerable to this positioning.

The defence is simple: every AI vendor claim should be tested against the three filters above. If a vendor cannot clearly answer what business problem their solution addresses, how it will be measured, and what the risk profile is — the opportunity is not ready.

Ask every AI vendor: what business metric does this move, and how will I measure it? Vendors who cannot answer clearly are selling technology, not business value.

