Designing AI Products
Practical guidelines for defining requirements for AI projects in an EU AI Act world.
You have an exciting idea, a proof of concept appears quickly, a demo impresses stakeholders, and momentum builds.
However, with the EU AI Act entering into force in 2024 and becoming fully applicable from 2026, making sure governance is in place at the time of deployment is increasingly a regulatory concern, and one that needs attention early. As Product Owners working with AI, we sit right in the middle of this tension. We are responsible for delivering innovation quickly, but we are also often the only role that sees the full picture: user needs, technical architecture, operational reality, and organisational risk.
Here is a simple AI Product Canvas, not to slow teams down but to make sure the right questions are asked early, while the system is still on a whiteboard rather than in production.
The AI Product Canvas
Think of this as a thinking tool, not a compliance checklist. It is a twelve-block framework that can sit on a Miro board, run through during discovery, or live as a template in Confluence. Its purpose is simple: make sure an AI initiative is valuable, safe, and well-governed before a single line of code is written, and give all your stakeholders, including cyber security, an understanding of what you're aiming to build.
1. Problem & User Value
Start with the fundamentals: who are you building for, what problem are you solving, and what does success look like? This sounds obvious, but it is often skipped. This section exists to prevent AI for the sake of AI.
If you cannot clearly explain the problem without mentioning the technology, the problem probably is not well defined yet.
2. Intended Use
One of the most important elements under the EU AI Act is intended purpose. Be explicit about what the system does, what it does not do, and who relies on its output. An AI that suggests itinerary options to a tourism operator is very different from one that makes booking decisions automatically. The boundary matters.
3. Risk Classification
The EU AI Act categorises systems by risk.
Ask three questions early:
Does this affect people's rights?
Is it safety-critical?
Does it automate decisions?
The answers help determine whether your system falls into the Act's minimal, limited, or high-risk category (a fourth category, unacceptable risk, is prohibited outright), and therefore how much governance is required.
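As a sketch, the three questions above can be turned into a rough triage function to decide how much governance review to plan for. The tier names and the mapping below are illustrative assumptions for discussion, not a legal classification under the EU AI Act:

```python
from dataclasses import dataclass

@dataclass
class RiskScreen:
    affects_rights: bool       # Does this affect people's rights?
    safety_critical: bool      # Is it safety-critical?
    automates_decisions: bool  # Does it automate decisions?

def triage(screen: RiskScreen) -> str:
    """Rough first-pass bucket -- a planning aid, not a legal determination."""
    if screen.affects_rights or screen.safety_critical:
        return "review-as-potentially-high-risk"
    if screen.automates_decisions:
        return "review-as-limited-risk"
    return "likely-minimal-risk"
```

Even a crude function like this forces the team to answer the three questions explicitly during discovery rather than discovering the answers in production.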
4. Human Oversight
Oversight should not be added later as a control mechanism. It should be a design decision. Define early: who reviews AI outputs, when can humans override them, and which decisions require human approval. Patterns such as human-in-the-loop, human-on-the-loop, and human override shape both system design and user experience.
5. Data Sources
List every data source your system depends on. For each one, capture where it comes from, the type of data, and the potential risks. Reviews and feedback may introduce sentiment bias. Sensor data may have reliability issues. Historical booking data may reinforce patterns you do not want to perpetuate. Surfacing these risks early changes how you design the system.
6. AI Capabilities
Be clear about what you are actually building. Is it an LLM, a recommendation engine, or a combination of systems? Is the model built internally or provided by an external vendor? If you depend on third-party models, that dependency becomes part of your governance model, not just your architecture.
7. Guardrails
Define the behavioural boundaries of the system. What topics are restricted? How are outputs validated? What happens when the system cannot answer safely? Guardrails are not just a safety mechanism. They are product design choices. They shape how the system behaves.
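A minimal sketch of a guardrail layer might look like this. The keyword-based topic filter, the length check, and the fallback message are all illustrative assumptions; production systems typically use trained classifiers rather than keyword matching:

```python
# Illustrative only -- topic list, length limit, and fallback are placeholders.
RESTRICTED_TOPICS = {"medical advice", "legal advice"}
FALLBACK = "I can't help with that topic. Please contact a human agent."

def guard(user_input: str, generate) -> str:
    """Wrap a model call (`generate`) with input and output checks."""
    lowered = user_input.lower()
    if any(topic in lowered for topic in RESTRICTED_TOPICS):
        return FALLBACK                       # refuse before calling the model
    output = generate(user_input)
    if not output or len(output) > 2000:      # basic output validation
        return FALLBACK
    return output
```

Note that the guardrail defines product behaviour: the fallback message is something users actually see, which is why it belongs in the canvas and not only in the codebase.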
8. Transparency
Users should know when they are interacting with AI. This is both a regulatory expectation and a trust signal. Simple things matter: disclosure messages, explanation interfaces, or labels such as "This recommendation was generated using AI based on current data."
9. Security & Misuse
AI systems introduce new attack surfaces. Prompt injection, adversarial inputs, and misuse through APIs are real risks. Map them early and define mitigations such as input filtering, rate limiting, and access controls. It is far easier to design security into the system than to retrofit it later.
10. Logging & Traceability
If something goes wrong, you need to understand what happened and why. Define what gets logged: user inputs, model versions, AI outputs, and human overrides. This supports debugging, accountability, and regulatory audits.
11. Evaluation & Metrics
Success needs to be measured in two ways. Product metrics measure adoption, satisfaction, and task completion. AI metrics measure accuracy, hallucination rate, fairness, and model drift. Teams that only track product metrics often miss quality degradation until it becomes a user support issue.
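As one concrete example, an AI metric such as hallucination rate can be computed from a sample of human-reviewed outputs. The review schema here (a `hallucinated` flag per reviewed output) is an assumption; in practice the flag would come from human review or automated fact-checking:

```python
def hallucination_rate(reviewed: list[dict]) -> float:
    """Fraction of reviewed outputs flagged as hallucinated."""
    if not reviewed:
        return 0.0
    flagged = sum(1 for r in reviewed if r["hallucinated"])
    return flagged / len(reviewed)
```

Tracking a number like this on the same dashboard as adoption and satisfaction is what keeps quality degradation visible before it becomes a support issue.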
12. Monitoring & Lifecycle
AI systems evolve, data shifts and models drift. What worked at launch may not work six months later. Define how you will monitor performance, detect bias, retrain models, and report incidents. Governance needs to be part of the product lifecycle.
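One common way to detect data drift is the Population Stability Index (PSI), compared over binned score distributions. This sketch uses the widely cited 0.2 rule-of-thumb alert threshold, which is a convention rather than a requirement:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI between two probability distributions over the same bins.

    `expected` is the distribution at launch (or training time),
    `actual` is the distribution observed in production.
    """
    eps = 1e-6  # avoid log(0) for empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

def drift_alert(expected: list[float], actual: list[float],
                threshold: float = 0.2) -> bool:
    """Flag drift when PSI exceeds the rule-of-thumb threshold."""
    return psi(expected, actual) > threshold
```

A scheduled job that computes this against the launch-time distribution is a simple, concrete form of the lifecycle monitoring described above.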
Why This Matters for Product Owners
AI governance is often framed as something owned by legal or compliance teams. But Product Owners sit at the intersection of user value, delivery pressure, and organisational responsibility. That position gives us a unique opportunity to shape how AI systems are designed from the beginning.
We do not need to become compliance experts. But we do need to ask better questions earlier - questions about purpose, oversight, accountability, and impact. Ultimately, AI products are not just software systems; they are decision systems interacting with human lives.
And the choices we make during discovery, the ones that happen quietly in workshops and whiteboard sessions, shape how those systems behave in the world. This is where Responsible AI starts.