Top-Down AI Strategy & Epics Discovery
A recent MIT (Project NANDA) study reports a ~95% failure rate for enterprise GenAI pilots; only ~5% deliver measurable P&L impact. The report is preliminary but specific: ~300 public implementations analysed, 52 structured interviews, and 153 executive survey responses (Jan–Jun 2025). The main blockers are workflow fit, lack of learning/memory, and weak measurement — not model capability. (AI News)
In controlled deployments, AI does lift productivity when embedded in real work with guardrails: +14–15% for customer support agents; 55% faster task completion in a GitHub Copilot RCT; ~8–22% gains in field studies at Microsoft and Accenture. (Oxford Academic; NBER; arXiv; An MIT Exploration of Generative AI)
Most enterprises — especially government — still design around paper‑era processes (“create a case, pass the file, seek approval”), so they bolt AI onto queues and forms. That approach reliably lands you in the 95%.
Gartner forecasts ~30% of GenAI projects will be abandoned post‑POC by end‑2025 due to unclear value, data issues, and weak risk controls. (THE Journal)
B‑CAID — Business‑Centric AI Design — is a discipline to redesign work so AI can deliver outcomes with embedded controls and learning loops. It starts from fundamental business needs and quantifiable high-level goals. It requires both understanding of AI and understanding of business.
MIT’s Project NANDA GenAI Divide study finds 95% of corporate GenAI pilots show no measurable financial impact; ~5% extract meaningful value. Adoption is high, transformation is low; “shadow AI” (personal ChatGPT/Claude use) runs ahead of official programs. External vendor partnerships succeed ~2× as often as internal builds, mainly because they ship workflow fit + learning instead of “demo‑ware.” Those are serious indicators, and they are not good.
The gap becomes even more pronounced when you look at the real results of "shadow AI" use by the champions, the experts. By the very nature of such use, one has to rely on anecdotes rather than professional evaluations, so I will avoid giving estimates. Let me say, however, that the results are astounding, achieved with the same underlying models, such as GPT‑5.
Still, even the existing reports indicate that where AI is dropped into real workflows with memory, guardrails, and measurement, it performs:
Customer support: large‑scale field deployment → +14–15% resolved per hour (biggest gains for novices). (NBER; Oxford Academic)
Software engineering: RCT shows 55.8% faster completion of a coding task with Copilot; field evidence at Microsoft/Accenture shows ~8–22% more PRs per week after adoption. (arXiv; An MIT Exploration of Generative AI)
Bottom line: it’s not the models. It’s the way we design the work — and whether the system can learn inside that work. (AI News)
For decades, IT taught the business to think in constraints: queues, forms, approvals — digital replicas of moving a cardboard file from desk to desk. A Business Analysis step still, more often than not, frames requirements as “update a record, then route it.” That’s how organisations take a capable technology and deliver a slower, riskier, more expensive system: add AI inside an unchanged flow; measure model metrics instead of time‑to‑outcome; block feedback/memory because “that’s not how our case system works.” This is the pattern MIT flagged in the 95% of GenAI pilots that fail. (AI News)
Traditional business‑analysis methods excel at traceability but tend to reproduce current processes rather than redesign work for AI. Take the popular Use Cases, a methodology its own creator later distanced himself from: it focuses on the sequence of actions within one step of a business process. Once Use Cases are handed to a project, nothing further can be done in terms of innovation. Most Agile practices rely on User Stories, which often focus on the experience of internal users. And even when an AI project is given freedom within the framed constraints, that freedom is still granted within the same process, and the AI is judged the way a human in that role would be.
The true era of cars started not when someone put a petrol engine into a carriage, but when cars were designed as cars from the very beginning, when you could no longer see the carriage in the car.
In enterprise IT, we are in the era of bolting AI onto processes of the bygone pre‑AI era, and being surprised at not seeing enough improvement. We need a new discipline: the discipline of AI Design, producing AI‑centred solutions from the ground up. Such a discipline can be seen as roughly equivalent to recording Epics in Scaled Agile, or Business Capabilities in a more traditional practice, but it requires significant understanding and use of AI.
AI Design, for lack of a fancier term, is the discipline, or series of disciplines, for creating AI‑centric solutions for public and private enterprise. While it is only forming, and for some time will come in many flavours, expect certain commonalities across all possible approaches:
Requesting information from senior executives
Being obsessed with quantitative targets not based on existing processes
Questioning existing processes, clarifying decision rights, and discovering process‑agnostic performance and success metrics
Being surprisingly comfortable with the anarchy of plain text, probabilities, and not knowing in advance what the next step should be
Being adamant about validation: regular validation, and multiple rounds of validation
Being rigorous on risk, interpreting risks quantitatively rather than conceptually or motivationally
Pushing AI forward as the gateway and controller rather than one of the workers
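The quantitative‑risk principle above can be made concrete with a small sketch: instead of High/Medium/Low labels, each risk carries a probability and an impact, and risks are ranked by expected loss. All names and figures below are hypothetical, purely for illustration.

```python
# Quantitative risk register sketch (all risks and numbers hypothetical).
# Each entry: (name, annual probability, impact in dollars if it occurs).
risks = [
    ("incorrect automated decision reaches a customer", 0.10, 250_000),
    ("sensitive data sent to an external model",        0.02, 1_000_000),
    ("model drift degrades throughput",                 0.30, 50_000),
]

def expected_loss(probability: float, impact: float) -> float:
    """Expected annual loss: probability multiplied by impact."""
    return probability * impact

# Rank by expected loss, highest first, rather than by gut feel.
ranked = sorted(risks, key=lambda r: expected_loss(r[1], r[2]), reverse=True)
for name, p, impact in ranked:
    print(f"{name}: expected loss ${expected_loss(p, impact):,.0f}/year")
```

The point of the exercise is that a 2% chance of a million‑dollar event and a 30% chance of a $50,000 event become directly comparable numbers, not competing anxieties.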
NIST’s AI Risk Management Framework 1.0 and its Generative AI Profile (AI 600‑1) are the governance baselines we design into the workflow—not after it.
B-CAID is our approach to AI Design. It is the result of experience in delivering Strategic Business Analysis combined with practical experience of working with AI. It was created to position an AI pilot the way it is most likely to succeed and deliver.
B-CAID was created for projects and initiatives seeking a degree of control over the process of acquiring or developing an AI solution before major funding decisions are made. It is designed to fit under restrictive procurement thresholds, minimise risk, and provide transparency at every step.
B-CAID is a practical method to redesign work so AI delivers measurable outcomes with built‑in controls and a learning loop.
Any B-CAID engagement delivers a statement of compliance with the applicable guidelines:
For Australia:
National Artificial Intelligence Centre: Voluntary AI Safety Standard
The Department of Industry, Science and Resources: Safe and responsible AI in Australia. It is a draft; however, we are using it until the legislated guardrails are published
The Department of Industry, Science and Resources: Australia’s AI Ethics Principles
For U.S. Government Agencies:
The U.S. National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (AI RMF 1.0)
The U.S. National Institute of Standards and Technology Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
Any engagement delivers a detailed evaluation of compliance with the applicable guidelines, as well as with guiding documents provided by the customer
B-CAID is an analysis and architecture exercise, and as such requires no changes to your environment. As a tiny consultancy without people on the bench, Business Abstraction does not benefit from creating more work, and will minimise the effort and investment you need to receive the desired outcome.
To paraphrase the famous saying, three quarters of the factors on which a large project is based are wrapped in a fog of greater or lesser uncertainty. Once you engage in the procurement of a major transformation, let alone in the project itself, there is neither time nor capacity to stop and think.
If you are similar to the 95% of enterprises surveyed in the MIT review, you are not ready to understand the implications of project decisions. B-CAID removes the hype and delivers a clear picture of the problems.
A quick and easy introduction to B-CAID is to invite a consultant for a 10‑day review. Within that time, the Consultant will:
Receive and review all written narratives and other background documents
Negotiate an engagement plan
Conduct interviews with Directors assigned to the project and the specified Senior Executives (stakeholders)
Request initial information from the Data team
Analyse all further available documents
Provide several options, each with quantified KPIs, an implementation plan with staged costs, validation criteria for each step, and obstacles/dependencies
For each option, provide detailed documentation and presentation materials
That is a fixed-cost plan delivering a massive amount of information specific to the client's context, needs and intentions.
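To make the shape of an "option" deliverable tangible, here is a minimal sketch of how one might be structured: quantified KPIs, staged costs, and a validation criterion per step. Every name and number is hypothetical, invented for illustration only.

```python
# Hypothetical structure of one deliverable option (all values illustrative).
option = {
    "name": "AI-first correspondence triage",
    "kpis": {"time_to_outcome_days": 2.0, "straight_through_rate": 0.5},
    "stages": [
        # Each stage carries its own cost and its own validation criterion.
        {"step": "narrow pilot",    "cost_aud": 60_000,
         "validation": "straight_through_rate >= 0.3"},
        {"step": "expanded scope",  "cost_aud": 180_000,
         "validation": "straight_through_rate >= 0.5"},
    ],
    "obstacles": ["data access approval", "records-management policy"],
}

# Staged costs roll up to a total the funding decision can be made against.
total_cost = sum(stage["cost_aud"] for stage in option["stages"])
print(f"Staged cost for '{option['name']}': ${total_cost:,}")
```

The design choice worth noting: funding is attached to stages, and each stage has a criterion that must hold before the next stage's cost is incurred.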
You need to commit to:
Providing all relevant documents. Of course, we will get everything that is on your website, and extract more information from it than you believe is there. However, we will also need documents that outline your vision, success criteria, and other contributions
Talking to the Consultant. Plan for 10–20 hours of interviews
Providing feedback. It is inevitable that something remains unsaid, or not written down, in the first iteration. Your feedback is essential. The Consultant will make it as easy as possible: instead of delivering a huge document, the Consultant will provide a list of assumptions in an easy‑to‑understand form, grouped so each reviewer can limit reading to their area of interest/expertise
A contact person. We will need someone who can let the Consultant in and out through security, arrange meetings using your internal system, and ideally learn as much as they can along the way
A fixed fee equivalent to 10 billable days, paid on completion.
Not much. Most of the information will be extracted from the public website, general industry knowledge, and the internal documents provided. The Consultant will also ensure that you
None; there is nothing hidden. You will receive a detailed report with volumes of information and several implementation options. You will also receive a reference model to assist in evaluating major proposals, or to give to your in‑house team.
The same service, but with close involvement of 1–4 client staff, incorporating training and mentoring. In addition to the 10‑day B-CAID, the Consultant will:
Deliver 4 x half-day training sessions
Work with nominated staff to extract more information, and deliver more information to the stakeholders
Recommended for clients with security requirements at Secret and above, and for business clients with high information security requirements.
Explicit training sessions take time
It is assumed that such a service would be requested for more ambitious undertakings, or a more complex environment. Jumping through hoops takes time.
Yes, if it makes a difference under your procurement rules. The Consultant will separate consulting days and training days clearly if that is your choice.
Within the given time, at the cost of 60 billable days, the Consultant will deliver a pilot agreed on the basis of the 10‑Day Review.
Weeks 1–2: Implement a narrow solution on the basis of initially available data and commercially available AI services: a demonstration of AI's place in the business rather than a generic presentation of AI technology.
Weeks 3–7: Iterate weekly to hit thresholds (e.g., ≥50% straight‑through with <2% exception error); increase data availability and expand scope only after metrics hold.
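The weekly gate in the text ("≥50% straight‑through with <2% exception error") can be sketched as a simple check run before any scope expansion. The metric definitions below are assumptions for the sketch: straight‑through rate is the share of all cases fully handled by the AI, and exception error rate is the share of all cases where an exception was mishandled.

```python
# Weekly gate check for the pilot iteration loop (metric definitions assumed).
STRAIGHT_THROUGH_MIN = 0.50  # threshold from the plan: >=50% straight-through
EXCEPTION_ERROR_MAX = 0.02   # threshold from the plan: <2% exception error

def gates_hold(total_cases: int, straight_through: int,
               exception_errors: int) -> bool:
    """Return True only when both weekly thresholds hold."""
    if total_cases == 0:
        return False
    st_rate = straight_through / total_cases
    err_rate = exception_errors / total_cases
    return st_rate >= STRAIGHT_THROUGH_MIN and err_rate < EXCEPTION_ERROR_MAX

# Hypothetical week: 1200 cases, 684 straight-through, 9 mishandled exceptions.
week = {"total_cases": 1200, "straight_through": 684, "exception_errors": 9}
print(gates_hold(**week))  # -> True (57% straight-through, 0.75% error)
```

Scope expands only in weeks where both gates hold; a failed gate sends the team back to iterating, not forward to a bigger rollout.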
Yes, but that should be agreed in advance. Obviously, if the Consultant cannot see the data, synthetic data should be used.
That is the recommended way of implementing the pilot. It can be implemented in a sandbox provided by the Consultant, or on the corporate cloud. It does not include implementing an on‑premises LLM.