AI differs fundamentally from traditional IT because an AI system is driven by a probabilistic core, which makes it a genuine challenge even for experienced IT professionals.
And while implementation is usually handled by well-trained AI engineers and data scientists, they rarely understand your requirements the way you do, and when left without input they tend to gravitate towards technology demonstrations.
Understanding AI at the level of an AI developer takes years. However, this course teaches you enough about AI to define the necessary requirements and guardrails.
Like all training by Business Abstraction, this course is highly practical and focused on your current needs.
While the course is always customised to your needs, the following outline is the starting point for customisation:
Attendance by everyone is expected, including management/executives
Understanding modern AI
Generative AI revolution
Lab: running AI prompts
Lab: experimenting with AI
Demonstration of other types of AI:
Predictive AI
Image AI
Time Series AI
Graph AI
Integrating Generative AI with other types of AI
Understanding Agentic AI
Use case anatomy (what “good” looks like; a structured template sketch follows this list)
user / decision / workflow context (where the output is used)
required level of correctness + allowable error
latency, volume, cost constraints
data sensitivity + privacy/security constraints
fallback behaviour (what happens when the AI is wrong/uncertain)
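A minimal sketch of how these anatomy fields could be captured as a one-page structured template, assuming Python; every field name and the example values below are illustrative, not a prescribed schema:

from dataclasses import dataclass

@dataclass
class UseCaseCanvas:
    """One-page canvas for a candidate AI use case. Adapt field names
    and types to your organisation; these are illustrative only."""
    name: str
    user_and_workflow: str        # who uses the output, in which decision/workflow
    correctness_target: float     # e.g. 0.95 = 95% of outputs must be acceptable
    allowable_error: str          # which errors are tolerable, and at what rate
    latency_budget_ms: int        # maximum acceptable response time
    monthly_volume: int           # expected requests per month
    cost_ceiling_per_call: float  # budget constraint per invocation
    data_sensitivity: str         # e.g. "public", "internal", "personal", "regulated"
    fallback_behaviour: str       # what happens when the AI is wrong or uncertain

canvas = UseCaseCanvas(
    name="Invoice summarisation",
    user_and_workflow="Accounts payable clerk, pre-payment review",
    correctness_target=0.95,
    allowable_error="Minor wording issues acceptable; wrong totals are not",
    latency_budget_ms=2000,
    monthly_volume=5000,
    cost_ceiling_per_call=0.05,
    data_sensitivity="internal",
    fallback_behaviour="Route to manual summarisation queue",
)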
Define “quality” for this use case (the core element most AI projects leave undefined)
accuracy / correctness (or task success rate)
groundedness / traceability (can we justify outputs?)
robustness (edge cases, adversarial prompts, ambiguous inputs)
bias/fairness considerations (where relevant)
auditability / recordkeeping requirements
Evaluation plan at “project-definition level” (a worked sketch follows this list)
what to measure, how to sample, what “pass” means
“gold set” examples + expected outputs
human review rubric (what reviewers look for)
minimum monitoring signals after go‑live
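To make this concrete, here is a hedged sketch of a gold-set evaluation with a pass threshold; run_model stands in for whatever system is under test, and the 90% threshold and the naive keyword judge are illustrative assumptions, not recommendations:

# Minimal gold-set evaluation sketch. All values are illustrative.
GOLD_SET = [
    {"input": "Summarise invoice INV-001", "expected": "summary mentions total and due date"},
    {"input": "Classify this email", "expected": "category: billing"},
]

PASS_THRESHOLD = 0.90  # "pass" means >= 90% of gold cases judged acceptable

def run_model(prompt: str) -> str:
    # Placeholder: substitute a call to your actual AI system.
    return "summary mentions total and due date"

def judge(output: str, expected: str) -> bool:
    # In a real plan this is a human review rubric or a purpose-built check;
    # a naive keyword match is used here purely for illustration.
    return all(word in output.lower() for word in expected.lower().split())

def evaluate() -> bool:
    passed = sum(judge(run_model(case["input"]), case["expected"]) for case in GOLD_SET)
    rate = passed / len(GOLD_SET)
    print(f"Task success rate: {rate:.0%} (pass threshold {PASS_THRESHOLD:.0%})")
    return rate >= PASS_THRESHOLD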
Risk framing
statistical risks (variance, drift, non-determinism)
operational risks (hallucinations, automation bias, misuse)
governance risks (unreviewed changes, missing evidence)
security risks (data leakage, prompt injection if using RAG/tools)
Lab: Use Case Quality Canvas (2–3 candidate use cases). Participants fill a one‑page template per use case, then score with a simple matrix (a scoring sketch follows these criteria):
Value
Feasibility
Risk
Evidence burden
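One way the lab’s scoring matrix might be implemented; the 1–5 scale, the weights, and the sample candidates are illustrative defaults for the exercise, not recommendations:

# Illustrative scoring matrix. Each criterion is scored 1-5 by participants;
# risk and evidence burden are weighted negatively, so they count against a case.
WEIGHTS = {"value": 0.4, "feasibility": 0.3, "risk": -0.2, "evidence_burden": -0.1}

def score(use_case: dict) -> float:
    """Weighted score: value and feasibility raise it, risk and burden lower it."""
    return sum(weight * use_case[criterion] for criterion, weight in WEIGHTS.items())

candidates = [
    {"name": "Invoice summarisation", "value": 4, "feasibility": 5, "risk": 2, "evidence_burden": 2},
    {"name": "Contract drafting", "value": 5, "feasibility": 2, "risk": 4, "evidence_burden": 5},
]

for c in sorted(candidates, key=score, reverse=True):
    print(f"{c['name']}: {score(c):.2f}")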
Design the “human + AI” process so you can operate it safely: control points, responsibilities, and evidence.
Abstracting Processes
identifying primary objectives
identifying alternative outcomes
identifying dependencies
Process mapping: before → after
identify where AI adds value vs where it adds risk
define “human-in-the-loop” points (review, approval, escalation)
define failure modes + fallbacks (what happens when AI output is low confidence)
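As a minimal sketch of such a control point, assuming the AI system can report a confidence score (the 0.8 threshold and all names below are illustrative):

# Human-in-the-loop routing: low-confidence outputs are never sent straight
# through; they are escalated for human review instead.
CONFIDENCE_THRESHOLD = 0.8  # set per use case from your quality definition

def route(output: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTO-APPROVED: {output}"
    # Fallback behaviour: hand off to a human reviewer with full context.
    return f"ESCALATED FOR REVIEW (confidence {confidence:.0%}): {output}"

print(route("Refund approved for order #123", 0.93))
print(route("Refund approved for order #456", 0.55))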
Controls and guardrails (this is what makes it “quality”)
input controls (what can be fed into the system)
output controls (review rules, restricted actions, disclaimers)
prompt/model change control (who can change what, and how it’s approved)
recordkeeping: what must be logged for audit/reconstruction
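A hedged sketch of how output controls and recordkeeping could be combined in a single wrapper; the restricted-action list, log format, and function names are assumptions for illustration only:

import json, logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

# Illustrative output control: actions the AI may never trigger unreviewed.
RESTRICTED_ACTIONS = {"delete", "transfer_funds", "send_external_email"}

def guarded_call(user: str, prompt: str, model_fn) -> str:
    """Run the model, block restricted actions, and log enough to reconstruct
    the interaction for audit. model_fn stands in for your AI system."""
    output = model_fn(prompt)
    blocked = any(action in output.lower() for action in RESTRICTED_ACTIONS)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "output": output,
        "blocked": blocked,
    }))
    if blocked:
        return "Action requires human approval; the request has been logged."
    return output

print(guarded_call("analyst1", "Close inactive accounts",
                   lambda p: "Proposed action: delete account A-17"))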
Operating model (who owns what)
roles: business owner, product owner, platform/ops, risk/compliance, security
a simple RACI for: approvals, incidents, monitoring, change requests
From pilot to production (go/no‑go gates; a checklist sketch follows this list)
what evidence is required before expansion
what monitoring is required after go‑live
what triggers rollback / incident response
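A go/no‑go gate can be made explicit as an evidence checklist; the sketch below is illustrative, and its items are examples rather than a complete evidence pack:

# Illustrative gate: every evidence item must be true before expansion.
EVIDENCE_PACK = {
    "gold_set_pass_rate_met": True,
    "human_review_rubric_signed_off": True,
    "monitoring_dashboards_live": False,
    "rollback_procedure_tested": True,
}

def gate_decision(evidence: dict) -> str:
    missing = [item for item, done in evidence.items() if not done]
    return "GO" if not missing else f"NO-GO, missing: {', '.join(missing)}"

print(gate_decision(EVIDENCE_PACK))  # NO-GO, missing: monitoring_dashboards_live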
Lab: Refactor one real process using the top use case from Half‑day 3, producing:
a “to‑be” process map with control points
a one‑page operating model / RACI
a go/no‑go checklist (minimum evidence pack)
A use case register (top 3–10 candidates)
For the top 1–2 use cases: a quality definition + acceptance criteria
A lightweight evaluation rubric (how you will judge outputs)
A refactored process blueprint for 1 priority use case
A control checklist (human review, logging, change control, escalation)
A “pilot → production” gating plan (what evidence is required)