
In my recent talk at the Lean Global Network conference, “Leading in the Age of Lean AI,” I explored how leaders can cut through the noise around artificial intelligence and turn today’s tools into practical, value-creating systems. Rather than getting lost in the debate about “true AI” or artificial general intelligence (AGI), I offered a way to separate two very different AI stories and focus on the one that matters most for leaders right now:
How do we combine Lean thinking with today’s narrow AI tools so that Humans + AI > Problems?
The session revolved around five themes:
1. Two different AI stories: Big AI vs. narrow AI
2. What critics and optimists both get right
3. “Humans are end-to-end, LLMs are middle-to-middle”
4. The 5 Levels of Narrow AI collaboration
5. Why Technology × Behavior × Management determines results
A lot of today’s confusion comes from mixing together two separate curves:
- Big AI / General AI – the long-term quest for machines that can think, act, and learn as broadly as humans.
- Narrow AI / LLMs – the recent explosion of powerful but focused tools that excel at language, pattern recognition, and specific tasks.
Big AI has been moving slowly and steadily for more than 50 years.
From early expert systems to neural networks to deep learning, researchers have made real progress, but in small, incremental steps. Advances in robotics and vision systems likewise continue incrementally. If we ever build physical robots that can see, think, and act like humans, the impact will be enormous, but we are nowhere near that level today. This is a marathon of gradual advances, not a sudden leap to “machine brains.”
What has changed suddenly is narrow AI, especially since the breakthrough of the attention-based transformer architecture. That research shift made it possible to:
- Train models on massive amounts of existing text and data
- Capture patterns in language, code, and images
- Put surprisingly capable tools into the hands of everyday users
On a graph, Big AI looks like a long, gentle slope. Narrow AI over the past few years looks like a vertical spike.
My point in the talk was not to argue about Big AI timelines. It was to say:
- Big AI is a slow, long-term development story. Important, but still distant.
- Narrow AI is a here-and-now management story. Its capabilities just jumped, and leaders need to understand, guide, and govern how it’s used.
That’s where Lean AI comes in: applying Lean-style problem solving to this new spike in narrow AI capability, so we can use it safely and effectively in real work.
I also used the metaphor “Torque vs. Tokens.”
- In the industrial era, we cared about torque – physical power delivered by machines.
- In the AI era, we care about tokens – units of language and information processed by models.
The underlying technology has shifted from mechanical horsepower to computational “word power,” but the leadership challenge is the same:
How do we turn a powerful new technology into reliable, safe, repeatable value in our specific context?
Robots, vision systems, expert systems, machine learning, and now large language models (LLMs) are all part of the same long story of technological waves. Each time, leaders have had to answer the same question: how do we make this useful and manageable for our people and our customers?
A quote I used in the talk captured one of the central ideas:
“Humans are end-to-end while LLMs are middle-to-middle.”
LLMs are excellent in the middle of many workflows:
- Generating options
- Drafting content
- Explaining concepts
- Suggesting code or next steps
But they are not good at owning the entire chain:
- Defining the real business problem
- Understanding local constraints and culture
- Making ethical trade-offs
- Taking responsibility for outcomes
That is the leader’s job and the team’s job. Humans remain accountable for framing the problem, verifying reality, and closing the loop. AI sits in the middle as a powerful, but bounded, collaborator.
The heart of the talk was a simple framework I use to describe how organizations can progressively adopt narrow AI.
Level 1 – Chat / Dialogue Box
Where most people start:
- Using tools like ChatGPT or other assistants in a browser
- Asking questions, getting drafts, doing quick “back of the envelope” thinking
Value: Personal productivity, fast learning, and quick idea generation. Easy to use.
Risk: Inconsistent use, no integration with how work is actually done, and models that sometimes give confidently wrong advice.
Level 2 – Prompt Engineering
At Level 2, we become more intentional about how we interact with the model:
- Structured prompts, templates, and reusable “scripts”
- Clear roles, constraints, and examples for the AI
Value: More reliable outputs and repeatable patterns of use.
Risk: Still mostly individual; quality depends heavily on user skill.
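The structured prompts at this level are essentially reusable templates. A minimal sketch in Python is below; the role, constraints, and example text are invented placeholders, not prompts from the talk.

```python
# Sketch of a Level 2 reusable prompt template. All content strings
# here are hypothetical placeholders.

PROMPT_TEMPLATE = """You are a {role}.
Constraints:
{constraints}

Example of a good answer:
{example}

Task: {task}
"""

def build_prompt(role, constraints, example, task):
    """Fill the template so every request follows the same structure."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return PROMPT_TEMPLATE.format(
        role=role, constraints=constraint_lines, example=example, task=task
    )

prompt = build_prompt(
    role="lean manufacturing coach",
    constraints=["cite the source document", "answer in under 150 words"],
    example="Start with the problem statement, then the countermeasure.",
    task="Summarize the top three causes of line downtime.",
)
print(prompt)
```

Because the structure lives in the template rather than in each user's head, output quality depends less on individual prompting skill, which is exactly the gap Level 2 closes.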
Level 3 – APIs & Coding Models
Level 3 moves beyond the browser into software and workflows:
- Calling models through APIs
- Using coding assistants to build small tools and automations
Value: AI starts to live inside processes—reports, dashboards, scripts, internal tools.
Risk: Difficult for anyone but coding specialists to use; harder for frontline workers to adopt.
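At Level 3, the model is reached through code instead of a chat window. The sketch below builds the JSON body that chat-style APIs generally expect; the endpoint URL and model name are hypothetical stand-ins, so the network call is shown but not executed here.

```python
import json
import urllib.request

# Hypothetical endpoint and model name -- substitute your provider's values.
API_URL = "https://api.example.com/v1/chat/completions"
MODEL = "example-model"

def build_payload(system_msg, user_msg):
    """Assemble a chat-style request body: a model name plus messages."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
    }

def call_model(payload, api_key):
    """Send the request (requires a real endpoint and API key)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload(
    "You are a concise report writer.",
    "Draft a one-paragraph summary of last week's defects.",
)
print(json.dumps(payload, indent=2))
```

Once a call like this exists, the same three lines can run inside a nightly report script or a dashboard refresh, which is how AI "starts to live inside processes."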
Level 4 – UI + RAG + System Tools
Level 4 is where business systems really start to change:
- Custom user interfaces tailored to specific roles (engineers, nurses, planners, supervisors)
- RAG (Retrieval-Augmented Generation) so the AI answers using your documents, standards, and data
- Integration with internal tools and workflows
Value: High relevance, higher trust, and real leverage on cycle time and quality of problem solving.
Risk: Requires serious work on data quality, knowledge management, and user training.
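The RAG step can be sketched in a few lines: retrieve the most relevant internal document, then ground the prompt in it. Real systems use vector embeddings for retrieval; plain word overlap stands in below, and the documents are invented examples.

```python
# Minimal RAG sketch (Level 4): retrieve, then ground the prompt.
# Word overlap substitutes for embedding search; documents are invented.

DOCUMENTS = {
    "safety-standard": "Lockout tagout is required before any maintenance.",
    "quality-standard": "Every defect must be logged within one shift.",
    "andon-guide": "Pull the andon cord when a defect repeats twice.",
}

def retrieve(question, docs):
    """Return the document whose words overlap the question the most."""
    q_words = set(question.lower().split())

    def overlap(item):
        return len(q_words & set(item[1].lower().split()))

    return max(docs.items(), key=overlap)

def build_grounded_prompt(question, docs):
    """Prepend the retrieved source so the model answers from it."""
    name, text = retrieve(question, docs)
    return (f"Answer using only this source ({name}):\n{text}\n\n"
            f"Question: {question}")

prompt = build_grounded_prompt(
    "When should I pull the andon cord for a defect?", DOCUMENTS)
print(prompt)
```

The payoff is visible in the prompt itself: the model is steered toward your standards and data rather than its general training distribution, which is where the "higher trust" at this level comes from.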
Level 5 – Agentic Systems & Fine-Tuning
At Level 5, we combine everything above into more autonomous, “agentic” behaviors:
- AI agents that can plan small tasks, call tools, and hand off work
- Models fine-tuned or specialized on your domain (within reason)
Value: Larger chunks of knowledge work can be supported or semi-automated.
Risk: Over-automation, loss of transparency, and the temptation to remove humans from critical loops. Solutions tend to be fragile and work only within a very narrow domain.
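The agentic pattern above can be sketched as a small loop: follow a plan, call tools, pass results forward, and stop for human review. Here the "plan" is a fixed list standing in for a model-generated plan, and the tools are stubs; the point is the shape, including the human hand-off at the end.

```python
# Minimal agentic loop sketch (Level 5). The plan is a fixed list
# standing in for a model's plan; the tools are stubs with invented data.

def fetch_metrics():
    return {"downtime_hours": 4.5}  # stub data source

def draft_summary(metrics):
    return f"Downtime last week: {metrics['downtime_hours']} hours."

TOOLS = {"fetch_metrics": fetch_metrics, "draft_summary": draft_summary}

def run_agent(plan):
    """Execute each planned step, passing results forward; stop for review."""
    result = None
    for step in plan:
        if step == "hand_off_for_review":
            # Keep a human in the loop before anything is published.
            return {"status": "awaiting_human_review", "draft": result}
        tool = TOOLS[step]
        result = tool(result) if result is not None else tool()
    return {"status": "done", "draft": result}

outcome = run_agent(["fetch_metrics", "draft_summary", "hand_off_for_review"])
print(outcome)
```

Notice how brittle this already is: one renamed tool or one unexpected step breaks the chain, which illustrates why Level 5 solutions stay narrow and need explicit human checkpoints.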
Across these five levels, the key message to leaders was:
Start where you are, move one level up at a time, and stay anchored in real problems and users — not in technology for its own sake.
Throughout the talk, I emphasized that technology alone is never the full answer. The formula I used was:
Results = Technology × Behavior × Management
- Technology – Models, tools, infrastructure, data
- Behavior – How people actually use the tools: habits, skills, standards, and learning
- Management – How leaders set direction, allocate time, create standards, and remove barriers
If any one of these factors is near zero, the overall product is near zero.
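The multiplicative point is easy to see with numbers. The factor scores below are invented purely for illustration, on a 0.0 to 1.0 scale.

```python
# Numeric illustration of Results = Technology x Behavior x Management.
# All scores are invented for illustration (0.0 = absent, 1.0 = excellent).

def results(technology, behavior, management):
    return technology * behavior * management

strong_all = results(0.9, 0.8, 0.9)       # every factor healthy
weak_management = results(0.9, 0.8, 0.1)  # same tech, weak management

print(round(strong_all, 3))       # 0.648
print(round(weak_management, 3))  # 0.072
```

Great technology with near-zero management yields roughly a tenth of the result, which is why the formula multiplies rather than adds.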
That’s why I frame this era as “Lean AI” rather than just “AI”:
- Lean provides the problem-solving discipline and respect-for-people mindset
- AI provides new capability for analysis, creativity, and speed
- Management systems tie it all together and keep the focus on customers and value
I closed with a simple proposition:
The goal is not “AI instead of humans,” but Humans + AI > Problems.
For leaders, that means:
- Staying skeptical of hype but curious about new capabilities
- Building structured ways to experiment and learn
- Focusing on specific problems, not vague “AI strategies”
- Developing people so they can use these tools wisely, safely, and creatively
If we do that, AI becomes another powerful element in the long history of improvement—standing alongside TPS, Lean, Six Sigma, and other methods—not replacing them, but extending what thoughtful, well-led teams can accomplish.