The Usual Sequence

In portfolio companies, AI work breaks at the handoff from demo to production. The problem is not model access. The problem is production readiness: release process, test coverage, data access, and the people directing the tools.

A recurring boardroom sequence looks like this: a competitor launches a feature, directors ask for an AI plan, and a prototype appears quickly. The work then slows once it reaches release controls, environment parity, or data preparation: the point at which the feature has to meet production requirements.

The ladder is a production checklist more than a maturity model. The five steps run from ad hoc tool use to customer-facing AI features.


The Five Steps

STEP 0 Basic AI Adoption

Individuals use ChatGPT, Claude, or copilots for ad hoc work. That matters, but it is not yet an organizational capability. Every company can buy the same tools. There is little advantage unless usage is systematic and tied to measurable output.

A board slide that says "the team uses AI" can describe nothing more than a few engineers with separate subscriptions and informal habits. When those habits are personal rather than institutional, the capability does not persist.

Treat Step 0 as a baseline. Useful, but insufficient.

STEP 1 Agentic Tooling + High-Agency Talent

The company has moved beyond ad hoc prompts. Engineers use agentic tools to draft code, tests, scripts, and operational work, and they can independently accept, revise, or reject the output. Tooling matters, but the limiting factor is high-agency talent.

This is where agency shows up. Strong engineers use AI the way a good manager uses a junior team: clear scoping, fast review cycles, and high standards. The tool expands the reach of an already capable person; it does not substitute for one.

A company can buy licenses in a week. It takes longer to identify who can use them well. That is why Step 1 is partly a staffing question.

STEP 1.5 Fix the Fundamentals—Using Agentic Velocity

This is the step many companies skip. Before shipping AI into the product, fix the operating basics: CI/CD, automated testing, security controls, environment parity, and release discipline. AI features do not remove the need for good software operations; they make the cost of weak operations more obvious.

Agentic tooling changes the economics of this work. In a recent engagement with a PE-backed SaaS company, we deployed a complete network re-architecture to production—replacing an insecure network with a secure one—within 30 days. A web application firewall that would have taken months to plan and deploy was operational in weeks. These timelines would not have been realistic two years ago with the same team size. That does not make the work optional. It removes the old excuse that the foundation will take too long to fix.

Step 1.5 is where a company becomes able to ship reliably. Without it, every AI initiative becomes a custom project with avoidable production risk.
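
One of the fundamentals above, environment parity, can be spot-checked mechanically rather than asserted. A minimal sketch, assuming each environment's configuration is available as a key-value mapping (the config keys and values here are hypothetical; mask secrets before comparing real configs):

```python
def parity_report(staging: dict, production: dict) -> dict:
    """Compare two environment configs and report drift.

    Returns keys present in only one environment and keys whose
    values differ between the two.
    """
    only_staging = sorted(staging.keys() - production.keys())
    only_production = sorted(production.keys() - staging.keys())
    differing = sorted(
        k for k in staging.keys() & production.keys()
        if staging[k] != production[k]
    )
    return {
        "only_staging": only_staging,
        "only_production": only_production,
        "differing": differing,
    }

# Hypothetical configs: staging is missing a cache setting and
# runs a different runtime version than production.
staging = {"runtime": "python3.11", "db_pool": 10}
production = {"runtime": "python3.12", "db_pool": 10, "cache_ttl": 300}

report = parity_report(staging, production)
print(report)
```

A check like this belongs in CI so that drift is caught on every deploy, not during an incident.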

STEP 2 Data Readiness

AI capability depends on clean, accessible data. The question is not whether the company has data in the abstract. The question is whether the underlying systems produce data that can be queried, joined, and audited without a side project.

Data readiness is not only a data science concern. It is also a product and engineering discipline concern: event instrumentation, stable identifiers, coherent schemas, and sane pipelines.

Many portfolio companies have years of accumulated data debt. Until that is addressed, the output quality of AI features will be uneven and hard to trust.
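
The disciplines above, stable identifiers and coherent schemas in particular, lend themselves to cheap automated checks. A minimal sketch using only the standard library, with hypothetical customer and event records (field names are illustrative, not a prescribed schema):

```python
from collections import Counter

def audit(customers: list[dict], events: list[dict]) -> dict:
    """Cheap data-debt checks: duplicate IDs, null rate, orphaned events."""
    ids = [c["customer_id"] for c in customers]
    duplicate_ids = [i for i, n in Counter(ids).items() if n > 1]
    null_emails = sum(1 for c in customers if not c.get("email"))
    known = set(ids)
    orphaned = [e for e in events if e["customer_id"] not in known]
    return {
        "duplicate_ids": duplicate_ids,
        "null_email_rate": null_emails / len(customers),
        "orphaned_events": len(orphaned),
    }

# Hypothetical data: one duplicated ID, one missing email,
# and one event referencing a customer that does not exist.
customers = [
    {"customer_id": "c1", "email": "a@example.com"},
    {"customer_id": "c1", "email": "b@example.com"},
    {"customer_id": "c2", "email": None},
]
events = [
    {"customer_id": "c1", "type": "login"},
    {"customer_id": "c9", "type": "login"},
]
audit_report = audit(customers, events)
print(audit_report)
```

If checks like these have never been run, the honest answer to "how clean is the data" is that nobody knows.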

STEP 3 AI Features That Deliver Outcomes

Only at this stage does it make sense to build AI directly into the product or operating workflow. The supporting conditions are in place: the team can ship, the data is usable, and the organization knows how to measure whether the feature is working.

The distinction is not between "AI" and "non-AI." It is between features that are lightly wrapped around an API and features that are integrated into the product's economics, data, and customer workflow.

Step 3 should show up in operating metrics: faster resolution times, higher throughput, better prediction quality, or lower cost to deliver a specific result.
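
A claim like "faster resolution times" reduces to a before/after comparison once the underlying work is instrumented. A minimal sketch with hypothetical ticket resolution times in hours around a feature launch:

```python
from statistics import median

def resolution_delta(before_hours, after_hours):
    """Median resolution time before and after a change, plus percent change."""
    b, a = median(before_hours), median(after_hours)
    return {"before": b, "after": a, "change_pct": round((a - b) / b * 100, 1)}

# Hypothetical resolution times (hours) before and after an AI feature shipped.
before = [20, 26, 24, 40, 18]
after = [12, 15, 10, 22, 14]
delta = resolution_delta(before, after)
print(delta)  # -> {'before': 24, 'after': 14, 'change_pct': -41.7}
```

Medians resist the outlier tickets that distort averages; a real analysis would also control for ticket mix and seasonality.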

AI features usually fail at the production layer, not at the model layer.

Steps 1, 1.5, and 2 do not produce flashy board slides. They do determine whether a feature survives contact with production.


How to Assess Where You Are

If you are trying to place a company on this ladder, the useful questions are operational. They are less about vision and more about what the engineering organization can already do on a normal Tuesday.

STEP 0 Diagnosing Basic AI Adoption
Questions to ask the CTO
  • "How are your engineers using AI tools day-to-day?"
  • "What's your policy on AI-assisted development?"
  • "If I asked three random engineers on the team what AI tools they use, would they all give me the same answer?"

If the answers are informal, inconsistent, or based on a few enthusiasts, the company is still at Step 0. That is normal. The mistake is describing it as a mature AI program.

STEP 1 Diagnosing Agentic Readiness
Questions to ask the CTO
  • "How many of your team members could build an internal tool using AI in a weekend?"
  • "Who on your team is pushing AI adoption without being asked?"
  • "Show me the last thing an engineer built in a day that would have taken a week two years ago."

Step 1 shows up when the CTO can point to specific people and concrete output changes: internal tools built quickly, repetitive work automated, release cycles shortened. If the story is only about licenses purchased, the company is not there yet.

STEP 1.5 Diagnosing the Fundamentals
Questions to ask the CTO
  • "How long does it take to deploy a code change to production?"
  • "Can you roll back a deployment in under 5 minutes?"
  • "Do you have staging environments that mirror production?"
  • "What percentage of your codebase has automated test coverage?"

These questions have concrete answers. Slow or fragile deployment, missing staging parity, and minimal automated testing all signal the same thing: the organization is not ready to scale AI work into production.
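
Several of these answers can be pulled from CI history rather than taken on trust. A minimal sketch that computes commit-to-production lead time from hypothetical timestamp pairs (the timestamps and their source are illustrative; any CI system's API can supply the real ones):

```python
from datetime import datetime
from statistics import median

def lead_times_hours(pairs):
    """Hours from commit to production deploy for each change."""
    fmt = "%Y-%m-%d %H:%M"
    return [
        (datetime.strptime(d, fmt) - datetime.strptime(c, fmt)).total_seconds() / 3600
        for c, d in pairs
    ]

# Hypothetical (commit_time, deploy_time) pairs from a CI system.
pairs = [
    ("2025-01-06 09:00", "2025-01-06 15:00"),  # 6 hours
    ("2025-01-07 10:30", "2025-01-08 10:30"),  # 24 hours
    ("2025-01-08 14:00", "2025-01-08 17:00"),  # 3 hours
]
median_lead = median(lead_times_hours(pairs))
print(median_lead)  # -> 6.0
```

A CTO who can produce this number from real data is at a different point on the ladder than one who has to estimate it.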

STEP 2 Diagnosing Data Readiness
Questions to ask the CTO
  • "Where does your customer data live? Can you query it without engineering help?"
  • "How clean is your data pipeline? When was the last time you found a data quality issue in production?"
  • "If I gave you an AI model that needed your last 12 months of customer interactions, how long would it take to prepare that dataset?"

The key issue is how quickly the company can assemble a usable dataset for a live problem. If the answer involves manual exports, inconsistent IDs, or multiple teams, Step 2 is unfinished.

STEP 3 Diagnosing AI Feature Maturity
Questions to ask the CTO
  • "Which product features currently use AI or ML?"
  • "Are those features driving measurable customer outcomes or just checking a box?"
  • "What's the feedback loop? How do you know if an AI feature is actually working?"

A genuine Step 3 company can explain impact in customer or operating terms. A weaker company talks mainly about the model choice or the demo.
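
The feedback-loop question has a concrete shape: the feature's suggestions have to be logged next to what actually happened. A minimal sketch with a hypothetical AI triage feature that routes tickets (the categories and log format are illustrative):

```python
def feedback_report(log):
    """Compare an AI feature's suggestions against actual outcomes.

    Each log entry pairs the feature's suggestion with the eventual
    outcome; the report is the match rate, the basic feedback-loop number.
    """
    matched = sum(1 for entry in log if entry["suggested"] == entry["actual"])
    return {"n": len(log), "match_rate": matched / len(log)}

# Hypothetical log of an AI triage feature's routing suggestions.
log = [
    {"suggested": "billing", "actual": "billing"},
    {"suggested": "billing", "actual": "support"},
    {"suggested": "refund", "actual": "refund"},
    {"suggested": "refund", "actual": "refund"},
]
feedback = feedback_report(log)
print(feedback)  # -> {'n': 4, 'match_rate': 0.75}
```

If no such log exists, the feature has no feedback loop, whatever the demo looked like.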