AI in Practice

What AI's second-order effects mean for your technology roadmap

In 1994, Netscape shipped Navigator. Most executives saw a novelty. The second-order effects — Google, AWS, the collapse of entire industries — took a decade to fully land. Nobody saw them coming because they were watching the browser, not what the browser made possible.

AI is the new browser. The first-order conversation — "should we use AI?" — is already over. The second-order effects are what matter now, and they're going to hit your technology roadmap, your resourcing model, and your SaaS contracts harder than most people are planning for.

Here's what's actually at stake for technology and delivery leaders.

Your delivery function has a translation-layer problem

Every mid-market delivery function runs on translation. Business analysts who convert stakeholder requirements into specifications. Project coordinators who manage handoffs. PMO resources who produce status reporting. These roles exist because capability was expensive and specialist — someone had to bridge the gap between the business and the people doing the work.

AI collapses that gap.

Not immediately, and not completely. But the $80–150k business analyst who spends 60% of their time synthesising, documenting, and coordinating is a different value proposition than three years ago. Within 12–24 months, the resourcing question will be unavoidable: how many of these roles are we maintaining out of habit, and how many are genuinely irreplaceable?

This isn't a prediction — it's already happening in organisations that have deployed AI into their delivery workflows. The BA function doesn't disappear. It transforms. The people who remain are the ones doing judgment work, not translation work.

Start auditing your delivery function now. Which roles are doing synthesis and translation? Which are doing genuine expert judgment? The first category faces structural pressure. The second becomes more valuable.

The SaaS you're about to buy may not exist in three years

Software categories are collapsing. CRM, invoicing, scheduling, project management — these were categories because capability was siloed. You bought a CRM because it was the specialised tool for managing customer relationships. You bought a project management platform because that's where project data lived.

Agents don't respect category boundaries. They complete tasks across systems, pulling whatever data they need, writing whatever outputs are required. The workflow becomes the product. The category becomes a data source at best, irrelevant at worst.

The CRM you're evaluating for a three-year contract may be legacy spend by year two. The project management platform you're renewing may be half-redundant by the time the renewal hits.

This changes how you think about SaaS procurement. Long-term contracts on point solutions are increasingly high-risk. Shorter terms, more flexibility, and a strong preference for tools with open APIs and data portability — these aren't nice-to-haves. They're how you avoid getting locked into software the market has moved past.

Before you sign any major SaaS contract this year, ask: would an agent workflow make this category optional within 36 months? If the answer is possibly, negotiate accordingly.

The autonomy question is a design decision, not a policy debate

Every organisation deploying AI internally hits the same wall: how much do we let the agent do on its own?

Full automation creates speed and scale but breaks trust when it goes wrong. Full human oversight defeats the productivity gain. The answer isn't a policy — it's a design pattern. For each workflow you're automating, you need to make an explicit decision about where the human checkpoint sits.

A simple framework: automate everything with low-stakes, reversible outcomes and high-volume repetition. Add a human checkpoint wherever a mistake is expensive, visible externally, or legally material. Never fully automate decisions that affect people's employment, client commitments, or financial obligations without a review step.
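The framework above can be sketched as a small decision function. This is purely illustrative — the workflow attributes, thresholds, and return labels are assumptions for the sake of the sketch, not a standard or a library:

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    reversible: bool          # can the outcome be undone cheaply?
    high_volume: bool         # repeated often enough to justify automation?
    externally_visible: bool  # do clients or the public see mistakes?
    legally_material: bool    # touches employment, client commitments, or money?

def autonomy_level(w: Workflow) -> str:
    """Decide where the human checkpoint sits for one workflow."""
    if w.legally_material:
        return "human review required"   # never fully automate these
    if w.externally_visible or not w.reversible:
        return "human checkpoint"        # expensive or visible mistakes
    if w.high_volume:
        return "fully automated"         # low-stakes, reversible, repetitive
    return "human checkpoint"            # default to supervision
```

Run against a hypothetical use case — say, drafting internal status summaries (reversible, high-volume, invisible externally) — the function returns "fully automated"; anything touching client invoicing returns "human review required". The point isn't the code; it's that every workflow on your roadmap gets an explicit answer before go-live.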

The organisations that get this right build AI workflows that feel trustworthy — fast enough to matter, supervised enough to catch the failures before they land. The ones that get it wrong either move too slowly (trying to checkpoint everything) or burn trust publicly (automating too far).

Map your AI use cases against this framework before you deploy. It saves you from the reputational cost of becoming the case study you don't want to be.

Three decisions to make this year

First, audit your translation-layer roles before the market forces the conversation. Identify which delivery roles are doing synthesis work that AI will absorb, and start shifting those people toward judgment-heavy functions where the value holds.

Second, stop signing long-term point-solution SaaS contracts. Favour shorter terms, open APIs, and data portability. If a vendor can't articulate how they survive in an agent-first world, treat that as a procurement risk.

Third, define your autonomy framework before you deploy. For every AI use case on your roadmap, decide explicitly where the human sits in the loop — before go-live, not after your first incident.

The organisations that move on these three decisions in the next 12 months won't just manage the disruption. They'll outdeliver competitors who spent the same period debating whether AI is real.

It's real. The question is who acts first.

James Hallam
April 15, 2026