About one year ago, I made the leap to independence. The circumstances were simultaneously terrible (lumbar fusions are no joke) and exactly right (the window to do deep work on GenAI's impact on enterprise services wasn't going to stay open forever). I'm grateful to report I'm back to 100 percent now. It turns out you really should take the full recovery time, even when the itch to be working is strong.

More importantly, I'm profoundly grateful for the year that followed. To everyone who reached out personally during recovery… your messages mattered more than you know. To the clients who trusted me with their toughest strategic questions, from PE operating partners wrestling with portfolio company positioning to services firm boards navigating existential pivots to tech platform providers orchestrating specialist ecosystems… you've given me a masterclass in how GenAI is actually reshaping value creation. And to the former colleagues, new friends, and fellow practitioners who've shared their own pattern matching along the way… this work is infinitely richer for those conversations.

A year ago I had a thesis about how GenAI would reshape enterprise services and what it meant for PE value creation in this sector. I've now tested that thesis across engagements spanning consulting firms, information services, telecom services, technology platform players, and internally with PE sponsors themselves.

So here's the first year's accounting: What held up? What collapsed? What surprised me?

If you're an operating partner, board member, or CEO wrestling with whether your GenAI investments are theatre or transformation, this is what the pattern matching looks like after a year of battling it out in the field.

The Core Thesis: What I Got Right (And Still Believe)

My fundamental view on GenAI in enterprise services has been remarkably stable. Here's what I said then, and what the evidence now supports.

The Product-Platform Shift Is Real

Then: Services firms need to move from pure projects to productised, repeatable assets: diagnostics, tools, and playbooks that iterate rapidly through 1.0 → 2.0 → 3.0 releases.

Now: Every thriving engagement I've worked on has this pattern. One client started with coaching conversations about AI readiness, evolved to a structured diagnostic platform with roadmap recommendations, and is now on version 3.0 with error logs, benchmark databases, and quarterly releases. The firms still doing only bespoke work are the ones whose board conversations turn uncomfortable.

Why it matters: PE sponsors increasingly value services businesses the way they value SaaS… recurring client revenue, defensible IP, measured unit economics. Pure labour leverage is getting marked down. I've seen this directly in portfolio company valuations. The businesses with repeatable, AI-augmented diagnostic tools are commanding premium multiples. The ones still selling undifferentiated hourly labour are facing compression.

Operating Model Rebuilding, Not Tool Bolting

Then: Winners will redesign the full stack (data → models → workflows → client journeys), not just deploy ChatGPT Enterprise and hope.

Now: The gap between "we have AI tools" and "we have an AI-ready operating model" is where most value destruction happens. I've worked with firms building genuine 5-year transformation plans that rewire pyramids, pricing models, delivery workflows, and talent development systems. The contrast with firms that bought Microsoft Copilot licences and declared victory is stark.

Why it matters: Boards and sponsors can now smell the difference between genuine transformation and vendor theatre. The "show me the P&L impact" conversation arrives faster than firms expect. One client's board gave them six months to demonstrate margin improvement or face hard questions about their AI investment thesis. They weren't bluffing.

The Apprenticeship Conundrum Isn't Going Away

Then: GenAI automates the grunt work that traditionally taught our junior colleagues how to think. Firms must deliberately engineer new learning systems.

Now: This is the hardest, slowest-moving problem. No one has cracked it at scale yet. But the firms taking it seriously — building practice-wide training systems, redefining what "junior" means, creating explicit "how we develop judgement" curricula — are the ones not quietly panicking about their talent pipeline. The firms still running 2019 graduate programmes are haemorrhaging their best people within 18 months.

Why it matters: You can buy tools. You can't buy a 10-year cohort of developed talent. The apprenticeship rebuild is the long pole in the tent. As I explored in The Apprenticeship Conundrum, this isn't a training problem. It's a fundamental rethinking of how professional capability develops when the machine handles much of the execution and humans must focus on judgement, orchestration, and client navigation from day one.

Differentiation Moves to Judgement and Orchestration

Then: IQ-style analysis commoditises. EQ, critical domain and industry judgement, client navigation, systems design, and ecosystem orchestration become the scarce skills.

Now: My technology platform work made this visceral. I spent months helping a firm orchestrate specialist partners across a highly technical domain. Their technical analysis was table stakes. What got valued was framing the problem correctly, orchestrating the specialists, translating across organisational boundaries, and keeping the client's nerve steady through uncertainty. The firms that could do that commanded premium rates. The ones that could only run the analysis were competing on price.

Why it matters: This shifts hiring, training, and leadership development priorities in ways most firms haven't internalised yet. The skill ladder is being rebuilt in real-time, and the firms that recognise this early are the ones building sustainable competitive moats.

What Evolved: From Theory to Battle-Tested Practice

My core thesis stayed stable, but where I focused and how I talked about it have shifted as I moved from conceptual problem-framing to operating-partner-level implementation.

Early Days: "This Time Is Different"

Early 2025 framing: Broad diagnosis. Why GenAI is structurally different from prior tech waves. High-level architecture around productisation, apprenticeship disruption, IQ/EQ shifts.

What I was doing: Writing thought leadership. Building frameworks. Talking to rooms of people about why they should care.

What I learned: Everyone nods. Very few act. Conceptual agreement doesn't move organisations. I could deliver a brilliant keynote about the future of professional and business services, get enthusiastic feedback, and watch absolutely nothing change six months later.

Middle Phase: Operating Model Granularity

Mid-2025: A shift to the concrete, gritty implications of this disruption: pyramids, skill ladders, pricing models, margin structures. And I moved hard into building real, GenAI-powered assets: diagnostic frameworks, repeatable assessment methodologies, services-as-software research.

What I was doing: Working inside client organisations. Helping them design 5-year plans, staffing models, technology roadmaps. Getting specific about "what would you have to believe" for different futures. Building board-ready presentations that modelled margin trajectories under different GenAI adoption scenarios.

What I learned: This is where the work lives. Boards want granularity, not vision. "Show me the three-year margin trajectory under different GenAI adoption scenarios, with supporting assumptions" beats "AI will change everything" every single time. I stopped writing 40-slide decks about future implications and started building 12-slide decks with three-year P&L bridges.

Current Phase: Board-Level Proof and Capital Allocation

Late 2025 – early 2026: Framing for PE sponsors, boards, and leadership teams. Evidence-backed GenAI impact stories. Ecosystem orchestration. Phased rollouts with clear KPIs and staged investment thresholds.

What I'm doing: Board presentations. Investment theses. Operating partner advisory. Helping firms translate their AI experiments into capital allocation narratives that sponsors actually fund. Building the bridge between "we think this could work" and "here's the measured impact with benchmark context."

What I'm learning: The conversation has professionalised faster than I expected. "Should we do GenAI?" is dead. "Which bets, in what order, with what proof points, on what timeline, with what downside protection?" is the live question. And boards expect answers grounded in comparable company data, not aspirational vendor case studies.

What Surprised Me (And What I Got Wrong)

A year of real work produces pattern breaks. Here's what I didn't see coming and where my initial thesis needed correction.

The Speed of Board Sophistication

What I expected: Boards would lag. They'd need education on GenAI basics before they could evaluate strategic choices.

What actually happened: Boards got smart fast. By mid-2025, every services firm board I've worked with has at least one member who's seen GenAI ROI models from three other portfolio companies. They're asking second-order questions about data ownership, workflow redesign dependencies, and absorption capacity bottlenecks. The bar for "credible AI story" rose much faster than I anticipated.

Implication: The window for experimentation without accountability is closing. Firms that haven't moved from "we're exploring" to "here's our measured impact with benchmark context" are getting uncomfortable questions. I watched one board give management 90 days to demonstrate P&L impact or face a hard reset of their AI investment thesis.

Tool Sprawl Is Worse Than I Thought

What I expected: Firms would over-buy tools but eventually consolidate around platforms.

What actually happened: As I explored in Tools, Tools, Tools, the proliferation continues unabated. Firms have 15+ AI tools with single-digit adoption rates, zero integration, and no proper GenAI governance. I worked with one client that had 23 different AI subscriptions across the firm. The CFO couldn't name half of them. Total spend: over £2 million annually. Measured productivity impact: effectively zero.

Implication: Operating partners, business MDs and CIOs need to get ruthless about platform strategy and sunsetting experiments that didn't land. The conversation has shifted from "what tool should we buy?" to "how do we not drown in our own tool stack?" I'm now spending as much time helping clients sharpen the use of their current AI portfolio as I am helping them evaluate new capabilities.

The PE Valuation Impact Showed Up Faster

What I expected: It would take 2–3 years for GenAI to materially affect services business valuations.

What actually happened: By late 2025, PE firms were already marking services businesses differently based on AI readiness. Businesses with defensible data assets, repeatable AI-augmented workflows, and measured productivity gains are getting premium multiples. Pure labour arbitrage models are facing compression. I've seen this directly in three exit processes where AI capability (or the lack of it) moved valuations by 1–2 turns of EBITDA.

Implication: "AI strategy" is no longer a nice-to-have for exit positioning. It's table stakes for maintaining valuation. Investment and operating partners who understand this are moving aggressively. The ones who don't are explaining uncomfortable mark-downs to their investment committees.

Different Service Archetypes Diverge Sharply

What I expected: GenAI would affect all service businesses similarly, just with different timelines.

What actually happened: The impact varies wildly by archetype. Advisory and consulting firms face the apprenticeship conundrum and margin pressure from commoditising analysis. Tech-enabled services (diagnostics, platforms) see margin expansion if they invest well and productise effectively. Labour-intensive BPO gets squeezed from both sides — clients expect AI-driven cost reduction, but the transformation requires a capital deployment model most BPO firms don't have.

Implication: PE portfolio construction needs to account for these divergent trajectories. Not all "services" are the same bet in an AI world. The operating partners who've figured this out are already rotating capital from labour arbitrage plays into IP-driven service platforms.

Where This Is Heading: The Next Year's Agenda

If this is what year one taught me, here's where year two's work needs to focus.

From Proof of Concept to Scaled Deployment

The gap: Most firms have successful GenAI pilots. Very few have scaled them across the practice. The transition from "this works in one team" to "this is how we operate enterprise-wide" is the next frontier.

My focus: Helping firms design and execute scaled rollouts with change management rigour, not just tool deployment. This means governance structures, training systems, workflow redesign, measurement frameworks, and staged investment thresholds. The operating model work, in other words.

The Talent Development System Rebuild

The gap: We all agree the apprenticeship model is broken. Almost no one is building the replacement learning infrastructure at scale.

My focus: Working with firms to design practice-wide GenAI training systems, new skill ladder definitions, and explicit "how we develop judgement" curricula. As I argued in The Apprenticeship Conundrum Part 2, we need to move from "doers" to "builders." That requires deliberate, systematic investment in new capability development pathways.

Board and Sponsor Translation Layer

The gap: The best practitioners understand their GenAI impact. They struggle to translate it into capital allocation narratives that resonate with boards and PE sponsors.

My focus: Building the bridge to evidence-backed stories, benchmark-driven positioning, and phased investment cases that connect operational changes to margin and growth outcomes. Helping management teams speak the language of value creation, not technology deployment.

True GenAI Cyborg Transformation

The gap: Many senior leaders at sponsors and portfolio companies talk a lot about the transformation going on around them. Very few are living what Ethan Mollick describes as the "GenAI Cyborg" life.

My focus: Providing education, learning, and insight directly to CxO teams and to sponsors' investment and operating professionals so they can deploy what they already have 15x better. Highlighting the critical decisions on the Target Operating Model, data engineering, and knowledge management approaches that will really matter. And opening senior leaders' eyes to what "increasing returns to expertise" really means for them and their organisations.

What It Means for You

If you're reading this as an operating partner, board member, or services CEO, here's the pattern I'd pay attention to:

For PE operating partners:

Your portfolio companies with defensible data assets and repeatable AI workflows will outperform. Phase-gated rollouts with clear KPIs beat big-bang transformations every time. Different service archetypes require very different AI strategies, so don't template your approach across the portfolio. And the valuation impact is already showing up in exit processes, so position early.

For enterprise services firm boards:

The "exploring GenAI" phase is over. Boards now expect measured impact and margin trajectories, not pilot announcements. Operating model redesign (not tool deployment) is where value lives. Talent development strategy is the long pole in the tent, so start building the "replacement apprenticeship system" now, because it takes years to compound.

For fellow practitioners:

The core thesis holds, but specificity matters more than vision. Board conversations have professionalised faster than expected. The work is shifting from "why AI matters" to "how we scale what works." And the pattern matching is getting richer as more firms move from experimentation to scaled deployment.

A year ago I had a theory about GenAI in private equity and enterprise services. Now I have a dozen case studies, a stack of board decks, and a refined view of where the leverage points actually are. The thesis held up. The details got sharper. The work got harder and more interesting.

To everyone who's been part of this journey — clients, colleagues, fellow practitioners, and friends who checked in during recovery — thank you. Year two starts now.

j

Disclaimer: These views are my own and reflect no other organisation. They are current today but likely to evolve rapidly as our world, markets, and technologies do. Comments are welcome but please be constructive and civil. We are all trying to work out answers to this new world together!

Nota Bene: A friend asked me if I write these posts or does an LLM! I write all the words you see above. I do ask an LLM to critique it for me, identify any grammar errors, and fact-check my references. But the words all remain my own. These posts take me a long, long time to write; particularly this one, as it's been a year of intense learning compressed into a few thousand words. I hope you found it useful.