Part 1 of Series

The Apprenticeship Conundrum

Confronting Our Own Personal Monster

"It was on a dreary night of November that I beheld the accomplishment of my toils. With an anxiety that almost amounted to agony, I collected the instruments of life around me, that I might infuse a spark of being into the lifeless thing that lay at my feet..."

— Mary Shelley, Frankenstein (1818)

Just as Victor Frankenstein faced a fearsome storm on the night he brought his creation to life, we find ourselves amidst a tempest raging about the future of apprenticeship for high cognitive content professionals—accountants, consultants, lawyers, marketers, medical professionals, and more. The challenge is complex and multifaceted, but at its core, it boils down to a fundamental truth: without the relentless cycle of doing, learning, making mistakes, and correcting them, our talented young professionals will never evolve into the high-performing experts of tomorrow.

In the mix of all this societal-level disquiet, I'm going to start in a surprising place: ourselves.

Like Mary Shelley's Victor Frankenstein, we have created a new world of capabilities that vastly expand our human power through LLMs and other generative AI tools. And like the frightened villagers' reactions to Dr. Frankenstein's Creature, we have reacted to this transformational capability with fear, revulsion, regret, and rejection.

An Inevitable Force of Change

And yet, there have been many breathless studies published by leading firms, academic institutions, and government agencies about the extent of the impact that these new technologies will inevitably have on our economy and workforce. My former colleagues at Accenture and McKinsey, for example, estimate the economic impact of widespread adoption of these tools at between 10 and 23 trillion USD for the world economy through 2040. These studies suggest that the rise of entirely new tech sub-industries, the increased speed of innovation cycles, and the transition of human labour to higher value-added tasks will be central to realizing this enormous economic windfall.

Human labour is also forecast to change substantially. McKinsey suggests that nearly half of today's work will be automated in the three decades between 2030 and 2060. Accenture is even more granular, estimating that roughly 40 to 45 percent of all working hours across the US economy will be impacted by these tools by 2038.

The Core White-Collar Disruption

What is important for this conversation is that the basic professional motions of early white-collar careers are being dislocated.

Our early years—across accounting, consulting, financial services, insurance, legal, marketing, media, medicine, and private equity, amongst others—were largely consumed by collecting, aggregating, summarizing, analysing, synthesizing, drafting, and re-presenting information from a variety of sources into something of real impact for clients, enterprises, governments, and/or consumers. For example, I vividly remember one extremely senior McKinsey partner providing me with four pages of detailed handwritten feedback on a 110-slide sourcing strategy for temporary labour. I'm sure most of you have similar memories seared into your brains.

Indeed, many of the firms that we all work in and with have been built on apprenticeship pyramidal structures that take younger colleagues on a journey to become 'master experts' in delivering this information processing and re-processing cycle.

However, as we saw in my prior post, Dr. Mollick at Wharton demonstrated in his study with BCG that, in many cases, lower-performing and/or more junior-tenured colleagues perform worse on their own than the LLM does unassisted, beyond supplying the initial prompts. This raises our fundamental conundrum: in a world where the machine delivers better than (at least) the lowest rungs of our pyramidal apprenticeship model, why would you ever hire and train people in the same way we have been? And if we don't hire these people, how do we ever get the next generations of master experts?

After dozens of conversations, lots of reading, and a bit of my own middle school parenting, I have tentatively concluded that many of us are approaching this conundrum the wrong way round. We are starting the conversation from where we came from and where we are now, rather than flipping our perspective future-back to what the next generation will really need from us to succeed: not in our companies of today, but in the companies of tomorrow.

Barriers to Modernising Apprenticeship for Our AI Era

At least five things are material barriers to us systematically evolving our organizations' early talent development model to be better fit for our AI-augmented future:

1. Anxiety that what made us successful senior professionals isn't what will make the next generations successful.

Many great professionals have risen through the ranks of their firms because of their executional task perfection. We wrote the most perfect legal or marketing briefs. We pulled apart the financial statements most deeply. We summarized more interviews, better and faster, than anyone else.

Already, LLMs can do many of these tasks as well as, or even better than, the best of us 'master experts'. Yet the capability and performance grids that we use to evaluate our junior colleagues' performance are now out-of-date and possibly even laughably quaint: already, right now, today.

2. Fear that we ourselves, as organizational leaders, are rapidly going to become less relevant, or even irrelevant, professionally.

Within many firms, there are often only a few 'intrapreneurs'. The opportunity cost of innovation and building new things in many of our sectors used to be very high. Therefore, firms across these high-value, white-collar professions tolerated only a limited number of leaders who really pushed the boundaries of domain, delivery, or governance innovation.

In this environment, it is not surprising that firms put natural limits on the number of builders, versus executors, elevated through the ranks. Yet pure execution is going to be relatively much less important in the future than it is today.

3. Ignorance about what these new tools can do.

Too many senior professionals are still unaware of the full capabilities of LLMs and other generative AI tools. This lack of knowledge can lead to underutilization or misapplication of these technologies. Without a clear understanding of how these tools can enhance productivity and innovation, professionals may continue to rely on outdated processes, approaches, and tools.

4. Squeamishness that somehow these new tools are 'cheating'.

For many professionals, academic training in the natural or social sciences or the humanities emphasized high-quality, individual outputs. And you had to excel in creating these artifacts even to get on the white-collar professional ladder.

We clung quietly to the belief that our individual knowledge was valuable. LLMs overturn this knowledge premium. It feels like cheating. It kinda is 'cheating' in our traditional academic sense.

But it is also an enormous leap forward in productivity from days spent in archives categorized by the Dewey Decimal System, down to hours spent googling the internet for sources to… ten minutes of great prompt writing for a deep search agent to scour all available sources and give you a well-articulated answer.

5. Concern that the quality of output will be undermined because the physical actions we take actually help our thinking processes.

This fear is well-grounded in how the human brain-body connection works. Whether it is motor-sensory integration, external visualization, or enhanced recall through physical effort, there are many reasons why the physical act of performance (particularly handwriting) demonstrably improves our output quality.

As younger tenured professionals, we often had our senior colleagues 'bleed ink' over our memos and presentations—and then we physically corrected these mistakes. And both the mentor's and the mentee's thinking quality benefited from it.

Physician, Heal Thyself First

As we navigate the complexities of integrating AI into the operating and talent model of our firms, it is crucial to confront our own fears and misconceptions. The barriers we face—anxiety about changing success metrics, fear of professional irrelevance, ignorance of AI capabilities, squeamishness about perceived 'cheating,' and concerns about the quality of output—are deeply rooted in our human nature and our hard-won professional experiences.

However, just as Victor Frankenstein had to confront the consequences of his creation, we too must face the reality of the transformative power of AI. By acknowledging and addressing these barriers, we can begin to leverage these tools systematically and productively within our organizations.

In the coming posts, I will delve deeper into how we can adapt our apprenticeship models to better suit this fast-evolving landscape. We will explore what skills we need to apprentice, how we should approach that apprenticeship, what skills we will value and evaluate, and how we can build information-intensive, high cognitive content firms.

The journey ahead requires us to be open-minded and proactive in embracing change. By doing so, we can ensure that the next generation of professionals is not only equipped with the necessary skills but also capable of thriving in a world where AI plays a significant role. Let us begin this journey by first confronting and healing ourselves, understanding our own biases and fears, and preparing to lead the way in this new era of talent development.

Disclaimer: These views are my own and reflect no other organization. They are current today but likely to evolve rapidly as our world, markets, and technologies do. Comments are welcome but please be constructive and civil – we are all trying to work out answers to this new world together!

Nota Bene: A friend asked me if I write these posts or does an LLM! I write all the words you see above. I do ask an LLM to critique it for me, identify any grammar errors, and fact-check my references. But the words all remain my own.