"I Know Kung Fu": Expertise, IQ and EQ in the Age of AI

"I know Kung Fu."

With these immortal words, Neo (played by Keanu Reeves) introduced us to a transformative way for humans to process and internalize knowledge and skills. LLMs (Large Language Models) and other internet-connected tools hold a similar promise for us here in the 'real world'. The power of their output is seductive: it's slick, well-written, and cogent.

However, the differential power of these tools still depends (today) on operator capability. For the best operators, the returns to expertise, IQ, and EQ are (at least for now) far greater than for the average or even upper-quartile manager. Ethan Mollick, a leading academic at the Wharton School, has published multiple fantastic studies on the differential returns of these tools based on operator skills. If you do not regularly read or listen to his material – including his recent book Co-Intelligence – I highly recommend that you do.

One of his studies, done in conjunction with the Boston Consulting Group, measured the impact of these tools on many of the typical tasks of a strategy consultant, comparing consultants working with LLM assistance against those working without it. If you haven't read this study, then you should.

On second thought, don't read it.

Go to your favourite LLM right now (it doesn't matter which one) and ask it to summarize the study for you. Actually, go to two or three different LLMs and ask each of them to summarize it using exactly the same input prompt.

We'll wait. Seriously, go ahead, we'll be here.

Okay, welcome back! Hope you found that exercise interesting. What did you learn? How different were the outputs? How confident are you that you really know what this study says?

Expertise Matters!

What Ethan Mollick's study and other similar exercises show is that the performance impact of LLMs on operators' output speed and quality can vary dramatically. This study (admittedly a few months old – which is dog years in the GenAI world) showed that less-skilled performers gained the most from LLM assistance, while the gain for the very best performers assisted by the LLM versus the best performers without assistance was much more modest.

Did any of your LLM responses from your first prompt flag this nuance?

Okay, go back to your LLM and explore this nuance a bit more. If the answers don't surface Prof Mollick's term, "The Jagged Frontier," then prompt the LLM to explain it to you (if you don't already know it).

Operators with depth and experience can guide the LLM far more quickly and precisely than content novices, reaching useful output faster and more effectively. Why does this happen? Because expertise still matters! How?

  • Input / prompt quality: Even if the LLM is helping detail the output or shape deliverables, the quality of the input makes a material difference. If the task is new or very different from anything the organisation or the operator has done before, the LLM can struggle to provide 'expert quality' output.
  • Niche expertise: LLM training data is not consistently deep across fields. Many LLM engines have trained on the broad general corpus of human (often English-language) knowledge. In highly technical fields or other specialties, the expert operator can still exceed the LLM's knowledge base (at least for now).
  • Structuring: Humans have a unique ability to create structure on problems. While problem-structuring LLM models now exist, they do not always take the most efficient or effective route to solving problems (although again performance is improving rapidly).

Of course, there is also a risk that even experts will become 'lazy' or over-reliant on LLM support. Then, the quality of outputs may decline even as they arrive faster. But so long as experts stay engaged, there are meaningful reasons why this human knowledge will remain important.

Returns on IQ Also Increase!

Much as with the debate over the value of expertise, academics and practitioners are also engaged in a fight over the value of Intelligence Quotient (IQ) in the world of GenAI.

Some observers argue that since LLMs are inherently recombinatorial and probabilistic, they can never really invent anything 'new'. These partisans say that LLMs don't actually 'know' anything and therefore can't assign a deeper truth to the ideas they suggest.

Other observers take the opposite view: LLMs will eventually match and exceed even the highest-IQ humans. This position, sometimes cast as a journey toward 'Artificial General Intelligence', holds that LLMs will exceed the calculating potential of any of us.

However, more practically today, there are increasing returns to IQ when coupled with LLM assistance. For example, in a recent conversation with some friends considering a new startup idea, we pivoted through seven or eight different business model analogies to grapple with their exciting but complex business idea. Such highly lateral thinking and ideation are difficult to replicate solely with an LLM. However, the speed and accuracy at which each of these analogies can be tested was substantially improved by the assistance of (several different) LLM tools.

In short, the smarter and more lateral-thinking the inputs, the more interesting and compelling the outputs.

Great EQ Matters Even More!

Somewhat paradoxically, the rise of LLM and other GenAI tools creates even greater returns to Emotional Intelligence (EQ). Traditionally, high-cognitive-content roles like law, accounting, consulting, and product development often emphasized practitioners' IQ and content expertise over EQ. Even so, you could often see performance reviews and client testimonials call out professionals who showed unusual emotional deftness and tact.

LLMs make these emotion-management skills even more valuable for a few reasons:

  • Emotional context of responses: Framing the emotional context of a response is critical to maximizing impact for both individuals and organizations. The ability to prompt (or agentically engineer) outputs that will productively spur action becomes a differentiating skill.
  • Stakeholder experience: This becomes more central as LLMs provide a minimum acceptable analytical response to most questions. How users experience that response, in what tone and with what presentation, will shape how they react to the information provided.
  • Cognitive dissonance: People lie. Or, more gently, people will sustain cognitive dissonance far longer than an LLM will, because the organization is not emotionally ready to grapple with the implications of a negative result. LLMs dispassionately reduce or eliminate this dissonance when the available facts cannot support, or can no longer support, an agreed course of action.

Until agentic bots interact exclusively with other agentic bots, the peculiar eccentricities of human behaviour will remain important to navigate. I predict that the number of professionals skilled in human-machine interface, machine-enhanced organizational behaviour, and other similar disciplines will skyrocket.

The potential downside of not mastering the new EQ environment is significant. Trust in LLM output and agent autonomy can be undermined or outright dismissed, with catastrophic results for organizational competitiveness. Second-guessing the reliability of these capabilities can also paralyze an organization, allowing precious time to tick by as value and customers leak away.

"Show Me."

In Neo's iconic scene, Morpheus (played by Laurence Fishburne) challenges Neo to demonstrate that Neo not only knows Kung Fu but can apply it. At first, Neo struggles with the challenge – as simply knowing Kung Fu is not sufficient to win with it. Only through practice and application does Neo start to learn how to use his new knowledge effectively.

And here we are with GenAI. I have been tracking my personal and professional use of LLMs and other new tools. I use them roughly 75 to 125 times a day (yes, a day). It's not a Google search engine. It's not an Oracle of perfect knowledge. But it is an extraordinary force multiplier in how work can be completed. When used well, it should save hours and even days of time for many high cognitive content professionals.

We should not become complacent: these tools are still new, have shortcomings, and can be imperfect. In areas where you are expert, you will be much faster to spot these deficiencies. In new content areas, you can feel unjustifiably secure because you don't know what you don't know about the subject, and a poorly prompted LLM may not give you sufficient insight into important nuances. Expertise matters.

Even when an LLM is performing at its best, the quality of its output still depends on how inspired the input is. The still uniquely human ability to draw tangential analogies remains powerful. Our ability to invent entirely new frames of reference or vocabulary drives innovation and progress. These tools are catching up, and will keep catching up. But they haven't fully surpassed us yet. IQ matters.

And taken together, even when armed with the most nuanced and innovative solutions, people are still people. They react in unpredictable and context-dependent ways. To drive impact through organizations, people will need to translate, shape, and apply the recommendations of LLMs and agentic bots across human stakeholders. Making sense of these emotional modifiers will be an escalating need for some time yet. EQ matters.

We are living in a very exciting time, but we should not forget the increasing returns to Expertise, IQ, and EQ. They will matter.

Disclaimer: These views are my own and reflect no other organization. They are current today but likely to evolve rapidly as our world, markets, and technologies do. Comments are welcome but please be constructive and civil – we are all trying to work out answers to this new world together!

Nota Bene: A friend asked me whether I write these posts or an LLM does! I write all the words you see above. I do ask an LLM to critique my drafts, identify any grammar errors, and fact-check my references. But the words all remain my own.