Beyond Automation: Raising Executive Capability in our AI Adoption Strategy

I recently published a piece in Lian He Zao Bao, which is Singapore’s Chinese language newspaper. Learned that most of my friends couldn’t read it – so here it is translated to English!

===

In his recent Budget speech, Prime Minister Lawrence Wong signalled deeper national investment in artificial intelligence – from infrastructure and enterprise adoption to workforce development.

The direction is clear. AI will shape Singapore’s next phase of economic competitiveness.

But as organisations accelerate adoption, a quieter question is emerging. Why does adoption feel uneven, despite investment and visible productivity gains?

A product manager we know in Silicon Valley described how AI now allows him to complete work that previously required three to five people for research and cycles of iteration. Then he added, almost casually, “If I keep being successful at this rate, my own role will be redundant next year.”

When AI is framed primarily around efficiency – faster output, leaner teams, measurable productivity gains – individuals struggle to see how their long-term value increases within the system.

If improved performance appears to reduce one’s own relevance, it is rational to hesitate.

Raising everyone’s executive capability is the goal

Improving productivity is a legitimate starting point. AI can compress drafting cycles, accelerate research and reduce routine workload. These gains matter.

The more consequential question is whether AI adoption is also strengthening executive capability across the organisation.

By executive capability, we do not mean job titles. We mean the cognitive responsibilities traditionally concentrated at senior levels – defining problems clearly, examining assumptions, connecting information into coherent context and deciding what should be done next.

AI can enhance these capabilities. Or it can leave them unchanged.

In some organisations, AI is used primarily to retrieve and refine output. A prompt is entered. A response appears. If it reads well enough, it is forwarded. Throughput improves, but the level of thinking remains largely the same.

In other settings, AI is used to refine reasoning. Individuals treat it as a second brain — an external space where they test assumptions, generate counterarguments, compare alternative framings and surface unintended consequences before presenting work.

Used this way, AI becomes a structured thinking companion. It does not replace executive capability; it strengthens it. Each interaction pushes individuals to define problems more precisely, examine their own assumptions, connect information into coherent context and decide what should be done next.

In one consulting company we worked with, AI was initially adopted by the head of strategy consulting to provide quick responses to repeating questions from juniors. Now, the team uses AI to transcribe client calls, helping team members analyze and reflect on their performance during the call, and giving juniors a sparring partner before they escalate to their manager. Conversations start at a higher level because junior team members arrive with refined thinking.

The quality of strategic thinking across the team is compounding in a way that was not previously possible – and senior managers now spend less time training their team members.

The productivity gains remain. But alongside them, executive capability has strengthened.

That compounding effect – efficiency combined with stronger executive capability – is what makes AI transformational rather than incremental.

Where Institutional Incentives Matter

If AI can enhance executive capability, then training and evaluation frameworks must reinforce that outcome.

Training that focuses only on tool proficiency will produce efficient operators.

Training designed to strengthen executive capability – clearer problem definition, stronger contextual reasoning and disciplined evaluation of options – will cultivate individuals who think at a higher level.

If AI success is measured solely by time saved, costs reduced or headcount optimised, those outcomes will dominate attention. But they do not show whether human capability is rising.

Alongside productivity metrics, we might ask: Are decisions improving? Are more individuals exercising structured reasoning? Are employees leaving roles with stronger capability to define and shape work than when they entered?

Those measures signal whether AI adoption is compounding long-term workforce resilience.

SkillsFuture: From Getting Jobs to Building Job Creators

Singapore’s SkillsFuture movement has long focused on helping workers remain relevant as industries evolve. That discipline has underpinned national adaptability.

When routine analysis and synthesis can be automated at scale, the economic premium shifts upward – toward those who can define problems, identify opportunities and shape new directions of work.

Remaining employable in such an environment cannot mean merely aligning with the next predefined role. It increasingly means being able to shape work itself, where the work might be done by another person or an AI agent.

This does not mean everyone must become a startup founder. It means cultivating the ability to recognise unmet needs, assemble tools and collaborators, and experiment responsibly within or beyond existing structures.

AI, introduced with the right expectations, can accelerate this shift. When individuals use it to explore alternatives, test reasoning and prototype ideas quickly, they strengthen the capabilities that determine long-term economic value.

Even when roles evolve or disappear, individuals who have developed executive capability are not solely dependent on re-employment pathways. They are better equipped to define their next contribution.

A National Design Choice

Singapore has no natural resources. Our enduring advantage has always been the capability of our people.

As we invest in AI infrastructure, we are making a technological commitment. We should also explicitly commit to shaping the level of capability we expect across our society.

We can treat AI primarily as a productivity engine, optimising tasks and compressing labour.

Or we can treat it as a mechanism to strengthen executive capability at scale – expecting individuals at every level to define problems clearly, connect context rigorously and take responsibility for shaping outcomes.

In the AI era, our competitiveness will depend not only on technological infrastructure, but on how widely we strengthen executive capability – and whether our systems give people both the expectation and the incentive to exercise it.

That is the deeper design question beneath AI funding. And it will determine not just how efficient we become, but how resilient and entrepreneurial we become.

===

Learn more and express interest in our AI Second Brain course for leaders here.

Karen Tay is Founder and CEO of Inherent, a leadership and growth consultancy which helps organisations and individuals navigate technological and strategic change.

Stephanie Sy is Founder and CEO of Thinking Machines, an AI and data firm that helps organisations make better decisions through artificial intelligence and data platforms.

A New Norm: Using AI as Your Second Brain, Not a Shortcut

By Stef Sy and Karen Tay

Me: “ChatGPT, help me improve this draft? Now try again. Give me a few more options.”

Also me: “AHHHH! Too many options, too much information, wrong tone! Wait, what am I trying to say, anyway?” slams computer shut and takes out a piece of paper to think.

Most people, like me (Karen), interact with Large Language Models (LLMs) this way, treating them like a slot machine – we drop in our coins (input), pull the lever, and out comes a “better” output. Voila!

Let’s not deny that this approach brings huge productivity gains. But we also experience frustration: information overload without clarity, nice-sounding phrases lacking precision.

On the other side of the fence are technical people like me (Stef), who are experiencing something magical in how LLMs help my personal growth and productivity – but having a really hard time explaining this mental shift. “Oh no, what are you doing? Just telling it to ‘try again’? You have to relate to your LLM like a hyper-intelligent staff member who keeps growing… not a slot machine!”

The growing divide

At Davos, Chris Lehane from OpenAI referred to this divide as “the growing capability overhang… a widening divide between those who are able to use advanced AI tools deeply and productively, and those who aren’t.”

This gap impacts organizations, too. In our work with organizational leaders, we notice growing frustration with current paradigms of using LLMs. While work may “get done faster”, thinking might be less clear, details get missed and judgments are being escalated upwards.

When the workforce treats LLMs as a “slot machine” for productivity, it can unintentionally produce cognitive laziness and uniformity – traits which don’t serve an organization’s long term capability.

Developing a norm for using LLMs

In November last year, we (Stef and Karen) came to the topic from different places but landed on the same question: Is there another way? Can we develop a different norm in using LLMs – one which sharpens and strengthens humans?

As we developed this idea, we agreed on two ways LLMs should be used as a norm:

First, we should treat LLMs as long-term collaborators, not slot machines. If you think about your best working relationships, they were probably not one-way transactions. They were mutually sharpening, dynamic, and sometimes surprising in how they challenged you. The same should be true of your relationship with LLMs.

Second, we should treat LLMs as amplifiers of our unique identity – our second brains. While the debate is ongoing, we do not believe there are globally correct judgment calls for most knowledge work, especially within organizations. Individual judgment, along with immediate context that no model can capture, will remain crucial. Humans should use LLMs to clarify and amplify their unique judgment, values and voice. Diversity, not uniformity, is the goal.

Getting to work on your AI Second Brain

Now, the question is: how do we build this new norm? And can we close the technical vs non-technical divide in the process?

Below, we cover three practical paradigm shifts that will sharpen and strengthen you as you use LLMs. These are your first steps towards building an “AI second brain”.

Don’t just query AI, get AI to interview you

First, imagine an AI second brain which makes your best decision-making and communication patterns transparent to you. It helps you clarify your unique voice, style and judgment as you use it.

How? This is the first paradigm shift: most of us “interview” AI for the knowledge we need. Instead, get AI to interview you for the knowledge it needs to do its job of extending your influence.

AI can be a dynamic interviewer, much like a skilled Chief of Staff or Executive Coach, who helps you extract your best work and decisions and identifies the patterns that make you the leader you are. We have out-of-the-box “preference interviewer” prompts which you can feed to AI, to help it extract your decision-making, tone and voice.

Don’t just seek “improvements”, train AI to channel your unique identity

Second, imagine an AI second brain which channels your unique identity in its decision-making considerations and communications.

How? This is the second paradigm shift: most of us ask AI to help us “improve” generically, or intuitively. As AI is built on general standards of what “good” looks like, it will always fall short. Instead, train your AI system to be an extension of your unique identity, helping it get more and more precise at channeling your decision-making and voice.

One leader we know has eight “second brain*” AI systems for different domains – business strategy, sales, finance and operations – each with his specific guidance on how to problem-solve, which sources to go to, and how to handle sticky situations.

(*Eight specialized AI systems, each trained on domain-specific decision patterns.)

Don’t stick to one-off interactions, create a continuous feedback loop

Third, imagine a personal AI system which sharpens your thinking and meta-cognition each time you use it. It can observe you (with your permission), highlight blind-spots in your approaches, and give you feedback based on your growth goals… helping you improve as you use it over time.

How? This is the third paradigm shift: most of us use AI in one-off interactions. Instead, use it as an observer: you decide what you want to improve. You decide what AI observes such as your leadership meeting transcripts or personal communications. You decide how it pushes back on you, highlighting your blind-spots and nudging you towards improvement.

For example, one CEO wanted to improve her direct communication skills. With consent, she records her leadership meetings and feeds the transcripts into AI. Her personal AI system gives her feedback on her improvements over time, with nudges on how she could have re-phrased her comments. Feedback goes both ways: give your AI Second Brain feedback and opportunities to reflect on what it has learned – its precision improves as you tweak its context.

Getting started

Just one or two years ago, creating an AI Second Brain required coding knowledge. Today it is easily accessible to non-technical people – the only barrier is understanding, and of course, choice.

If this intrigues you, we have learned that the foundations of an AI Second Brain can be built within two-and-a-half hours for non-technical non-coders: not prompting tips, but an actual system for using AI to channel your unique identity, sharpen your thinking and close your blind spots.

Come design your second brain with us – join our waitlist here and we’ll notify you about upcoming sessions (both online and in-person)!


About the authors

Stef Sy is founder of Thinking Machines Data Science, a Philippines and Singapore consulting team that helps organizations design and build AI apps. Thinking Machines is an OpenAI Partner, helping organizations adopt and transform with GenAI.

Karen Tay is Founder of Inherent, a global coaching and learning consultancy, which helps leaders and organizations navigate transition with empathy and strategy. Drawing on research-backed methods and experience across Silicon Valley and Singapore, Inherent supports leaders through corporate programs, CEO advisory, and coaching for an AI-driven economy.