I recently published a piece in Lianhe Zaobao, Singapore's Chinese-language newspaper. I learned that most of my friends couldn't read it – so here it is, translated into English!
===
In his recent Budget speech, Prime Minister Lawrence Wong signalled deeper national investment in artificial intelligence – from infrastructure and enterprise adoption to workforce development.
The direction is clear. AI will shape Singapore’s next phase of economic competitiveness.
But as organisations accelerate adoption, a quieter question is emerging. Why does adoption feel uneven, despite investment and visible productivity gains?
A product manager we know in Silicon Valley described how AI now allows him to complete work that previously required three to five people for research and cycles of iteration. Then he added, almost casually, “If I keep being successful at this rate, my own role will be redundant next year.”
When AI is framed primarily around efficiency – faster output, leaner teams, measurable productivity gains – individuals struggle to see how their long-term value increases within the system.
If improved performance appears to reduce one’s own relevance, it is rational to hesitate.
Raising everyone’s executive capability is the goal
Improving productivity is a legitimate starting point. AI can compress drafting cycles, accelerate research and reduce routine workload. These gains matter.
The more consequential question is whether AI adoption is also strengthening executive capability across the organisation.
By executive capability, we do not mean job title. We mean the cognitive responsibilities traditionally concentrated at senior levels – defining problems clearly, examining assumptions, connecting information into coherent context and deciding what should be done next.
AI can enhance these capabilities. Or it can leave them unchanged.
In some organisations, AI is used primarily to retrieve and refine output. A prompt is entered. A response appears. If it reads well enough, it is forwarded. Throughput improves, but the level of thinking remains largely the same.
In other settings, AI is used to refine reasoning. Individuals treat it as a second brain – an external space where they test assumptions, generate counterarguments, compare alternative framings and surface unintended consequences before presenting work.
Used this way, AI becomes a structured thinking companion. It does not replace executive capability; it strengthens it. Each interaction pushes individuals to define problems more precisely, examine their own assumptions, connect information into coherent context and decide what should be done next.
In one consulting company we worked with, AI was initially adopted by the head of strategy consulting to provide quick responses to juniors' recurring questions. Now the team uses AI to transcribe client calls, helping team members analyse and reflect on their performance during the call, and giving juniors a sparring partner before they escalate to their manager. Conversations start at a higher level because junior team members arrive with refined thinking.
The quality of strategic thinking across the team is compounding in a way that was not possible before – and senior managers spend less time training their team members.
Productivity gains remained. But alongside them, executive capability strengthened.
That compounding effect – efficiency combined with stronger executive capability – is what makes AI transformational rather than incremental.
Where Institutional Incentives Matter
If AI can enhance executive capability, then training and evaluation frameworks must reinforce that outcome.
Training that focuses only on tool proficiency will produce efficient operators.
Training designed to strengthen executive capability – clearer problem definition, stronger contextual reasoning and disciplined evaluation of options – will cultivate individuals who think at a higher level.
If AI success is measured solely by time saved, costs reduced or headcount optimised, those outcomes will dominate attention. But they do not show whether human capability is rising.
Alongside productivity metrics, we might ask: Are decisions improving? Are more individuals exercising structured reasoning? Are employees leaving roles with stronger capability to define and shape work than when they entered?
Those measures signal whether AI adoption is compounding long-term workforce resilience.
SkillsFuture: From Getting Jobs to Building Job Creators
Singapore’s SkillsFuture movement has long focused on helping workers remain relevant as industries evolve. That discipline has underpinned national adaptability.
When routine analysis and synthesis can be automated at scale, the economic premium shifts upward – toward those who can define problems, identify opportunities and shape new directions of work.
Remaining employable in such an environment cannot mean merely aligning with the next predefined role. It increasingly means being able to shape work itself – whether that work is carried out by another person or an AI agent.
This does not mean everyone must become a startup founder. It means cultivating the ability to recognise unmet needs, assemble tools and collaborators, and experiment responsibly within or beyond existing structures.
AI, introduced with the right expectations, can accelerate this shift. When individuals use it to explore alternatives, test reasoning and prototype ideas quickly, they strengthen the capabilities that determine long-term economic value.
Even when roles evolve or disappear, individuals who have developed executive capability are not solely dependent on re-employment pathways. They are better equipped to define their next contribution.
A National Design Choice
Singapore has no natural resources. Our enduring advantage has always been the capability of our people.
As we invest in AI infrastructure, we are making a technological commitment. We should also explicitly commit to shaping the level of capability we expect across our society.
We can treat AI primarily as a productivity engine, optimising tasks and compressing labour.
Or we can treat it as a mechanism to strengthen executive capability at scale – expecting individuals at every level to define problems clearly, connect context rigorously and take responsibility for shaping outcomes.
In the AI era, our competitiveness will depend not only on technological infrastructure, but on how widely we strengthen executive capability – and whether our systems give people both the expectation and the incentive to exercise it.
That is the deeper design question beneath AI funding. And it will determine not just how efficient we become, but how resilient and entrepreneurial we become.
===
Learn more and express interest in our AI Second Brain course for leaders here.
Karen Tay is Founder and CEO of Inherent, a leadership and growth consultancy which helps organisations and individuals navigate technological and strategic change.
Stephanie Sy is Founder and CEO of Thinking Machines, an AI and data firm that helps organisations make better decisions through artificial intelligence and data platforms.
