Beyond Automation: Raising Executive Capability in our AI Adoption Strategy

I recently published a piece in Lian He Zao Bao, Singapore’s Chinese-language newspaper. I learned that most of my friends couldn’t read it – so here it is, translated into English!

=

In his recent Budget speech, Prime Minister Lawrence Wong signalled deeper national investment in artificial intelligence – from infrastructure and enterprise adoption to workforce development.

The direction is clear. AI will shape Singapore’s next phase of economic competitiveness.

But as organisations accelerate adoption, a quieter question is emerging. Why does adoption feel uneven, despite investment and visible productivity gains?

A product manager we know in Silicon Valley described how AI now allows him to complete work that previously required three to five people for research and cycles of iteration. Then he added, almost casually, “If I keep being successful at this rate, my own role will be redundant next year.”

When AI is framed primarily around efficiency – faster output, leaner teams, measurable productivity gains – individuals struggle to see how their long-term value increases within the system.

If improved performance appears to reduce one’s own relevance, it is rational to hesitate.

Raising everyone’s executive capability is the goal

Improving productivity is a legitimate starting point. AI can compress drafting cycles, accelerate research and reduce routine workload. These gains matter.

The more consequential question is whether AI adoption is also strengthening executive capability across the organisation.

By executive capability, we do not mean job title. We mean the cognitive responsibilities traditionally concentrated at senior levels – defining problems clearly, examining assumptions, connecting information into coherent context and deciding what should be done next.

AI can enhance these capabilities. Or it can leave them unchanged.

In some organisations, AI is used primarily to retrieve and refine output. A prompt is entered. A response appears. If it reads well enough, it is forwarded. Throughput improves, but the level of thinking remains largely the same.

In other settings, AI is used to refine reasoning. Individuals treat it as a second brain — an external space where they test assumptions, generate counterarguments, compare alternative framings and surface unintended consequences before presenting work.

Used this way, AI becomes a structured thinking companion. It does not replace executive capability; it strengthens it. Each interaction pushes individuals to define problems more precisely, examine their own assumptions, connect information into coherent context and decide what should be done next.

In one consulting company we worked with, AI was initially adopted by the head of strategy consulting to provide quick responses to repeating questions from juniors. Now, the team uses AI to transcribe client calls, helping team members analyse and reflect on their performance during the call, and giving juniors a sparring partner before they escalate to their manager. Conversations start at a higher level because junior team members arrive with refined thinking.

The quality of strategic thinking across the team is compounding in a way that was not possible before – and with less time spent by senior managers training their team members.

Productivity gains remained. But alongside them, executive capability strengthened.

That compounding effect – efficiency combined with stronger executive capability – is what makes AI transformational rather than incremental.

Where Institutional Incentives Matter

If AI can enhance executive capability, then training and evaluation frameworks must reinforce that outcome.

Training that focuses only on tool proficiency will produce efficient operators.

Training designed to strengthen executive capability – clearer problem definition, stronger contextual reasoning and disciplined evaluation of options – will cultivate individuals who think at a higher level.

If AI success is measured solely by time saved, costs reduced or headcount optimised, those outcomes will dominate attention. But they do not show whether human capability is rising.

Alongside productivity metrics, we might ask: Are decisions improving? Are more individuals exercising structured reasoning? Are employees leaving roles with stronger capability to define and shape work than when they entered?

Those measures signal whether AI adoption is compounding long-term workforce resilience.

SkillsFuture: From Getting Jobs to Building Job Creators

Singapore’s SkillsFuture movement has long focused on helping workers remain relevant as industries evolve. That discipline has underpinned national adaptability.

When routine analysis and synthesis can be automated at scale, the economic premium shifts upward – toward those who can define problems, identify opportunities and shape new directions of work.

Remaining employable in such an environment cannot mean merely aligning with the next predefined role. It increasingly means being able to shape work itself, where the work might be done by another person or an AI agent.

This does not mean everyone must become a startup founder. It means cultivating the ability to recognise unmet needs, assemble tools and collaborators, and experiment responsibly within or beyond existing structures.

AI, introduced with the right expectations, can accelerate this shift. When individuals use it to explore alternatives, test reasoning and prototype ideas quickly, they strengthen the capabilities that determine long-term economic value.

Even when roles evolve or disappear, individuals who have developed executive capability are not solely dependent on re-employment pathways. They are better equipped to define their next contribution.

A National Design Choice

Singapore has no natural resources. Our enduring advantage has always been the capability of our people.

As we invest in AI infrastructure, we are making a technological commitment. We should also explicitly commit to shaping the level of capability we expect across our society.

We can treat AI primarily as a productivity engine, optimising tasks and compressing labour.

Or we can treat it as a mechanism to strengthen executive capability at scale – expecting individuals at every level to define problems clearly, connect context rigorously and take responsibility for shaping outcomes.

In the AI era, our competitiveness will depend not only on technological infrastructure, but on how widely we strengthen executive capability – and whether our systems give people both the expectation and the incentive to exercise it.

That is the deeper design question beneath AI funding. And it will determine not just how efficient we become, but how resilient and entrepreneurial we become as a society.

===

Learn more and express interest in our AI Second Brain course for leaders here.

Karen Tay is Founder and CEO of Inherent, a leadership and growth consultancy which helps organisations and individuals navigate technological and strategic change.

Stephanie Sy is Founder and CEO of Thinking Machines, an AI and data firm that helps organisations make better decisions through artificial intelligence and data platforms.

Scaling Humanity at the Speed of AI

AI can scale intelligence. Leadership must scale humanity.

A managing director in a global consulting firm is doing all the right things to drive AI adoption. Yet it’s not paying off in results – and his people are frustrated.

“They understand the promise of AI,” he says, “but the tools are half-baked and often the opposite of productive.”

A bright young woman joined a tech company to protect children online. She’s struggling with the company’s new AI-first approach.

“Please don’t force me to use AI,” she says. “It simply doesn’t protect children the way we need to.”

An AI research engineer in a big tech firm in Silicon Valley laments:

“Instead of thoughtful bets, we’re running hundreds of experiments a day – throwing AI at a wall and seeing what sticks. It feels like we’re losing our purpose as a company.”

As I coach leaders and teams through AI transformation – from Silicon Valley to Singapore to London – these thoughtful but quiet voices inevitably surface.

And they are worth paying attention to.

As leaders driving AI adoption, what kind of organizations are we creating? Are humans truly becoming better at what we’re uniquely good at? Or are we moving in the opposite direction?

In this article, I cover common leadership blind spots in AI transformation, and the remedies.

Blindspot #1: Rapid AI adoption is inherently good

Truth: This is more about values than you think

In boardrooms, success in AI adoption is measured by speed: more experiments, faster rollouts, higher revenue, cost-cutting.

What leaders often miss is that AI doesn’t just bring tools – it carries an ideology: that efficiency and productivity are the highest good.

Left unbalanced, that ideology reshapes what organizations pursue, reward and value.

Earlier this year, I worked with one of the world’s leading journalism organizations. Top leadership brought us in because they feared the newsroom was too slow and resistant to change.

In my time with their global team of journalists and staff, it became evident that absolutely no one was anti-AI. They simply did not have a space to articulate values which felt threatened by the push towards productivity and efficiency.

“I joined this company because our journalism stands for intellectual rigor and detail. Though AI could summarize a thousand-page report and cut my process by hours, I’m afraid I’ll miss out on the color and detail which makes our reporting unique. Do we really want to lose that?”

Their resistance was not due to slowness or tradition. Like the young lady working on online child safety, a core value – a reason for joining the organization – was at stake.

This is the first balance: As AI brings its powerful ideology of efficiency, each of us needs to articulate, acknowledge and double down on personal and organizational values.

Leaders can unlock wiser adoption by asking:

  • What do we truly value about how we work?
  • How does AI threaten or support those values?
  • How do we adopt AI in ways that strengthen our mission?

As we closed our “Art of Bridging” session, another journalist reflected: “By naming where AI threatens our values, we can now reframe — how to uphold our values while finding better ways of doing things.” The group walked away excited and activated, with dozens of new ideas to try.

Have that conversation. Don’t assume speed is progress.

(AI carries a powerful ideology that efficiency and productivity are the ultimate values.

Recognize this, and ask which other organizational values need to be reinforced.)

Blindspot #2: AI skills are the silver bullet

Truth: There’s a much more foundational skill to impart – human agency

At a conversation with women in business, a preschool owner shared that she introduced AI learning to five-year-olds. The kids lasted minutes – they were far more interested in physical building blocks.

It was a funny story, but I think the kids understood something we forget: foundations come first.

In building organizational capabilities in AI, it is easier to build new tools or technical skills. But we all know that tools, especially AI tools, evolve. Foundations endure.

So what are the foundational skills in the AI era? In short, the skill of human agency.

In the AI age, human agents will use AI agents. If you are a human who waits for instruction and direction, your value will fall.

If you have the skill of agency – a view, a goal, a theory of change – AI will supercharge everything you do.

Here’s my simple equation for human agency:

Agency = Personal mastery + domain understanding

  • Personal mastery is the motivation, self-belief, and adaptability to navigate rapid change.
  • Domain understanding grounds that agility in industry context and judgment.

Human agency is THE foundational skill. Everyone – from intern to executive – needs the ability to set direction for domain and self:

  • What am I here to build in this next chapter?
  • What’s my vision for this domain?
  • What human–AI partnership will enable it?

Having lived in Silicon Valley for almost a decade (2016-2025), and now in Singapore, I’ve observed that this is one of the biggest differences in talent mindset. In the Valley, every person sees themselves as an agent. In more hierarchical cultures, it is more common to look to leaders or organizations to define these for us.

The first balance was about values. Balancing the force of AI’s efficiency ideology with the force of your organization’s other values.

This is the second: as AI agents become stronger, humans must become stronger agents too.

Don’t just push your people to adopt AI tools. Make sure they have the skills to be human agents who can use AI agents effectively.

(Put the foundational skills in place. Human agency is the basis of all growth)

Building organizations that welcome (not just cope with) change – HOW?

Generative AI is reshaping work faster than any system, or individual, can comfortably adapt. Where re-organizations once took place every six months, they now happen continuously.

This calls us to increase our level of ambition when it comes to managing change.

  • Where we now ask: “How do we help people cope with change?”
  • We need a reframe: “How do we build a system that welcomes change, and thrives amidst it?”

This requires a paradigm shift in the way we build organizational capabilities.

1. Developing agency across the organization

Most leaders want “alignment” – and try to achieve it by cascading directions.

In rapidly changing organizations, alignment cannot be handed down. By the time it trickles through the layers, the information is outdated. People get confused. Burned out. Resentful.

Today, alignment depends on every person having a clear sense of purpose and agency in their work. That means activating each player to their own “why” for this chapter – and putting it in their court to connect this “why” to the organization’s evolving mission.

In an era of rapid organizational change, people and institutions will continually re-design their alliance with each other.

Rather than a career path, it is a two-sided conversation: Are we still the right fit for each other’s next stage?

Practically, every manager and team member needs the skills and space to:

  • Clarify their purpose and priorities for this chapter.
  • Connect their unique skills to the organization’s evolving mission.
  • Design the right partnership between human judgment and AI leverage.
  • Know when to evolve — when a new season or role would unlock greater contribution.

When people have the foundational skill of agency, alignment stops being something leaders enforce – it becomes something teams co-create.

This new kind of career literacy is what we teach in Navigating Shifting Seasons – equipping people and managers to navigate continual transitions with clarity, maturity, and mutual respect.

(Creating a system of agency naturally drives alignment. Don’t try to drive alignment from the top down.)

2. Building true organizational resilience

The second task of leadership is to ensure that organizations have inbuilt resilience — the ability to flex and rebound through change.

In living systems, pressure without capacity causes breakdown. The same is true for organizations. When the rate of change increases, the ability to process and recover must rise with it.

Most workplaces already understand physical safety – fire drills, ergonomic chairs, first-aid kits, and a ratio of trained responders to staff.

In the age of AI transformation, we need the equivalents for psychological and emotional safety, to help people thrive amidst the strain of constant change, the uncertainty of new tools, the pressure to adapt before understanding.

What are the new shock absorbers and trampolines – systems that help people process stress, recover faster, and bounce back stronger after disruption?

At Inherent, we’ve developed and published research on one such system: Workplace First Responders. These are not managers or specialists, but peers trained to recognize transition stress, offer empathy, and help their peers move towards agency and strategy. These first responders have boosted their peers’ motivation (+43%), focus and productivity (+50%), and confidence (+60%), and reduced burnout (-34%), in statistically significant ways, as published in our research with Princeton University.

By embedding this capacity, organizations normalize uncertainty, give teams shared language to respond, and create support structures inside the work system.

This is just one intervention, and I’d love to hear what other organizations are doing.

Ultimately, when leaders design organizations that can absorb and rebound from pressure, transformation becomes not a threat, but a rhythm – a cycle of stretch, recovery, and renewal.

“Change adds stress. But change leads to growth, not trauma, when there’s adequate counterbalancing support. We need to rethink organizational resilience in the era of constant change.”

AI can scale intelligence, but only leadership can scale humanity

In recent research by Workday, most employees (83%) believe that AI will make uniquely human skills even more critical.

But will humans naturally become better at what we’re good at? No. In fact, the way AI is being adopted encourages less thinking, more doing; fewer questions, more answers.

AI can scale intelligence. But only leadership can scale human strengths.

If you are leading AI transformation, I appeal to you to counterbalance speed, cost cutting and productivity with three important capability-building strategies:

  • First, deepen vision and values. As AI brings its ideology of efficiency, leaders must deepen the articulation of the values that give the organization’s work meaning.
  • Second, build human agency. Leaders must deepen each person’s human agency: capabilities to set vision, take action, and find the right human–AI balance for their work. Agency is a foundational skill which allows continual adaptation from tool to tool, technology to technology.
  • Third, design for resilience. Leaders must treat emotional and psychological capacity as seriously as, if not more seriously than, physical safety. We must build systems that help people bounce back stronger from change, not just endure it.

If AI scales intelligence, leadership’s task is to scale the qualities that make intelligence human – judgment, empathy, purpose, renewal.

That is how we’ll know the transformation is working: not when AI runs faster, but when people grow deeper.

Do get in touch to share your experiences and opinions! karen at inherentjourney dot com.

About the author:

Karen Tay is the Founder and CEO of Inherent, a Silicon Valley + Singaporean company which designs and delivers research-backed leadership programs for organisations navigating AI-driven change. She has worked with clients such as Google, Linklaters, The Economist Group, and travels globally for assignments. Inherent’s work integrates behavioural science, strategic leadership, and deep human development.

A Princeton University graduate, former Singapore Government leader (AO), and fractional Chief People Officer to growth stage start-ups, Karen bridges the worlds of technology, public service and innovation. She is also faculty at Singularity University and an ICF-certified coach. Her work has been featured internationally for creating learning spaces that are rigorous, human, and deeply relevant.

Teaching on “The Art of Bridging: Discerning Hype and Driving Sustainable AI Adoption” – to 100 executives in Silicon Valley