Issue No. 4 | Enterprise AI, Real Talk

Hey there, happy Tuesday!

In the first three issues, I focused on AI through an individual lens — how it might affect your tasks, your skills, and your daily work.

This week I want to add a different layer: the enterprise view. What are organisations actually doing with AI right now? How is that shaping team structures, roles, and what gets valued in performance reviews?

Understanding both — the individual and the organisational — gives you a clearer picture of where things are heading and what to do about it.

Last week I attended the Google AI Deep Dive event in London, organised by Eurasia Hub. Two panel sessions. Practitioners from AI labs, enterprise software, banking, consulting, and AI startups. Different industries — yet strikingly consistent themes emerged.

Here’s what I heard. Some of it will confirm what you’ve already sensed in your own work. Some of it might surprise you.

Reality Check

The public conversation about AI does not reflect what practitioners are actually dealing with inside organisations.

In that room last week, nobody was talking about AI as a creativity tool or a productivity hack.

They were talking about security frameworks. Regulatory compliance. Deployment failure. Cross-border regulation. Governance gaps. Workforce change.

That's what AI looks like when it enters an organisation with real constraints. Serious adoption is happening, but it is moving through friction: the very human problem of leaders announcing AI strategies to teams that don't know how to implement them.

Agentic AI vs Human-in-the-Loop

One of the biggest themes across both panels was AI agents — systems that can perform sequences of tasks autonomously, not just answer a single question.

The honest answer from practitioners: fully autonomous agents in production are still early. Very early.

Most of what companies call "agents" today are really structured workflows — deterministic, rule-based processes with an AI interface on top. True autonomous agents, with memory, verification, reliability, and security at enterprise scale, have not yet been widely deployed. The labs are working on it. But we are not there yet.

What's actually running in organisations right now are human-in-the-loop systems — AI handling the predictable, repetitive parts of a workflow while humans remain responsible for verification, judgment, and accountability.

That's not a consolation prize. That's where the opportunity actually sits.

Why Adoption Is Struggling — And What That Means For You

The most consistent theme across both sessions: AI adoption is stalling not because the models aren't capable, but because the human and organisational layer isn't ready.

In practice, that looks like: governance frameworks that don't exist yet. Leaders incentivised to announce strategies they can't deliver. Business and IT teams that don't speak the same language. Accountability structures nobody has fully defined.

The people caught in the middle — managers, analysts, project leads, consultants — are being asked to implement strategies they didn't design, with tools they weren't trained on, inside organisations that haven't figured out who is responsible when something goes wrong.

Managing that gap is skilled, senior work. Almost nobody has been formally prepared for it.

If you're mid-career, that gap is actually your opportunity — more on this in future issues.

What This Is Already Doing to Jobs

Even though full automation and the agentic promise are lagging, when I asked the panel how the current state is already changing jobs and team structures, the answers were candid.

  • One speaker described how a team that previously had around 50 people has now been reduced to roughly 15 after parts of the workflow were automated. QA roles have been restructured and automated. Not in the process of being automated. Done.

  • A machine learning engineer described how his routine coding and execution tasks are now handled by coding agents. 90% of his time goes to model architecture, feature design, orchestration, and performance judgment. The doing has been automated. The thinking has expanded.

  • Backend operational automation is also accelerating. In sectors like banking and consulting, reconciliation increasingly happens automatically between systems via APIs, while document-parsing systems and AI tools are starting to handle more internal support tasks.

  • Across several industries, performance metrics and bonuses are increasingly tied to demonstrating how you use AI to increase output, reduce time on tasks, and improve quality. If you can't answer that with specifics yet, it's worth starting to think about it now.

What ties all of this together is role compression. The pattern is consistent: repetitive, verifiable, rule-based work goes first. If a task can be described as a checklist, AI can do it. QA and reconciliation are prime examples.

This isn't future tense — it's already happening. This does not mean there won't be new jobs. But the question is shifting. Less "will AI take my job?" and more "how do I show that I'm using AI to get to results faster, without compromising the quality of my work?"

It is no longer enough to simply be good at your job.
You also need to show that you can amplify your output with AI.

As always — I'm figuring this out alongside you.

One thing I'm curious about: has your company started tying AI usage to performance reviews yet — even informally? Let me know by hitting reply. I'm collecting real examples for a future issue and I'd love to know what you're actually seeing from where you sit.

Gaziza

📬 Enjoyed this issue?

One of the best ways to support this newsletter is to forward it to a colleague or friend navigating the same questions. It helps more people understand what's really happening with AI — beyond the headlines.

Was this forwarded to you? Subscribe here 👇.
