Issue No. 7 | Two conversations that stuck with me
Hey — Happy Tuesday!
Last week, I did my taxes with AI for the second year in a row. A couple of hundred pounds saved on accountant fees never harmed anyone. Small thing, but a real one. That's what useful AI looks like in practice: a concrete result in your actual life. Though I did spare a thought for the accountant I used to hire.
But here's what I keep thinking about. For every story like that, there's a much bigger and messier question underneath: what does all of this actually mean for how we work, and for the people doing the work?
I've had two conversations recently that pointed in the same direction — and left the same question unanswered. Here they are.
The first: real skills will survive
A few weeks ago, I was at an EdTech event and got talking to a founder with three successful exits — sharp, experienced, not someone who dismisses technology lightly. His view was unambiguous. The talk about AI displacing professionals, roles disappearing, careers being upended? Hype. People who had real value before will have real value after. Critical thinking, problem solving, and domain expertise — those don't get automated away.
His challenge was specific. Not whether AI could automate a process or handle a transaction — there are plenty of examples of that working at scale. His question was about human-led operations. Complex businesses that run on judgment, relationships, and context that shifts. Show me AI reliably replacing that, he said. Not a demo. Not a pilot. In production, at scale. He hadn't seen it. And honestly, neither have I.
He was specific about who he meant, though. He wasn't talking about everyone. He was talking about people with deep technical foundations: physicists, mathematicians, and engineers. People whose skills are built on first principles, not on processes that can be replicated. Those people, in his view, are irreplaceable. The question he left hanging, and the one I haven't stopped thinking about, is what that means for everyone else.
The second: technical depth plus business judgment
At a different event, I spoke to a researcher in the AI space — and her view built directly on that.
First-principles thinking matters, she said. But the people who will be most valuable are the ones who can take that technical fluency and apply it to the business side. Run these systems, shape them, deploy them strategically. Understand both what AI can do and what a business actually needs.
If you can do that, you'll be more valuable than ever. If you can't — if you're competent and experienced but sitting outside that technical-business intersection — you're at risk. Not from AI directly, but from the smaller number of people who can operate at that level, and from leaders who are actively looking for them.
Two people. One consistent thread. And a question neither of them fully answered: what does this mean for the majority of professionals sitting somewhere in the middle?
Then this landed in my feed
And today, Anthropic made headlines with a job posting that stopped a lot of people mid-scroll. They are hiring an AI Research Engineer in London, with a salary band ranging from £260k up to £630k a year — and with stock options potentially pushing total compensation past £1m.
To put that in context: the median UK salary sits around £35-37k. In London, it can stretch to £47-50k. So even the bottom of that range is almost incomprehensible to most professionals. London has never been known for US-level tech salaries, but this signals just how fierce the demand for top talent in this field has become. And the gap between that and the average knowledge worker trying to figure out where they stand has never been wider.
Where I land, for now
What struck me about both conversations is that they're pointing in the same direction. Technical depth matters. Business judgment matters. The people who have both will be fine. The Anthropic salary band is just the most visible proof of that.
The harder question is what this means for the majority of professionals who are neither AI researchers nor first-principles engineers. Competent, experienced, valuable — but sitting outside that intersection. That's where I don't think anyone has a clean answer yet.
What I'm more interested in right now is what you're seeing from where you sit.
Are you feeling the pressure, or does it still feel abstract? Is your company talking about AI in ways that feel real, or does it feel like noise? Do you know where you stand?
If any of that is live for you, I'd like to hear about it. I'm setting aside time over the next few weeks for thirty-minute conversations — coffee in London if you're nearby, a call if you're not. No agenda. Just a genuine conversation about what's actually on your mind when it comes to AI.
If you're up for it, reply to this email and I'll send you a link to book a time.
— Gaziza