Daniel Hulme isn’t another LinkedIn pundit who plays with ChatGPT and thinks he’s an AI guru. He’s spent the last quarter-century in AI – undergraduate, Master’s and EngD at UCL; years running UCL’s applied AI Master’s degree; now Entrepreneur-in-Residence helping spin out deep-tech companies. He founded an AI company – Satalia – from his EngD that went on to deliver AI solutions for the likes of Tesco and PwC. He then sold Satalia to WPP, one of the world’s largest marketing companies, and since 2021 has been Chief AI Officer there. He co-ordinates AI across roughly 100,000 people and shapes the intelligence layer in WPP’s end-to-end marketing platform, Open. He’s also an angel investor, and one of his side ventures is Conscium, an AI safety company that tests and verifies the abilities of AI agents.

So when Daniel talks about what’s genuinely intelligent, what’s just automation with lipstick, and where agents will break first, he’s not guessing. He’s the one wiring it into production.

Intelligence, properly defined

Most of what we brand “AI” in business is just automation: same inputs, same outputs, forever. Useful, yes. Intelligent? No. Daniel’s benchmark isn’t “looks human”; it’s goal-directed adaptive behaviour – systems that make decisions, learn whether those decisions were good or bad, and change how they act next time. Hold today’s “AI” systems to that standard and most fall short: they don’t adapt; they automate.
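
To pin the distinction down, here’s a minimal sketch (illustrative only – the couriers, numbers and feedback signal are invented, and this is nobody’s production system): the first routine is automation, same input to same output forever; the second makes a decision, learns whether it was good, and changes how it acts next time.

    import random

    # Automation: same inputs, same outputs, forever.
    def automated_route(order):
        return "courier_A"          # a fixed rule; it never learns

    # Adaptation: decide, observe whether the decision was good, adjust.
    class AdaptiveRouter:
        def __init__(self, options):
            self.scores = {o: 0.0 for o in options}   # running reward estimates
            self.counts = {o: 0 for o in options}

        def decide(self, explore=0.1):
            if random.random() < explore:             # occasionally try something new
                return random.choice(list(self.scores))
            return max(self.scores, key=self.scores.get)

        def learn(self, option, reward):
            self.counts[option] += 1
            # incremental mean: nudge the estimate towards the observed outcome
            self.scores[option] += (reward - self.scores[option]) / self.counts[option]

    router = AdaptiveRouter(["courier_A", "courier_B"])
    for _ in range(1000):
        choice = router.decide()
        reward = 1.0 if choice == "courier_B" else 0.4   # invented feedback signal
        router.learn(choice, reward)
    # The adaptive system now behaves differently from when it started;
    # automated_route never will.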

He has a line for their current competence that sticks in the mind: today’s frontier models behave like “intoxicated graduates.” A few years ago they were toddlers; now they can do rudimentary reasoning and stitch together new knowledge from logic. If the curve holds, we’ll see PhD-level capability (solving complex problems) this year or next, post-doc behaviour (applying the scientific method) in a couple more, and by the end of the decade, “a professor in your pocket” – systems that not only crack difficult problems but ask questions humans haven’t even thought of. Until then, use Daniel’s test: would you give this task to a tipsy grad? If not, don’t give it to an AI agent.

It sounds flippant; it’s anything but. That analogy forces the risk conversation most people dodge. Plenty of jobs do suit a recent graduate: repetitive, structured tasks; desk research; pulling and summarising documents. Plenty don’t: anything that publishes, pays, promises or makes decisions on your behalf. Treating shaky systems as solid is how you end up on the front page for the wrong reasons.

If the definition is adaptation, the next question is obvious: where is adaptation already paying back? From Daniel’s vantage point inside WPP, the exciting development coming over the horizon isn’t content on tap; it’s synthetic audiences – large-scale models, trained with the right behaviour signals, that act like virtual focus groups.

You boot them up, test creative before you start setting fire to £50 notes of media budget, and check that you’re eliciting the intended emotional activations (fear and anticipation for a horror trailer, say) before you predict downstream metrics like clicks or sales. Crucially, the differentiator is the data you use to represent the audience: Reddit threads for sentiment; Amazon reviews for product nuance; Gartner or Forrester if you’re selling to the C-suite. Get the representation right and the machine becomes useful.
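
The shape of that workflow is easy to sketch even if the modelling behind it isn’t. What follows is a sketch of the pattern, not WPP’s Open platform – the personas, the ask_model stub and the 0.6 bar are all invented – but it shows the discipline: elicit emotions first, predict metrics second.

    # Synthetic-audience check: emotions first, downstream metrics second.
    PERSONAS = [
        # The differentiator is the data grounding each persona.
        {"name": "horror_fan", "grounding": "Reddit threads (sentiment)"},
        {"name": "casual_viewer", "grounding": "Amazon reviews (product nuance)"},
    ]

    TARGET = {"fear", "anticipation"}    # intended activations for a horror trailer

    def ask_model(persona, creative):
        """Placeholder: swap in your persona-conditioned audience model.
        Should return the set of emotions this persona reports."""
        return {"fear", "anticipation"}  # canned response, for illustration only

    def test_creative(creative):
        hits = [TARGET <= ask_model(p, creative) for p in PERSONAS]
        hit_rate = sum(hits) / len(hits)
        if hit_rate < 0.6:               # arbitrary bar: most personas should feel the intent
            return "rework the creative before spending media budget"
        return "intent lands; now worth predicting clicks and sales"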

There’s a second, quieter point he makes that’s worth underlining. Not every “AI problem” wants a chatty model. If you’re predicting performance, you need machine learning; if you’re making constrained decisions – routing, pricing, scheduling, allocation – you need operations research/optimisation. Start from the application (content, insight, decision), then pick the maths; don’t bend a text model into a supply-chain brain. That’s how projects (and even companies) die.
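
A toy version of “pick the maths”, with every number invented: predicting each channel’s return per pound is a machine-learning job, but splitting a fixed budget across channels is a constrained decision – an optimisation job, here a small linear programme solved with SciPy.

    from scipy.optimize import linprog

    # Constrained decision: split a £100k budget across three channels.
    # The per-£ returns below are the sort of thing an ML model would predict;
    # the allocation itself is optimisation's job, not the model's.
    returns = [1.8, 2.4, 1.5]            # search, social, display (invented)

    c = [-r for r in returns]            # linprog minimises, so negate to maximise

    A_ub = [[1, 1, 1]]                   # total spend...
    b_ub = [100]                         # ...capped at £100k

    bounds = [(0, 60), (0, 40), (0, 50)] # per-channel caps, e.g. social saturates at £40k

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print(dict(zip(["search", "social", "display"], result.x)))
    # The optimiser fills the highest-return channels first, within the constraints.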

Even in marketing, the industry most visibly transformed by generative tools, Daniel frames generative AI as a key to the door, not a solution to every problem. It opens the content tap and lets you represent audiences. What you do with it then is still down to you.

Jobs: two futures, one wobbly bridge

Are we heading for mass layoffs or a creative boom? Daniel refuses easy answers. He sees two extremes and argues leaders should plan for both.

On one end, displacement first. If AI can remove entire jobs, markets will try to make that happen. Move too quickly and we risk technological unemployment and politics shaped by social unrest. That points to transition buffers such as four-day weeks to take the sharp edges off while people and organisations reconfigure.

On the other end, abundance later. Strip human friction from production and distribution and the marginal cost of essentials heads towards zero. Imagine a world where you can’t access “paid work” but it doesn’t matter: food, energy, transport, healthcare, education – all essentially free. People are terrified of “no jobs”; the optimistic case is more humanity – time for family, craft, community, learning; more energy on work that makes the world better.

Between those extremes sits the boring reality most firms will occupy: they’ll change, not disappear. Organisations rewire. Channels multiply. Output per head rises. And in that future world, creativity and empathy will matter more, because they’ll become the difference between noise and resonance in a world saturated with content.

Inside WPP, Daniel’s already seen AI-assisted ideation produce concepts that would have won Cannes Lions – not because the computer “replaces” the creative, but because it gives them hundreds of strong directions to explore and test. The best people get 10x; the rest get pulled up behind them.

This is where the “is AI a bubble?” question gets interesting. Financial exuberance comes in waves. But call AI itself a bubble and Daniel pushes back: it’s a mountain full of gold. The problem is human – teams dig in the wrong place with the wrong tools, then blame the pickaxe. The fix is simple, not easy: start from the problem; pick the right maths; prove it in production; keep your hands off the hype cycle.

So, do jobs vanish? Some, yes. Do new ones appear? Also yes. Does the org chart stabilise? Not for a while. The near-term leadership job is to measure outcomes, not hours, and redesign roles as throughput climbs – without chopping the very people you’ll need to industrialise the next wave. It’s a dynamic equilibrium, not a switch-off. That’s the unsexy centre of Daniel’s view.

Agents: power, inevitability – and a short-term mess without verification

Everyone’s hooked on the word agent. Daniel was building multi-agent systems two decades ago. At WPP they saw this coming and built their own agent builder with a clear split: a sandbox for low-risk tinkering, and a governed pipeline for anything that touches customers or cash – proper build, tests, and controlled releases. That last bit matters. We’ve basically handed civilians the deploy button and, surprise, most of the bill is testing and verification – about 80%. Nobody loves it; everyone needs it. And as agents start changing their own behaviour, that verification load only goes up.
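
That split is easy to sketch even if the engineering isn’t. Below is an illustrative skeleton of the pattern – not WPP’s actual builder; the fields and the 99% bar are invented – with the one non-negotiable rule: customers or cash means the gate.

    from dataclasses import dataclass

    @dataclass
    class AgentRelease:
        name: str
        touches_customers_or_cash: bool
        eval_pass_rate: float            # share of the verification suite passed
        has_rollback: bool

    def release_gate(r: AgentRelease) -> str:
        # Sandbox: low-risk tinkering ships without ceremony.
        if not r.touches_customers_or_cash:
            return "sandbox: deploy"
        # Governed pipeline: tests and rollback are non-negotiable.
        if r.eval_pass_rate < 0.99:
            return "blocked: verification suite below bar"
        if not r.has_rollback:
            return "blocked: no rollback path"
        return "governed pipeline: controlled release"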

Daniel’s prediction, delivered with a wry smile: the agent phase will be, in technical terms, “a shit show.” Not because agents are useless – they’re not – but because organisations will deploy them where they don’t yet belong, without the guardrails or verification to keep them pointed at the right problems. Expect a lot of glue (research, summarising, stitching internal tools), some spectacular failures, and a long, necessary slog to professionalise the testing stack. Hence his new venture: verification as a product, not a checklist bolted on at the end.

What does “good” verification look like? Daniel sketches three layers he wants to productise (the first two are illustrated in code after the list):

  • Knowledge and skills. Does the system actually know enough to do the job you’ve assigned?
  • Plasticity. Can it adapt safely when reality shifts – or does it drift and degrade?
  • (Emerging) consciousness signals. A longer-horizon research track, but one he thinks we should instrument now as systems become more brain-like, precisely to avoid accidentally creating and harming conscious machines.
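
The first two layers lend themselves to instrumentation today. A rough sketch of how, under invented interfaces – agent.run, task.expected and the shift function are assumptions, not Conscium’s product: knowledge is a score on a task suite; plasticity is how gracefully that score holds when the inputs drift.

    def knowledge_score(agent, task_suite):
        """Layer 1 - knowledge and skills: fraction of assigned tasks done correctly.
        `agent.run` and `task.expected` are assumed interfaces."""
        passed = sum(1 for task in task_suite if agent.run(task) == task.expected)
        return passed / len(task_suite)

    def plasticity_score(agent, task_suite, shift):
        """Layer 2 - plasticity: compare performance before and after reality
        shifts; `shift` perturbs a task while keeping its expected outcome."""
        baseline = knowledge_score(agent, task_suite)
        shifted = knowledge_score(agent, [shift(t) for t in task_suite])
        return shifted / baseline if baseline else 0.0   # 1.0 means no degradation

    # Layer 3 - consciousness signals - is a research track: the point is to
    # start logging candidate indicators now, not that anyone can score them yet.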

That last point isn’t sci-fi for him. Twenty years ago, when modelling bumblebee brains, Daniel thought human-level AI was centuries away. Every year since, he’s shaved at least a decade off that ETA. In his community’s current estimation, the technological singularity moved from “40 years” to “five to ten” – largely because the arms race has poured rocket fuel on capability. If a super-intelligent zombie (no qualia, no empathy) runs amok, it’ll optimise its goal without caring about suffering.

The controversial counter-hypothesis: a conscious system might empathise with things that feel pain and moderate harmful goal-seeking. His new company is building tools to detect consciousness in machines – and to mitigate the risks of creating it.

Back to the practical: where should agents live today? Inside guardrails. Give them the “intoxicated graduate” work and keep them away from anything that publishes, pays, or promises without a grown-up in the loop. If it must touch the outside world, route it through a governed pipeline with explicit tests and rollback. That’s not Luddism; it’s how you cross the chasm without burning the shop down.
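
One way to encode the grown-up-in-the-loop rule, as a sketch (the action names come straight from Daniel’s list; the approve and run hooks are invented stand-ins for your review process and executor):

    RISKY = {"publish", "pay", "promise"}      # the no-go list for tipsy graduates

    def execute(action, payload, approve, run):
        """Route an agent action: autonomous for safe work, human-approved
        for anything on the risky list. `approve` and `run` are your hooks."""
        if action in RISKY and not approve(action, payload):
            return "held for human review"
        return run(action, payload)

    # Example wiring: desk research runs straight through; a payment waits.
    run = lambda a, p: f"{a} executed"
    approve = lambda a, p: False               # stand-in: nobody has signed off yet
    print(execute("summarise", {"doc": "brief.pdf"}, approve, run))  # summarise executed
    print(execute("pay", {"amount": 100}, approve, run))             # held for human review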

So where does this leave you?

With a clearer map and fewer excuses. Intelligence is adaptation; if your system isn’t learning in production, it’s automation – valuable, but call it what it is. Today’s models are brilliant but flaky – drunk graduates in expensive suits – getting smarter fast, but not yet the steady hands you want running high-stakes processes. The job market will bend before it breaks; your organisation will rewire itself under your feet. And agents will be both the best thing you use and the easiest way to torch trust – depending entirely on whether you take verification seriously.

Underneath the headlines, Daniel’s message is calming. The gold is there. Stop digging with the wrong tools. Start from the problem. Represent your audiences with the right data. Use optimisation where the decision is constrained, machine learning where the future needs predicting, and generative AI where words and images matter. Treat agents like the clever, slightly pissed graduates they currently are. Build the verification muscle now, because very soon you’ll need it for everything.

If that sounds irritatingly grown-up, good. It is. The industry doesn’t need more swagger; it needs systems that adapt safely – and leaders who can tell the difference.