If you want your people to experiment with AI, pay them to cock up spectacularly. Steve Salvin does. At Aiimi, his data-and-AI company, they run “Nobber of the Year” – an annual award for the biggest, most painful, most instructive mistake. It’s self-nomination only, the whole company votes, last year’s winner hands over the trophy, and the new winner gets a proper prize: a £2,500 city-break for them and their partner. It sounds daft until you realise what it unlocks. Psychological safety. If you want genuine adoption of new tools, especially something as unsettling as AI, you have to make it safe to try, safe to fail, and safe to share what you learned.

Something Steve said in our conversation really stuck with me: “We want to celebrate epic failures… learn from it and never do that again – times 200 people.”

Meet Steve Salvin

Steve has been working in tech since the 80s. He started at EDS (now HP), did time at PwC, moved into content management at OpenText, crossed into analytics at MicroStrategy, and then in 2007 founded Aiimi. The initial spark wasn’t “AI” in the sense people throw around on LinkedIn today. It was a simple, powerful irritation: inside most organisations the “content world” (emails, SharePoint, documents, videos, chat) and the “data world” (SAP, Workday, ServiceNow, data warehouses) live on different planets. They have separate owners, different governance, and precious little connective tissue. The information you need to understand what happened is there – but it’s scattered across systems, stuck in formats that don’t talk to each other, and hidden from the people doing the work.

Every company has those black boxes. But not every company is committed to finding them – and opening them – at scale.

Opening the black boxes

A pipe bursts. A job is raised. Somebody heads to site, fills out forms, talks to colleagues on Teams, pings a WhatsApp to a supervisor, fixes the immediate problem, and drives home. Weeks later someone tries to work out why that asset keeps failing, whether the customer is happy, and what we might do differently. The relevant evidence exists – in service tickets, emails, call recordings, maintenance logs, and personal notes – but it’s scattered. The human effort to reconstruct cause and effect is so heavy that nobody does it unless there’s a complaint or a crisis.

Where Aiimi earn their keep is in doing the forensic work machines are good at. They pull together those fractured trails, stitch the same entities across systems – the asset, the site, the customer, the engineer, the timestamp – and then let models do what models do best: spot patterns humans wouldn’t see and surface the practical suggestions that shrink time-to-resolution. One engineering client fed years of failures and maintenance jobs into Aiimi, then used the resulting knowledge base to recommend likely fixes as soon as a fresh incident arrived. It’s not a chatbot. It’s a quiet assistant that helps the person on the hook make the right move first time.
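For the technically curious, the shape of that recommendation step is surprisingly simple once the records have been stitched together. The sketch below is mine, not Aiimi's – a toy Python version with made-up field names – but it shows the idea of "suggest the fix that has historically resolved this failure mode fastest".

```python
from collections import defaultdict
from statistics import mean

# Toy incident records, already stitched together from several systems
# (tickets, maintenance logs, engineer notes). Field names are made up
# for illustration - this is not Aiimi's schema.
history = [
    {"asset": "pump-17", "failure_mode": "seal leak", "fix": "replace seal kit", "hours_to_resolve": 3},
    {"asset": "pump-17", "failure_mode": "seal leak", "fix": "tighten gland", "hours_to_resolve": 11},
    {"asset": "pump-23", "failure_mode": "seal leak", "fix": "replace seal kit", "hours_to_resolve": 4},
]

def suggest_fixes(failure_mode, records):
    """Rank known fixes for a failure mode by average time to resolution."""
    outcomes = defaultdict(list)
    for r in records:
        if r["failure_mode"] == failure_mode:
            outcomes[r["fix"]].append(r["hours_to_resolve"])
    # Fastest historical fix first - the 'best-known playbook' for this mode.
    return sorted(outcomes.items(), key=lambda kv: mean(kv[1]))

for fix, times in suggest_fixes("seal leak", history):
    print(f"{fix}: avg {mean(times):.1f}h across {len(times)} incidents")
```

The few lines of ranking logic aren't the hard part. The hard part is the stitching Steve describes – getting the asset, the site and the engineer to mean the same thing in every system so records like these exist at all.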

That last point matters. The headline isn’t the model. It’s the reduction in human faff. When the person in the field fixes something on the first visit because the system handed them the best-known playbook for that precise failure mode, you feel it in customer happiness and cost. You also stop relying on that one brilliant engineer who’s on holiday when the turbine throws a wobbly.

The comet of ChatGPT – and the mess it lights up

We talked about the last three years as if a comet hit the industry. ChatGPT landed, the internet lost its mind, and people who hadn’t built anything in a decade suddenly became “AI thought leaders.” Steve’s seen the whole arc up close. He’s studied AI since Manchester in the late 80s, so he has a clear distinction between what GenAI is and what it isn’t. It’s extraordinary at predicting the next token. It can synthesise language and code in ways that feel like magic. But the instant you wire it into enterprise data you collide with what he calls “enterprise chaos”: duplication, contradiction, missing labels, different versions of truth, and a culture that treats data quality as someone else’s problem.

This is why so many early projects have felt pointless. Meeting transcripts that nobody reads. Chatbots trained on policy PDFs that don’t actually contain the answers people are asking for. A smattering of Copilot licences that relieve guilt more than they improve workflow. The model isn’t the constraint. The inputs are. As Steve put it: “As soon as you connect language models to your data, it’s only as good as that information.” If your documents don’t answer the question, the bot hallucinates, trust collapses, and the guardians of “no” – legal, security, compliance – slam the brakes on anything with the letters A and I in it.

From parlour tricks to proper work

So what does useful look like? In Steve’s world the shift is from “prompt and parrot” to agentic processes. Instead of treating AI like a clever toy you chat to, you embed it behind the scenes doing heavy lifting: trawling millions of call recordings to cluster topics and identify the ones that actually drove complaints; mapping Slack conversations to incident outcomes so the next time that pattern appears the frontline gets a crisp recommendation; linking asset history to the combination of steps that historically resolved the issue fastest. None of this is sexy. All of it is commercially meaningful.
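If you want a feel for the first of those – trawling calls to find the themes that drive complaints – here's a deliberately tiny sketch. It's my illustration, not Steve's code: it assumes you already have call transcripts as text and scikit-learn installed, and a real system would use embeddings and millions of calls, but the shape is the same.

```python
# Minimal sketch: cluster call transcripts by topic so recurring themes
# stand out. Illustrative only - assumes transcripts exist as plain text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

transcripts = [
    "no water pressure since the repair on Monday",
    "billing amount looks wrong this month",
    "engineer visit missed, nobody turned up",
    "low pressure again after the pipe was fixed",
    "charged twice on my last bill",
]

# Turn each transcript into a vector, then group similar ones together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(transcripts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Print each cluster so a human can name the themes and check which
# ones actually correlate with complaints.
for cluster in sorted(set(labels)):
    print(f"Topic {cluster}:")
    for text, label in zip(transcripts, labels):
        if label == cluster:
            print("  -", text)
```

The point isn't the algorithm – it's that the grouping happens quietly in the background, at a scale no human would attempt, and only the useful summary reaches the person who has to act on it.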

There’s also a cultural aspect to doing real work with AI that we gloss over at our peril. People don’t change their habits because the boss sends a memo. They change because someone they respect shows them a better way to do their job. Aiimi’s internal programme is cheekily named “drinking our own champagne”. People are asked, in performance reviews and weeklies, to show where they used the tools, what they automated, what worked, and what didn’t. The expectation isn’t perfection; it’s curiosity.

Caring isn’t fluffy – it’s functional

Seventeen years ago, putting “caring” in your corporate values looked soft to most tech exec teams. Steve did it anyway. He stripped out status markers – no private offices, no executive parking spaces, no PAs – and made benefits universal. Cynics would call that cosmetic. It isn’t. The point is to remove the everyday signals that tell people some voices matter more than others. When you’re trying to get people to try new tools, share their mistakes, and have the uncomfortable conversations that actually move performance, you need a baseline of safety and fairness.

This shows up beautifully in “Nobber of the Year”. The whole thing is designed to take the heat out of failure and convert it into institutional learning. You own your mistake by nominating yourself. Everyone reads, everyone votes, and then everyone remembers the story – and the fix – because it’s told with a human in the middle of it. The moment someone new messes up, the reflex isn’t “hide it”; it’s “get your nomination in,” which is exactly what you want. The learning compiles. Behaviours change.

The CEO’s job: break the fiefdoms

Treat AI like an IT upgrade and it’ll die by steering group. This isn’t software, it’s an operating-model change. It’s the CEO’s job. Only you can smash the silos, set one definition of truth, pick one standard and make people use it. No “ten tools for the same job” bollocks – one stack, one owner, one metric. And hold leaders to account for adoption and outcomes, not pretty decks.

Curiosity is now a leadership competency. If your finance lead shrugs at AI, finance will too – and you’ll waste a year wondering why nothing stuck. I’ve told CEOs this straight: if your exec won’t lean in, you may need a new exec. Sounds harsh until one function goes first, proves the gain, and everyone else sprints to catch up. Momentum isn’t magic. It’s a choice.

Scaling pains and the art of stepping back

One of my favourite exchanges had nothing to do with AI. I asked Steve how many of the early leaders are still on his exec team. “Two,” he said, grinning. “Me and my wife,” who is now Chief Finance & People Officer. Everyone else changed as the business scaled. That’s normal, but bloody hard. Founders are loyal. Early leaders feel like family. But the skill set that gets you to thirty people is rarely the one that takes you to three hundred.

Steve credits a few years in a Vistage group for forcing his hand. He went in proud of still billing, still doing workshops, still being “in it.” He came out realising the job was to make himself operationally redundant so he had the headspace to think and communicate strategy. It’s the oldest line in the book and we all relearn it the hard way: if you disappear for a month and the big rocks don’t move, you don’t have a leadership team – you have a dependency network with you at the centre.

Why AI matters now

We’re over the AI pantomime. The real enterprise wins – the ones I’m seeing and the ones Steve’s working on – are gloriously unsexy. Take a clunky, high-friction job and make it smooth. Collect what your company already knows, connect it, and put it where it actually helps. Turn lone-wolf heroics into a repeatable system. And do it in a culture that chooses truth: say the hard thing, try the new thing, share the f*ck-ups, bank the learning.

That’s why this chat was fun. Less “AI will buy your groceries,” more “AI will chew through two million calls so your engineer fixes it first time.” Less “innovation lab,” more “one workflow, better this quarter.” And yes – a £2.5k prize for the ugliest mistake with the best lesson. That’s how you turn nerves into momentum.

Time to get started

If you’re serious about AI, start where Steve has: open your black boxes. Decide where knowledge hides in your business, pull it into the light, and give your people power tools that actually help. Make curiosity visible. Reward honesty and openness. Treat the model as the easy part and your data as the work. And if you want help turning that into a simple, sequenced plan your team can deliver, you know where I am. Bring your black boxes; we’ll open them together.