How to win over AI-sceptic, old-school developers

I've been in enough engineering standups to recognise the look.

You mention AI tooling. There's a pause. Then: "I prefer to actually understand what I'm writing."

And look — fair point. Genuinely. But at this stage, that's not a technical position. That's a comfort zone with a technical paint job on it.

Here's the thing I've learnt after years of working with engineering teams: the resistance isn't really about AI. It's about identity. A senior developer who's spent 15 years being the person who knows — the one the team turns to — isn't threatened by a tool. They're threatened by the idea that the tool makes their hard-won expertise look less special. That's the actual problem. Miss it, and you'll design your entire adoption approach around the wrong thing.

And the cost of getting it wrong isn't abstract. It shows up in velocity. In your hiring budget. In the roadmap conversation where you're explaining why delivery is slower than it was 18 months ago.


Why the resistance is rational (from where they're standing)

Before you design a solution, understand the problem accurately.

A developer who's been writing production Python for 15 years isn't being obstinate. They've watched a dozen "game-changing" tools cycle through — most of them half-baked, poorly maintained, eventually abandoned. Their scepticism is earned.

Add to that: they've seen vibe-coded PRs. They know what it looks like when someone merges something they don't understand. That concern isn't imaginary — it's just being applied as a blanket policy when it should be a quality filter.

The framing that works: "We're not replacing your judgement. We're giving your judgement more to work with."

The framing that doesn't: "Efficiency. Future-proofing. Innovation." Corporate noise. They'll clock it in about four seconds.


What actually moves people

1. Let the tool do the selling, not you

Put Cursor in front of a sceptic and ask them to refactor something that's been sitting in the backlog for two sprints. Or navigate a part of the repo they don't know well. The first time an AI tool correctly identifies an architectural decision made four years ago by someone who left — and explains why it was done that way — something shifts. You can't manufacture that moment. You can create the conditions for it.

Internal champions work better than external trainers, every time. Find the two or three developers already quietly experimenting. Give them time, visibility, permission to run informal sessions. Peer credibility travels further than management mandate.

2. Target frustration, not workflow

Every senior developer has a list of things they hate doing. Boilerplate. Documentation that never gets written. Test coverage they know is missing. Migration scripts. That's your entry point — not "transform your workflow" but "here's the thing that wastes your Tuesday afternoons."
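To make "boilerplate" concrete: it's the mechanical plumbing nobody enjoys writing by hand. A purely illustrative Python sketch — the class and its fields are invented for this example, not taken from any real codebase — of the serialisation code an AI tool will happily draft in seconds while the developer keeps their attention on the logic that matters:

```python
from dataclasses import dataclass, asdict, field
from typing import Any


@dataclass
class DeploymentConfig:
    # Invented example fields -- stand-ins for the repetitive
    # plumbing that eats a Tuesday afternoon.
    service: str
    replicas: int = 2
    env: dict[str, str] = field(default_factory=dict)

    def to_dict(self) -> dict[str, Any]:
        """Flat dict for logging/JSON -- classic hand-written boilerplate."""
        return asdict(self)

    @classmethod
    def from_dict(cls, raw: dict[str, Any]) -> "DeploymentConfig":
        """Inverse of to_dict, applying defaults for missing keys."""
        return cls(
            service=raw["service"],
            replicas=raw.get("replicas", 2),
            env=dict(raw.get("env", {})),
        )


cfg = DeploymentConfig.from_dict({"service": "auth", "env": {"REGION": "eu-west-1"}})
print(cfg.to_dict())
```

Nothing here is hard. That's the point — it's the easy-but-tedious work that makes the pitch land, because the sceptic still reviews and owns every line.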

Cursor for code generation and navigation. Claude Code for context-heavy terminal tasks and reasoning through complex changes. GitHub Copilot if someone genuinely won't leave their IDE. Pick the right tool for the person — not the one that got the most airtime at your last leadership offsite.

3. Kill the workshop format

The moment you schedule a mandatory AI training day, you've lost half the room before it starts. Senior developers are allergic to being taught things they feel they should already know, in a room where looking confused has social cost.

What works: access, 20–30 minutes of unstructured time with the tool, then leaving them alone. No usage dashboards in week one. No one measuring "AI adoption rate." Build in psychological safety to experiment and fail — otherwise you just get people performing adoption rather than actually doing it.

4. Address the craft objection head-on

"I need to understand what I'm shipping." Correct. Non-negotiable. And — that's exactly the point. Reviewing AI-generated code is the same discipline as reviewing a junior's PR. You don't merge blind. You read it, question it, own it. If anything, the accountability goes up. Your name is on the merge.

Most principled objectors run out of objections when you frame it that way. The ones who don't aren't really objecting to the tool. That's a different conversation.

5. Set an expectation, not a prescription

"Within the next six weeks, I'd like everyone to be regularly using at least one AI tool in their workflow. I'm not prescribing which one — that's your call."

Low pressure. Clear expectation. No surveillance. That combination works. What doesn't work is setting no expectation at all and hoping culture does the job for you. It won't.


Three patterns I've seen play out

The "I'll believe it when I see it" senior engineer

Backend developer, 60-person SaaS company, 12 years in, openly dismissive in team meetings. The CTO stopped arguing and instead asked him to lead a small internal spike: use Claude Code to document a legacy authentication module that had never been properly documented — one that three new joiners had already struggled to onboard onto.

He came back two days later. Documentation done. Three edge cases flagged that nobody had previously noticed. And a note that he'd started on a second module without being asked.

He didn't announce a change of heart. He just quietly started using it. That's usually how it goes.

The team that needed a forcing function

25-person engineering team, Series B, roughly half sceptical. The decision came from the top: AI tools are now standard engineering infrastructure — same as Jira, same as GitHub. Not optional. Cursor and GitHub Copilot licensed for everyone, set up on day one of the next sprint.

No training programme. A Slack channel. One internal session run by the developers already using Cursor — actual workflow, no slides, just a screen share. Six weeks later: 80% of the team using at least one tool regularly. The remaining 20% were in performance conversations for entirely unrelated reasons (and honestly, that was probably always going to be the case).

The developer who came round through a deadline

Senior frontend developer at a scale-up. Vocal about preferring "real engineering" over AI shortcuts. Then a product deadline moved up by three weeks. He used Cursor for the first time just to get through the sprint.

After the deadline passed, he didn't stop.

Deadlines remove ideology faster than any argument.


What 8 weeks actually looks like

Not a theory — a rough playbook.

Weeks 1–2: Access and anchoring

License Cursor and Claude Code for the full team. Don't call it a transformation programme — it's a new tool in the environment. Identify the early adopters already experimenting. Have a quiet conversation: carve out some of their time, and ask them to be available as informal resources. No formal role, no extra pressure.

Weeks 3–4: Peer-led introduction

One internal session, run by those early adopters. 45 minutes. Their actual workflow, on actual code — not a sanitised demo environment. No slides. Recorded and dropped in a shared Slack channel. Attendance strongly encouraged, not mandatory.

Week 5: Space to experiment

No team-wide directives. The Slack channel is active — small wins shared, sceptics asking pointed questions, people working things out between themselves. You're visible in the channel but not dominating it.

Week 6: First real signal

At a retro or engineering all-hands: "What's been useful, what's been rubbish, what do we want to try differently?" Not a progress review. A signal that this is being treated like any other engineering tool — iterate, improve, don't force it. It also surfaces who's found genuine value and is now internally credible.

Weeks 7–8: Normalisation

AI tooling starts showing up in how people talk about their work — not as a separate topic, just part of the standard toolkit. The holdouts are visible. More importantly, they're visible to each other. Peer normalisation does more in week 7 than any top-down mandate would have done in week 1.

After week 8 you have a clear picture: who's properly adopted, who's partially there, who hasn't moved. The last group is now a management question, not a tooling question.


The developers who genuinely adapt — and it's usually the experienced ones, once they're past the identity hurdle — become the most formidable people on the team. Deep technical instinct, dramatically faster execution. That's experience compounding, not being replaced.

The ones who don't will notice the gap themselves. Your job is to make crossing it easy before that moment gets uncomfortable for everyone.

It's a new world we live in. The window to do this gracefully is still open.

Won't be forever.