By 2026, Every Candidate Claims AI Skills. Here's How to Find the Ones Who Mean It.

We've spoken to hundreds of ecommerce hiring managers in the past year. Almost all of them have added some version of 'AI experience required' to their job briefs. Almost none of them have changed their interview process to actually assess it.

The result? AI fluency has become one of the most over-claimed and under-tested attributes in ecommerce hiring right now. This article is a practical guide to fixing that - so you stop hiring based on buzzwords and start hiring based on evidence.

Why the Standard Interview Process Fails Here

Traditional interviews are built around past behaviour and hypothetical scenarios. Both are easily gamed when it comes to AI.

A candidate who's watched a few YouTube videos and used ChatGPT twice can talk confidently about using AI in a behavioural interview. They've absorbed the language. They know what the right answer sounds like. And without probing, you'll hire them thinking you're getting someone who's genuinely transformed their practice.

"The language of AI fluency is easy to learn. The practice of it is much harder to fake when you ask the right questions."

The Interview Framework: Four Layers of Probing

Layer 1: The Specificity Test

Ask: 'Walk me through a specific example of how you've used AI in your current role - not a general description, but a specific project or task.'

Genuine users can do this immediately and in detail. They'll name the tool, the context, the prompt approach, the output and what they did with it. Candidates who've overstated their fluency will generalise, pivot to theory, or describe something they read about rather than did.

Layer 2: The Failure Test

Ask: 'Tell me about a time AI gave you a wrong or misleading output. How did you catch it, and what did you do?'

This is one of the most revealing questions you can ask. Experienced AI users have all hit the wall - hallucinations, confident wrong answers, outputs that sound right but don't hold up. If someone can't describe this experience, they either haven't used AI seriously enough to hit that wall, or they're not self-aware enough to have noticed.

Layer 3: The Built Something Test

Ask: 'Can you show me something you've built or created using AI - a prompt library, a workflow, an analysis, anything?'

The best candidates will have something to show. A prompt template. A Notion doc of their workflow. A piece of analysis they ran. A process they automated. If someone has been genuinely using AI, they leave artefacts - and they're usually proud of them.

Layer 4: The Limitation Test

Ask: 'In your role specifically, what should AI never replace human judgment on - and why?'

Sophistication with AI shows up in knowing its limits as much as its capabilities. The candidates you want can answer this confidently and specifically. They understand that AI is a force multiplier, not a decision maker. And they know exactly where in their function the human judgment must stay.

Red Flags to Watch For

  • Generic tool-listing: 'I use ChatGPT, Midjourney and Perplexity' with no context on how they actually use them

  • Theory without practice: describing AI capabilities rather than personal application

  • No failures: an inability to describe a time AI let them down suggests shallow use

  • Tool-dependency without judgment: enthusiasm for AI without awareness of its limitations

  • Recency without depth: someone who started using AI six months ago after seeing it trend

Green Flags That Signal the Real Deal

  • They talk about iteration: 'I tried this prompt, it didn't work, so I refined it like this...'

  • They've changed a workflow, not just added a tool - AI has replaced something manual

  • They understand model differences: they know why they'd use one model vs another for a specific task

  • They can connect AI use to commercial outcomes: not just 'I saved time' but 'I generated X more in revenue / reduced cost by Y'

  • They're curious about what they don't know yet - they follow AI developments and test new tools proactively

A Note on Role-Specific Assessment

What genuine AI fluency looks like differs by function. A performance marketer's AI fluency shows up in creative testing and audience segmentation. A CRM manager's shows up in personalisation and automation. A trading manager's shows up in forecasting and merchandising.

The questions above work across functions, but you'll get better signal if you tailor the specificity probe to the role. Ask about the function's specific AI use cases, and listen for whether the candidate can speak to them with genuine depth.

"The ecommerce community is small. The AI-fluent candidates you're looking for exist - but you need a recruitment partner who knows who they are, not just how to keyword-match a resume."

Revere specialises in finding ecommerce talent who are genuinely AI-fluent - not just AI-conversant. If you're hiring and want to know what great looks like right now, let's talk. revererecruitment.com.au
