Hiring for AI skills, but don't know where to start?
Your cheat sheet for hiring AI-focused people in non-technical roles.
Everyone is using AI. Or at least, that's what ‘everyone’ seems to be saying on LinkedIn.
With a tidal wave of AI-related roles crashing onto the job market, having ‘AI skills’ is a clear competitive advantage over other candidates. But beyond technical skills like coding, software engineering, LLM Ops, etc., it’s unclear exactly what anyone means by ‘AI skills’. And yet, despite the lack of clarity, companies are looking for them everywhere and expecting to find them.
In reality, AI tools haven’t been around long enough for many people to become experts. Generative AI became publicly available at scale in late 2022, when OpenAI released ChatGPT to the world. That's less than three years. And while sign-up and usage rates for Claude, Gemini and ChatGPT have outpaced any other technology, 30-ish months is a relatively short time to become an expert in anything.
I've talked before about the divergence between people who 'use' AI, and those of us who cannot imagine doing our jobs without it. The capability gap is real. But if you're a manager or leader, how can you spot the difference between a casual AI user and someone who really knows what they are talking about?
Enter my question guide for interviewing for non-technical AI roles.
I have deliberately not outlined a concrete list of ‘AI skills’ to be ticked off in interviews. I believe that ‘AI skills’ for non-technical knowledge workers are emergent, and it’s simply too early to define them concretely.
In this early age of AI, we need to explore someone’s direct experience with AI and agents. Asking the right questions will go a long way to help you discern whether they are someone who has merely tried ChatGPT, or someone emerging as an expert who can apply AI across different business contexts.
For interviewers, yes, you will need to have a good understanding of AI to assess answers given. If you’re new to this, there is a lot to learn, but there is no better time to start than now.
And for everyone else reading: I really believe these types of questions will become commonplace in knowledge-work interviews (i.e. jobs where most of the time is spent at a laptop or in an office). My guess is that in most interviews over the next 6–12 months, you’ll be asked about your own AI use at a minimum. My advice is the same as above: start learning, now.
Questions about AI Product Experience
Question: "Tell me about a time you developed and deployed an AI-enabled tool / product / platform in your organisation. Walk me through the whole process."
Why this question? The nuances of product design and deployment can really only be learned through on-the-job experience. And AI brings a whole different set of challenges to traditional tech builds.
Good answers will include:
Clear articulation of the business problem to solve.
Walk-through of the development stages from concept to user-focused research and design, through to MVP to pilot, and eventually to scale.
Key enablers like data (including both training data and pipelines), governance, and scaling & adoption plans show that they have a holistic understanding of how to launch a technology that works.
Other key things to look for: Design thinking, Product Ownership, User Adoption, Data Strategy (including quality, cataloguing, data privacy and ELT requirements) and Feedback Loops and Model Fine Tuning.
Follow-up Question: "What surprised you most during that process?"
Why this question? Curiosity, systems-thinking and humility are all uniquely human skills that are essential to creating AI products that people actually use.
Good answers will include: Honesty and self-reflection about what did and did not work in the design, development and launch.
Question: What are some of the biggest risks and challenges you see with using AI in the workplace?
Why this question? It shows a comprehensive understanding of AI beyond the technology or the hype-headlines.
Good answers could include:
[Minimum] - Standard considerations such as hallucinations, data bias, ethics, data privacy, cyber-security / data leaks, etc.
[Better] - A more comprehensive understanding of some of the harder people challenges, such as trust issues, meta-cognitive laziness, cognitive offloading, and validation risks.
[Bonus points] – Human-in-the-loop requirements, Evaluation standards, and ideas to overcome the very real human-specific challenges.
Question: What are some principles you follow when developing AI products?
Why this question? This tests someone's understanding of how and why generative AI is different from other tech builds.
Good answers could include:
Trust in the tool can make or break it – the product shows its work, escalates when needed, and can be 'taken back' without friction if required.
Context as king – The ability to remember its users and their context in each interaction is key, as well as who they are within a workflow and organisation. No one wants to 'start from scratch' with each interaction.
Problem recovery – Thinking about how and when to test, take back, and ship with use of a sandbox and keeping humans in the loop.
Actual success metrics – Not just 'use' or number of licences, but the ability to track and measure how and when the product is being used, errors and escalations, long-term benefits tracking.

Questions to demonstrate a wider understanding of AI
Question [Self]: What is your favourite AI model or tool and your personal use case for AI?
Why this question? You get a glimpse into someone's willingness to experiment with AI and if they are self-aware / reflective of how this technology impacts their daily lives.
Good answers should include: A specific named model version (e.g. o4-mini-high, Claude 3.7 Sonnet) or AI-enabled tool (e.g. HeyGen, Granola or one of the 80k+ options), coupled with a clear articulation of the value it brings to their lives. Brownie points if they: 1) combine different models / tools within a unique workflow, 2) have vibe-coded their own AI-enabled tool, and / or 3) have created their own agent on a platform like Manus, CrewAI, LangChain, etc.
Question [Companies]: Name an example of a company that is using AI and announced it to the world, highlighting why you think it worked or didn't?
Why this question? A person who is genuinely curious about AI will be watching and learning about what organisations are doing with AI, and watching the effects of announcing it publicly.
Good answers should include: A specific company, what they did / didn't do, and why it was a success or fail. Common examples could include:
Air Canada – A hallucinating chatbot gave a customer the wrong information about a fare claim; the airline argued it wasn't responsible for the model's answers, but a Canadian tribunal still held it accountable for the outputs of its AI-driven chatbot.
Klarna – Froze hiring and replaced the work of 700 customer service agents with AI, only to hire real humans back when it proved to be such a fail.
AI-first public announcements (Shopify – AI use expectations and hiring freezes, Fiverr – AI will take all your jobs, Duolingo – TikTok campaign to mass-delete the app).

Question [Agents]: Talk to me about autonomous agents. What are some of the benefits and concerns?
Why this question? Everyone in the tech world is talking about 2025 being the year of the Agents. This will assess if they are up to date with what's happening with frontier firms, model functionality, and are genuinely interested in the future of how AI will impact work.
Good answers will include: An understanding of agents (vs. LLMs vs. automation) and some of the current functionality / platforms available to the public. Even better if they can highlight some of the challenges from both a human and a technical perspective (e.g. MCP, A2A). Agents are in their very early days, so be impressed by solid knowledge and intentional reflection on how agents could transform work.
I am very sure this list will evolve as time goes on and AI capability develops over the months and years ahead. Let me know if these are useful and I would love to hear your experience with these questions when you use them!
Human inspiration for this post: As I said on LinkedIn, a conversation with my friend and former colleague who I know will want to stay unnamed :-).