Last week, I heard a quote on OpenAI’s latest podcast, where Chief Economist Ronnie Chatterji talked about the need for human skills: networking, high EQ, critical thinking, decision making, etc.
‘If humans become complements to [AI] intelligence and leverage it with agency, that’s going to be the unlock’.
It’s proper tech-bro speak, and I think it woefully undervalues what humans bring to the table. Once interpreted, though, I agree with the general sentiment:
Humans need to use AI with judgement and discernment. The massive intelligence power of AI is the most valuable when it is paired with a human with the right soft/social skills.
I am a big advocate for everyone learning AI skills, both in terms of WHAT AI is and HOW to use it. I believe that someone’s use of AI will be tested as standard in most knowledge work job interviews very soon. Many people are calling this new skill set ‘AI fluency’, or put simply, someone’s ability to understand and use AI effectively.1
But, as important as AI literacy and technical skills are, they aren’t used in isolation. For us to get the full benefits of AI, we still need to use our human skills as well.
And the ability to use those human skills in the right context, in collaboration with generative AI, is what I am calling ‘Human Fluency’.
What are human skills?
In my last Substack, I provided a framework for individuals to help navigate their own careers in the Age of AI. One quadrant was named ‘Human skills’, which are skills that can be mimicked by AI, but not necessarily easily replaced.
I’ve tried to go deeper on what these skills are, which I find helpful to think of in three different but related categories:
‘Soft skills’ - These are skills that are not technical or related to a specialist function, but are considered to be essential within modern knowledge work. Some examples of soft skills could include communication, collaboration, empathy, emotional intelligence, the ability to manage people, conflict resolution, etc.
‘Big picture’ skills - I define these as the skills associated with problem solving in different contexts, accounting for elements like time, expertise and unexpected variables. They are often described as ‘types of thinking’: creative [thinking], innovative [thinking], systems [thinking], strategic [thinking], analytical [thinking], etc.
‘Social power’ skills - This category is defined loosely as the emergent skills required in the age of the internet and social media. These skills could include taste, discernment, judgement, social influence, the ability to attract and retain (ideally positive) attention, authenticity, and facilitating trust (among teams, organisations and people). Of the three categories, these skills are the hardest to define and are rapidly evolving, but they are becoming more critical by the day.
Why can't AI replace uniquely human skills (yet)?
I believe in humans. And I believe there are some things that humans are better at than AI, both because of the physical advantage of being in the non-digital world, and because I believe there are some things about being human that cannot be programmed. Some examples include:
The physical connection between people - the kind you get when sitting across from someone in real life, and being really listened to. It goes well beyond understanding what someone is saying. It’s about being fully experienced and seen, in a way that transcends verbal communication.
Human ‘instinct’, which is of course fallible, but which also takes into account thousands of intrinsic and external data points to give you that gut feeling. When you learn to really listen to and interpret it, it’s rarely wrong.2
Understanding and demonstrating taste. Not what the AI recommends or the advertising tells us, but an actual person discerning for themselves what’s cool and what’s not, or doing this for others.
Sitting with and solving big, complex, multi-variable problems, for hours, days or sometimes weeks. AI struggles with context and memory, which limits its ability, and right now, AI still needs us to make sense of the physical world for it (Sam Altman and Jony Ive are looking to change that however).
Yes, AI can mimic these skills today. Some very convincingly. AI is getting incredibly good at avatars and deep fakes, and it’s exceptional at telling people what its users want to hear.
AI’s mimicry has limitations. I know some tech bros may argue otherwise, but the early signals are there. We are seeing examples of AI-related psychosis, where someone’s AI use causes them to break from reality (sometimes with horrific consequences). Early studies show that using general-purpose generative AI (like ChatGPT) for therapy can do more harm than good. We are also seeing growing backlash to AI slop flooding our social feeds.
I believe that humans want to learn from, and be entertained by, other humans. I believe that authentic human connection and content will be increasingly valued, both socially and in the workplace.
So even if we can’t predict exactly how an AI-enabled future will unfold, we can make the choice that some skills should remain and are best done by people. And those skills should be rewarded and valued, as they will be essential to creating AI-augmented work.
Where Human and AI Fluency meet: Collaborative Intuition
Framing humans as ‘AI complements’ undermines the value of what us bipeds can bring to the table. Rather, we humans will need to master both ‘fluencies’ if we are going to get the most benefit from AI and thrive in an AI-augmented future.
Human fluency provides the capability, context, understanding and real-world application that AI lacks, and/or amplifies what AI can achieve. Human fluency also has the potential to give meaning and purpose back to our working lives by indexing on soft, big picture and social power skills in our day to day.
AI fluency helps us humans extend what we can imagine by knowing how and when to use AI, and lets us access knowledge and insights faster and at a scale yet unrealised.
Those of us who foster both human and AI fluency will develop a sort of collaborative intuition: an intrinsic understanding of when AI will be better suited to a task than humans, and vice versa.
Collaborative intuition will help us know when to lean into AI vs. when to trust our human instincts. It will help us design feedback loops between humans and AI that support learning and growth. It helps us make the most of AI’s capabilities, value the unique contributions of both, and create something new beyond the sum of what AI and humans bring to the table.
Remembering why Human Fluency matters
We don’t need to choose between human and artificial intelligence. We should all be actively building a future that makes the most of the incredible and unique capabilities of each.
I personally want to live in a world where the wildest promises of AI can be realised AND where the inexplicable, unique aspects of being human - the kind that comes from being alive in a body, in relationships, and in the messy beautiful world - are valued, by both society and the workplace.
It is in the space between uniquely human skills and AI that we will discover possibilities neither could achieve alone. And only when we have honed both our AI literacy and technical skills and our human skills will we be able to collaborate effectively in ways that make us not just more productive, but more profoundly human.

1. Anthropic dropped an excellent online course called AI Fluency, where you will: ‘Learn to collaborate with AI systems effectively, efficiently, ethically, and safely’.
2. Note this is a skill that capitalism has actively discouraged and socialised out of us. Logic reigns supreme, and following your ‘heart’ or ‘gut’ has been disregarded as too ‘woo’ or irrational.