The discourse around AI is full of extreme opinions.
At one end of the spectrum, AI is a technology that will change the course of humanity.
Sometimes for the better: it could drive rapid progress in drug discovery, improve mental health support, and broaden access to education and tutoring. Or for the worse: it's often predicted that AI could upend most societal norms and structures, create mass joblessness as it replaces knowledge work, and even cause civilisation-harming damage.
At the other end of the debate, AI is not new. It's entirely overhyped, amounting to nothing more than advanced statistical models or another kind of perceptron. This camp believes AI will not translate into material differences in the way we work or live our lives.
Personally, I fall somewhere in between. Generative AI is not nothing. I believe it has incredible potential to fundamentally shift what we value in work (i.e. knowledge and expertise) and HOW we work day to day. But I also believe that change takes time. While the frontier models are evolving at breakneck speed, I think it will be a few years before we see mainstream AI adoption in the workplace.
I've struggled to articulate my 'middle of the road' viewpoint; a nuanced perspective is a hard sell in a world of clickbait and extremes. That is, until I read 'AI as Normal Technology' by Arvind Narayanan and Sayash Kapoor.
Their essay argues that artificial intelligence is best understood as a normal technology, with gradual, decades-long impacts arriving through societal diffusion rather than as an imminent superintelligence, and it advocates resilience-based governance over nonproliferation strategies. As they say:
“To view AI as normal is not to understate its impact - even transformative, general-purpose technologies such as electricity and the internet are ‘normal’ in our conception.”
Narayanan and Kapoor talk a lot of sense about AI's potential impact on work. Specifically, they distinguish 'diffusion', the broader process of adoption and adaptation across society, from 'invention', the development of new AI models and improvements in AI capability, and 'innovation', the development of AI-driven products and applications for businesses and consumers.
This framing has helped me make sense of my lived, anecdotal experience with AI. We are seeing inventions weekly, with new frontier models from OpenAI, Google, Anthropic and others arriving every few days. There is a vast proliferation of innovations as well, with roughly 70,000 AI start-ups in existence (as of April 2025).
Yet when I look at AI adoption - both by individuals and by companies - it doesn't feel like much progress has been made at all. Organisations talk a lot about their AI strategies, but these tend to be top-down and vague about the genuine value they will create.
Individuals say they 'use AI', but dig below the surface and you realise the majority are using it for admin and basic tasks. Few are putting in the time and effort required to use these generative platforms to their full potential, or looking critically at their own work to redesign HOW they work with AI (spoiler: this takes A LOT of experimentation). Narayanan and Kapoor make the same observation, noting that even with 40% of U.S. adults using generative AI, this translates to just 0.5%-3.5% of work hours.
Diffusion matters. The reality is that the speed of AI adoption and adaptation is:
"inherently limited by the speed at which not only individuals, but also organisations and institutions, can adapt to technology… it takes time for people to change their workflows and habits to take advantage of the benefits of the new product and to learn to avoid the risks.”
Historical patterns show that general-purpose technologies take decades, not weeks or months, to fully diffuse. The essay gives electrification as an example: its productivity benefits took decades to arrive because they required redesigning factory layouts, workplace organisation, and training practices. I would argue the same goes for the internet, which took years to reshape sectors and workflows and to become the norm in individual lives and organisations.
The same could be said for AI. It could take years for it to become fully embedded in life and work.
However, that doesn't mean we can take our eye off the ball and leave AI diffusion to chance. The potential for significant disruption and negative impact is too great. We've seen this before: the Industrial Revolution completely upended the definition of work, things got worse before they got better, and the impacts were felt differently across sectors and over time. As Narayanan and Kapoor put it:
“All of this points away from the likelihood of the automation of a vast swath of the economy at a particular moment in time. It also implies that the impacts of powerful AI will be felt on different timescales in different sectors.”
In other words, AI is not going to take ALL of our jobs tomorrow. Or next year. And while some jobs may go, many more will shift. Human jobs will increasingly be about working with and controlling AI, and we will need to get better at defining the tasks to be done. Put simply, the nature of what we consider 'work' is set to change.
With normal technology, the future is hard to predict. We couldn't fathom the revolutionary use cases of electricity or the internet at the time of their invention, and the predictions we did make were often wrong (hello, e-commerce spelling the end of all physical shopping). I believe the same is true for AI.
We are likely to have more time than the market hype would lead us to believe.
BUT we must be ruthlessly intentional about how we drive AI adoption and adaptation, right now.
There is much we can do, organisationally and societally, to ensure the impacts of AI are beneficial rather than detrimental. AI as Normal Technology outlines many interventions at the societal level, but there is also plenty you can do today to prepare for an AI-augmented future. You don't have to wait.
For those who don't use AI often: experiment with models and AI-enabled applications daily. Spend time considering the way you work and how AI could complement what you do. Working with the technology helps you understand its current limitations and where it can beneficially augment your work. Reflect on how to get the most out of AI and yourself, without losing what makes you uniquely human.
For those who are well versed in the models, it's time to start designing agents and vibecoding. Manus is a great place to start, but there are dozens of other platforms vying for users who want to design agentic workflows. Vibecoding - the process of building software using natural language - is another great exercise: it helps you understand how these platforms work in relation to code and gives you a crash course in code-related prompting (and, at times, 'throw your laptop out the window' levels of debugging frustration).
In embracing the idea of AI as normal technology, we free ourselves from both inflated expectations and apocalyptic fears, allowing us to focus on the practical work of adapting our skills, organisations, and society to a technological transition that will unfold over years, not months.
So, in my view, you can breathe a bit easier. While AI is moving faster than we could have imagined, diffusion will take some time.
We have the chance to address immediate challenges with AI. And time to lay the foundations for a future where human and artificial intelligence complement each other in ways we're only beginning to imagine.

Human Inspiration for this piece: AI as Normal Technology
You can listen to Arvind Narayanan speak about his work on the Hard Fork Podcast (one of my favourites), here: