The AI Tension —
Implementation vs Existential Risk

Artificial intelligence is advancing rapidly, and with it comes a growing tension: how should we build it — and what happens if we get it wrong? This month’s Christchurch AI meetup brings together two perspectives that rarely share the same stage: practical implementation frameworks, and the existential risk case for taking AI danger seriously.
The Talks
Beyond Prompting: Multi-Dimensional AI Literacy and New Operating Models for the AI Era
Dr Elsamari Botha — Beyond Prompting
AI is already being deployed across industries — but implementation is not just technical. It’s organisational, ethical, and strategic. Integrating AI into organisations is more than a technological upgrade; it demands a transformation in how we work, decide, and deliver value.
In this session, Dr Botha will define what multi-dimensional AI literacy really means beyond prompting, present frameworks for building AI capability across organisations, explore new operating models for the AI era, and discuss the skills employees need to thrive in AI-enabled contexts.
The Doom Thesis — Why “If Anyone Builds It, Everyone Dies”
Hazel Shanks — The Doom Thesis
What if the biggest risk isn’t misuse — but success? Hazel introduces the Doom Thesis: the general argument for existential risk from advanced AI.
Together, we will explore AI labs’ plans to build superintelligence, the core argument behind the doomer perspective, whether current governance approaches are enough, what it would actually mean to pause or restrict AI development, and whether the Yudkowsky book is worth reading.
Both talks converged on a theme: AI success depends less on better prompts and more on building the leadership, governance, coordination, and judgment systems that make powerful tools safe, shared, and strategically useful.
Key takeaways from Dr Elsamari Botha’s presentation, Beyond Prompting: Multi-Dimensional AI Literacy and New Operating Models for the AI Era
- AI adoption is failing because organisations are optimising individuals, not systems
- Frontier firms treat AI as institutional capability
- Big difference between AI as personal productivity and AI as organisational infrastructure
- One person’s insight becomes everybody’s baseline
- Governance must be tied to value, not compliance theatre
- The missing layer is coordination
- AI literacy is too narrow; AI fluency needs judgment
The second speaker of the evening, Hazel Shanks, opened with a provocation: what if the biggest risk isn’t misuse, but success?
Hazel introduced The Doom Thesis, an overview of Eliezer Yudkowsky’s book, “If Anyone Builds It, Everyone Dies”.
The doom thesis rests on three big claims:
- Intelligence is real and powerful. Humans dominate the planet because intelligence beats other advantages.
- AI capabilities are still scaling rapidly. She argued there is no obvious wall yet, with benchmarks and task horizons continuing to rise.
- AI systems are grown, not crafted. We train models through optimisation, but do not fully understand what is happening inside them.
I’ll be giving my rebuttal to the Doom Thesis during TechWeek at the EPIC AI Conference, in my presentation The Case for a Superabundant Future. You can see the agenda for the full-day conference on Thursday, 21 May on the EPIC AI Conference page.