Christchurch AI Meetup · December 2025

AI Reasoning & Context Engineering

Monday, 1 December 2025 · EPIC Innovation Centre, Christchurch

🧠 Andy Masters — Hierarchical Reasoning Models & the Christmas Tree Packing Challenge · 🎤 Caelan Huntress — Context Engineering

Christchurch AI Meetup Recap — December 2025

Talk 1

Hierarchical Reasoning Models: Small AI That Beats the Giants — Andy Masters

Andy Masters — Lead Solutions Architect at Caitlyn AI — delivered one of the most technically adventurous talks in the meetup's history, weaving together cutting-edge AI research and a festive Kaggle competition: the Santa 2025 Christmas Tree Packing Challenge.

The challenge: fit as many arbitrarily-rotated Christmas trees as possible into the smallest possible box, solved for 1 to 200 trees. Andy turned this into a live audience participation event, deploying a fully vibe-coded interactive web app — built entirely through prompting with no hand-written code — that let attendees compete in real time to pack trees manually, with chocolate prizes for the winners.
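The scoring idea behind the challenge can be sketched in a few lines. This is an illustrative toy, not the Kaggle competition's actual metric: each "tree" is a 2D outline, a placement is a translation plus rotation, and the score is the side of the smallest axis-aligned square containing every placed point (smaller is better).

```python
import math

def place(points, x, y, theta):
    """Rotate an outline by theta radians, then translate it to (x, y)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in points]

def bounding_square_side(placed_trees):
    """Side of the smallest axis-aligned square covering all placed trees."""
    xs = [p[0] for tree in placed_trees for p in tree]
    ys = [p[1] for tree in placed_trees for p in tree]
    return max(max(xs) - min(xs), max(ys) - min(ys))

# A crude triangular "tree" outline (illustrative only).
tree = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.5)]
# Two trees nested head-to-toe: one upright, one rotated 180 degrees.
layout = [place(tree, 0.0, 0.0, 0.0), place(tree, 1.0, 0.0, math.pi)]
print(bounding_square_side(layout))
```

Even this toy version makes the difficulty visible: the score is non-differentiable and riddled with local optima, which is exactly why search-based methods like genetic algorithms come into play.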

Behind the game was a rigorous exploration of hierarchical reasoning models (HRMs) — a class of small, domain-specific AI models that can outperform giant LLMs on abstract reasoning tasks. Andy traced the evolution from a breakthrough October 2025 paper describing a 27-million parameter supervisor/worker model architecture, to a follow-up "Less is More" paper that simplified the approach to just 7 million parameters — doubling performance by making the model recursive rather than hierarchical.

The key insight: by tokenizing problems in their native abstract space rather than encoding everything into English, these tiny models achieve remarkable results. A 7-million parameter recursive reasoning model outperforming Gemini on logic tasks. Running on a consumer GPU. Trainable at home.
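The recursive-refinement idea can be caricatured in a few lines of NumPy. This is a hedged sketch of the general shape, not the paper's architecture: one small shared network is applied repeatedly to refine a latent answer state, rather than stacking many distinct layers, which is how a tiny parameter budget buys deep computation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
W = rng.normal(0, 0.1, (DIM, DIM))  # one shared weight matrix, reused every step

def step(latent, problem):
    """A single refinement step: mix the current latent with the problem encoding."""
    return np.tanh(latent @ W + problem)

def recursive_reason(problem, n_steps=8):
    """Apply the same tiny network n_steps times; depth comes from recursion,
    not from extra parameters."""
    latent = np.zeros(DIM)
    for _ in range(n_steps):
        latent = step(latent, problem)
    return latent

answer = recursive_reason(rng.normal(size=DIM))
print(answer.shape)
```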

Andy's live demo showed a genetic algorithm-based training-data generator he'd built to teach a Tiny Recursive Model (TRM) — vibe-coded in Python without writing a single line himself — and his Kaggle leaderboard position (~600th of 1,200+), reached with the generated training data alone, before the TRM itself had been implemented.
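The genetic-algorithm loop Andy described follows a standard evolve-and-select pattern. The sketch below is a stand-in, not his generator: the candidate representation and fitness are deliberately toy (minimising a scalar distance from zero stands in for minimising the bounding box), but the mutate/select skeleton is the same.

```python
import random

def fitness(candidate):
    # Stand-in objective: values nearer zero play the role of a smaller "box".
    return -abs(candidate)

def evolve(pop_size=20, generations=50, seed=42):
    """Evolve a population by keeping the fitter half and mutating it."""
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]       # selection (elitist)
        children = [c + rng.gauss(0, 0.5) for c in survivors]  # mutation
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(round(best, 2))
```

The appeal for training-data generation is that every generation yields many labelled (layout, score) pairs as a by-product of the search, which is exactly the kind of supervision a small model can be trained on.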

His closing message: "It sounds like quite a hard problem and it is — but you can get so far with vibe coding."

Talk 2

Context Engineering: The AI Skill That Actually Matters — Caelan Huntress

Caelan Huntress — Head of Learning & Enablement at Agentic Intelligence, AI educator and public speaker — closed the evening with a talk on context engineering: the practice of deliberately shaping the information environment you give an AI to dramatically improve its outputs.

Where most people think of AI capability as a function of the model itself, context engineering flips the perspective: the operator is the bottleneck. The same model, given richer, better-structured context, produces categorically different results. Learning to craft that context is the core skill of the proficient AI operator.

Caelan introduced his PILLARS framework — a structured approach to prompt construction that ensures AI has everything it needs to perform at its best: Persona, Intent, Length, Language, Audience, Restrictions, and Style. He demonstrated how applying this framework transforms vague, generic responses into targeted, actionable outputs.
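The checklist nature of PILLARS lends itself to a simple structured-prompt builder. The field names below come from Caelan's framework; the assembly format and example values are my own illustration, not his tooling.

```python
PILLARS = ["Persona", "Intent", "Length", "Language", "Audience",
           "Restrictions", "Style"]

def build_prompt(task, **fields):
    """Prepend each supplied PILLARS field as a labelled line before the task."""
    unknown = set(fields) - {p.lower() for p in PILLARS}
    if unknown:
        raise ValueError(f"not a PILLARS field: {unknown}")
    lines = [f"{p}: {fields[p.lower()]}" for p in PILLARS if p.lower() in fields]
    return "\n".join(lines + ["", task])

prompt = build_prompt(
    "Summarise the attached meetup recap.",
    persona="You are a technical editor.",
    audience="NZ AI community newsletter readers",
    length="Under 150 words",
)
print(prompt)
```

Fields are emitted in framework order regardless of call order, so the operator fills in whichever pillars apply and the structure stays consistent across prompts.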

The session also previewed the 2026 calendar — including the February meetup on AI data testing and coaching with AI — and introduced the AI Coaching Power Hours running through January: free weekly Zoom sessions for hands-on AI practice.

Key Takeaways for the NZ AI Community

  • 🎄 Vibe coding works for serious engineering — Andy built a full genetic algorithm optimiser and interactive web app without writing a single line of code
  • 🤏 Tiny models can beat giants — a 7M parameter recursive reasoning model outperforms Gemini on abstract logic by reasoning in native token space, not English
  • 🔁 Recursion over hierarchy — the "Less is More" paper showed that simplifying HRMs to recursive models doubled performance with a fraction of the parameters
  • 🏗️ Context is the product — the quality of AI output is a function of the context you give it, not just the model you use
  • 🎯 PILLARS framework — Persona, Intent, Length, Language, Audience, Restrictions, Style — a repeatable structure for prompts that actually work

Watch the Full AI Meetup Recording