Welcome to The Monthly Signal
This is Innovation Download — a monthly update on what's moving in Australia's startup and innovation ecosystem, translated into everyday takeaways worth considering.
Something shifted late last year with the release of Anthropic's Opus 4.5 model. On the back of it, we've seen a rapid acceleration of new tools that go beyond simply prompting for an answer.
In fact, it's been a tidal wave. The infrastructure surrounding how we work is changing (again), and this time it doesn't wait to be asked. All of a sudden, the agentic world came to life. It just started working.
To understand why this moment feels different, it helps to reflect on how we got here. GPT-1 laid the groundwork for a new way of working: a 117-million-parameter model trained on over 7,000 books. Quietly impressive, but narrow. Then ChatGPT arrived in late 2022, built on 175 billion parameters, and proved the concept to the world, triggering a wave of adoption that hasn't slowed since. The underlying models expanded frighteningly quickly, from hundreds of millions of parameters to billions and now potentially trillions. But for all that power, those models still required a human in the loop. You still needed to write the prompt, review the output, and ultimately put the pieces together yourself.
That's the part that's just changed.
It took four years to get from GPT-1 to ChatGPT. It's taken just under two for the agentic world we were promised to go from concept to tools that run autonomously across your entire work stack.
The iteration cycles are shrinking. The leaps are getting bigger. AI isn't just moving fast; it's getting faster at getting faster. In hindsight, it's easy to recognise GPT-1 in 2018 as the beginning of something enormous. But the speed with which we got here is, well, crazy.
As one might expect, this has had a number of second-order effects. The need for compute, energy, and the physical layer of data centres is growing faster than most predicted. This is where the hardware story meets the software story, and where Australia finds itself in an unusually strong position — with AI data centres being established across the country at a pace that would have seemed unlikely two years ago.
The infrastructure is arriving and the agents are here. But infrastructure and access don't automatically produce capability; whether the opportunity gets captured depends entirely on the user sitting on top of it all. It turns out it's "turtles all the way down."
Stories On Our Radar
Australia's AI infrastructure bet is getting serious (Firmus)
The rundown: Project Southgate is Firmus Technologies' national AI infrastructure program, built in partnership with CDC Data Centres and NVIDIA. The first stages are under construction in Tasmania and Melbourne, delivering up to 150MW and 18,500 NVIDIA GB300 GPUs expected online by April 2026. Total planned capacity scales to 1.6GW by 2028, backed by an initial $4.5 billion investment and a potential build-out to $73.3 billion. All facilities will run on 100% renewable energy, and the project is majority Australian-owned.

Takeaway: Australia is building world-class AI infrastructure at the same moment its workforce is being reshaped by the very technology that infrastructure supports. WiseTech cut 2,000 jobs citing AI transformation. Block cut 4,000. Given the trajectory AI is on, it's no surprise, but the question sitting underneath all of it is whether the people being displaced will upskill fast enough to stay relevant — or whether the path of least resistance for most organisations is simply to hire more agents instead.
Agents are officially here and Notion just showed us what that actually looks like (Notion)
The rundown: Notion published a behind-the-scenes look at how their new custom agents are running across their own teams. Knowledge bots answering repeat Slack questions, feedback routers triaging incoming work, security agents filtering noise from real alerts. The agents work across Slack, Calendar, Mail and external tools, so the context your team has already built travels with them.

Takeaway: What Notion is really showing here isn't a productivity upgrade. It's the early version of a single operating system for work, one that spans every platform and absorbs every repetitive task. What's interesting is who's building them.
Like Claude Cowork, these agents aren't built only for engineers: anyone with a clear goal and a conversational prompt can now put them to work. The LLM itself will help you spec and build the agent if you tell it what you're trying to solve. The barrier isn't technical anymore; it's knowing what problem you actually want to hand off. As some have said, this really is the time of the proverbial "ideas guy".
We were lucky enough to host Notion at CDH last week for a hands-on look at their new agent platform. Here's a wrap of the event.
Anthropic ships Claude for your desktop — and the coworking space (Claude)
The rundown: Anthropic launched Claude Cowork in January, describing it as Claude Code for everyone who isn't a developer. Where a regular Claude conversation is a back-and-forth, Cowork can be given access to a folder or files on your computer and works autonomously through it — reading, editing, creating and organising files while you queue up the next task.

Takeaway: The agent conversation has largely been had by people who are "technically inclined" — developers, founders, and early adopters comfortable with new interfaces. Cowork shifts that. When AI can work directly inside Excel, Word, or a folder of PDFs on someone's desktop without requiring them to learn a new tool or change how they work, the addressable market expands dramatically.
When products expand their addressable market, new questions arise. Excel proficiency has been a career cornerstone for decades. Financial modelling, data analysis, reporting: entire professional identities have been built on mastery of tools that Cowork can now navigate autonomously. Where does that leave the millions of workers who spent years becoming genuinely excellent at the tools AI can now operate? And if the barrier to entry drops, does the premium on specialisation drop with it? Maybe. But what gets rewarded next isn't the ability to use the tool. It's the judgment to know what to build with it, what questions to ask of it, and when to steer it in the right direction. Just like using an LLM to write, the skill isn't in the output; it's in knowing when to step in.
Podcast Episodes Worth Your Time
OpenClaw: The Viral AI Agent that Broke the Internet - Peter Steinberger | Lex Fridman Podcast #491

Peter Steinberger, creator of OpenClaw, spends much of this conversation tracing a line between curiosity and agency — the idea that the best things get built not because someone planned them, but because they were annoyed something didn't exist yet.
OpenClaw started as a WhatsApp relay to his own computer while travelling in Marrakesh. When he accidentally sent a voice message the agent had no support for, it independently worked out the file format, found his API key, and transcribed the audio via curl. He hadn't taught it any of that. From these humble (and surprising) beginnings, what is now OpenClaw was born.
The conversation gets more interesting when Steinberger describes an agent modifying its own source code — not because he programmed it to, but because it understood its own architecture well enough to rewrite itself. His distinction between agentic engineering and vibe coding runs through everything. Agentic engineering is knowing the system well enough to direct it. Vibe coding is prompting blindly and hoping. The output might look the same. The control is completely different. And that distinction maps directly onto the question this month keeps circling: as agents take on more of the work, what does the human in the loop actually need to bring?
His answer, implicitly, is understanding. Not just of the task, but of the system executing it. His claim that agents will replace around 80% of existing apps lands differently when you consider he built a billion-device product from scratch. This isn't speculation from someone on the sidelines. It's pattern recognition from someone who's shipped at scale — and who can see clearly what the current tools are pointing toward.
The bottleneck isn't production anymore. It's judgment. The teams that get the most from agents won't be the ones who prompt the most. They'll be the ones who understand their systems well enough to know when the agent is right, when it's wrong, and when to step in. That's the question this issue keeps returning to. The hard part was never the data centres or the compute. It's the human layer sitting on top of all of it.
Helpful Resources:
💻 Claude Cowork Prompt for LinkedIn hooks by Ruben Hassid
📝 How to create a content IP by Brendan Hufford
🤖 Best practices for creating and optimizing a Custom Agent by Notion
💻 10 Claude Cowork Workflows that actually work by The AI Corner
🎉 6 months of free Notion Business for all CDH friends (T&Cs apply)
🎮 Join us for Best of NVIDIA GTC — and be in the running to win a DGX Spark personal AI supercomputer