The Emperor’s New Algorithm
Australia’s new Expert Group lays bare the Minister’s naked truth.

I found myself contemplating Hans Christian Andersen’s “The Emperor’s New Clothes” as I read through Minister Ed Husic’s announcement of the new Artificial Intelligence Expert Group.
It is composed entirely of academics.
Andersen’s narrative, while centuries old, strikingly mirrors the current state of AI governance in Australia – a tableau where the allure of intellectual prestige might just blind us to the naked realities of true visionary governance.
Instead of thoughtful and practical solutions, we seem to be chasing a notion of Responsible AI without understanding what it means, or how regulating “high risk” applications will lock out innovative startups, handing industry giants a substantial moat.
Those at the table make the decisions

My AI journey in Australia has been a testament to the power of diverse voices in shaping technology’s trajectory. I’ve met some fantastic individuals and truly innovative businesses making waves in their respective industries.
There is so much going on here that I’m excited to share I’m planning a whole series on the different companies Australia has spun out with AI as an engine.
It’s so much more than just ChatGPT or OpenAI wrappers.
So you can imagine my surprise when Minister Husic unveiled the AI Expert Group on the 14th of February.
While the group contains an impressive round-up – a rich tapestry, a who’s who of scholarly might – it resonates with the silence of absent threads. The rich wool of industry innovators, startups, and technologists who navigate the practical chasms of AI’s potential and peril daily is glaringly absent.
These are the ones who are risking it all to bring the technology across the valley of death.
This cloth, much like the emperor’s invisible suit, seems to float on the vanity of academic accolades. I’m left wondering, are we adorning our AI governance policies in finery that the rest of the world cannot see?
The Dissonance of the Unheard Child’s Perspective

In Andersen’s tale, it was the unfiltered clarity of a child that unveiled the emperor’s folly to all. The child’s innocence was what ultimately enabled the truth to be heard.
The emperor wore no clothes.
I speak with AI pioneers daily. Their insights are crisp with the pragmatism of experience, often diverging from the theoretical paths laid by academia.
Many have had journeys that do not align with social norms.
Some have come from communities of practice. Others from underrepresented minorities who have seen a need. Many are simply mums and dads with an idea and now the means to execute.
Their absence from the Expert Group strikes me as an oversight, a collective turning of the nation’s gaze away from the essential candor that could safeguard our technological sovereignty.
It’s a stark reminder that the most critical voices are often those not echoed in the halls of policy but found in the workshops of innovation.
The press release, while outlining the group’s immediate priorities, overlooks the strategic importance of developing sovereign AI capabilities.
In a global landscape increasingly defined by technological self-reliance, Australia’s focus should not just be on setting guardrails but also on fostering an environment where homegrown AI can thrive.
This approach not only secures economic and national security interests but also ensures that Australian AI reflects the unique cultural and social fabric of the nation.
To Weave a Fuller Tapestry

I’m drawn to the notion that AI governance, much like the finest cloth, requires the interlacing of diverse fibers.
The inclusion of industry voices in the Expert Group is not merely a gesture of inclusivity, but a necessity for crafting policies that resonate with the texture of real-world application. These aren’t the only threads notable in their absence.
I have recently begun experimenting with locally run language models on my laptop.
There is so much untapped potential when we can embed models directly on a device. The benefits that can flow to Australia’s regional and remote communities are yet to be realised.
Imagine deploying a locally run LLM to the APY Lands. Suddenly we have not just a mixture-of-experts model deployed, but the capability to use these experts where we have traditionally struggled to attract professionals.
We could deploy legal, medical, engineering, teaching and writing expertise with a single keystroke.
The potential to close the gap has never been greater.
There is potential for AI to serve as a platform for cultural preservation and social equity.
The inclusion of Indigenous knowledge systems and perspectives in the AI governance dialogue is not just an opportunity but a necessity. I’m not saying a token Elder is enough. True representation is needed.
There is potential here to address every cultural inequality that has been introduced since the White Australia policy.
The voices of the minorities who stand to gain the most from this new technology need to be represented in any discussions.
AI has the potential to address longstanding social, economic, and environmental challenges. However, achieving this requires a governance model that is as dynamic and multifaceted as the technology itself.
Without the right mix of voices in this narrative we risk losing sight of the true power AI can bring. We risk regulation for regulation’s sake, offering only to extend the advantage of the already-advantaged few.
The rich narratives and values of our diverse nation and its communities should be brought into the very fabric of Australian AI.
I envision a future where technology not only advances societal interests but does so in a way that respects and celebrates our nation’s and its people’s diverse heritage.
A bigger table needs a larger cloth

I’m no seamstress. Nor am I a tailor. But I am left to ponder the path forward for AI governance in Australia.
My understanding of systems reminds me that the true strength of our policies lies not in their theoretical elegance, but in their pragmatic efficacy.
By including the wisdom of those who forge AI’s future in the crucible of innovation, we can clothe our governance framework in a tapestry that truly reflects the depth and diversity of Australian ingenuity. By adding the voices so often overlooked from our communities, we can bring about policies that will close the gaps left behind by our own inability to see the forest for the trees.
Dr. Richard Matthews is the founder and CAIO of RHEM Labs. As a military veteran he created LoganAI, an innovative AI companion designed to assist in high-stress communication situations. With personal experience navigating mental health challenges, he’s reshaping disability perception through accessible AI solutions at RHEM Labs Pty Ltd. Driven by empathy and cutting-edge technology, his work aims to ensure no one feels alone in their journey.