Wasting Time on AI Regulation and Sovereign AI in New Zealand

A consolidated argument for why New Zealand needs coordinated action on AI policy, and a proposed framework for achieving it.

Author: Tom Barraclough, Brainbox Institute
Version: Consolidated from blog series and discussion papers (October 2025)


At a Glance

New Zealand has adopted a distributed approach to AI regulation—no dedicated "AI Act," but instead reliance on existing general-purpose legislation. This is a sensible decision. However, it creates significant challenges for anyone trying to understand what rules actually apply to AI, how different groups can work together productively, and what overall direction we're heading in.

The core argument is this: We are wasting time on AI regulation and the possibilities of Sovereign AI because of four interconnected problems—an Information Problem, a Coordination Problem, an Economic Problem, and a Policy Problem. These problems reinforce each other. Addressing them requires deliberate, systematic effort.

The proposed solution: Adopt "Sovereign AI" as a guiding policy vision—not necessarily meaning New Zealand must build its own foundation models, but rather empowering New Zealanders to have greater agency over AI systems through literacy, infrastructure, and governance. This vision can coordinate activity across government, industry, academia, and communities.

Concrete initiatives are already underway or proposed to address each problem, ranging from collating regulatory materials in machine-readable formats to conducting market studies on digital infrastructure requirements.


Part 1: The Problem

Why This Matters

In its AI Strategy, the New Zealand Government has emphasised economic growth potential while taking a distributed approach to regulation. Unlike the European Union, we won't be deploying a dedicated, cross-cutting AI Act. This approach has consequences.

Even people wanting to adopt and deploy AI responsibly face a difficult task. If they want to understand what they can or can't do, how are they meant to find out? What substantive support exists to drive adoption beyond self-help guidance? A diverse range of regulatory material on AI is littered all over the place—at least ten statutes, plus a significant volume of softer regulatory instruments and guidance from government, the private sector, and other bodies.

This creates four high-level problems that undermine meaningful effort on AI policy.

The Information Problem

New Zealand's distributed approach to AI regulation means relevant information is published widely by many institutions. Anyone wanting to engage in productive discussion about AI governance must traverse at least ten statutes, numerous guidance documents, professional standards, and materials from multiple agencies and sectors.

The practical consequences are significant. Relevant information gets overlooked. Few people noticed the dedicated framework for "automated electronic systems" that has governed AI systems like the e-Gates at our borders since around 2018. It's difficult to know whether non-statutory information on AI is still relevant—some materials were prepared when "AI" meant "predictive analytics" and are now effectively out of date, but knowing this requires substantial contextual knowledge.

Background analysis exists within government on why various statutes are relevant and how they apply, but this hasn't been made public—a missed opportunity to save others from duplicating that work.

The challenge runs deeper. Even if you can bring all relevant information together, coming to grips with it is a massive task requiring time, money, and significant cognitive exertion. While some might think AI itself can help with this analysis, any work with real-world consequences requires explanation and verification. How much are you willing to bet that an AI system had access to all relevant information, properly understood your context, and produced a reliable answer? Given how hard it is to find everything already, people delegating this task to AI should proceed cautiously.

This information problem creates an uneven playing field for policy discussions. People who can't find or grapple with all the relevant material come to discussions at a disadvantage, hampering their ability to contribute informed perspectives. Otherwise valid viewpoints get disregarded as insufficiently informed. This is inefficient and disabling, driving frustration, undermining trust, and complicating good-faith collaboration.

The Coordination Problem

When it comes to AI regulation and the concept of AI sovereignty, time could be better spent if different groups coordinated more effectively.

The starting point problem. Regulation for AI already exists—the volume of instruments that could apply is enormous and overwhelming. The real task is implementing existing regulation systematically, efficiently, and effectively. Modifications will be necessary over time, and regulation can enable rather than inhibit adoption by providing certainty and a common baseline. But coming at this discussion on the basis that AI is completely unregulated is unproductive. When someone advocates for AI regulation, it's often unclear whether they appreciate the existing regulatory landscape.

The information re-use problem. Many organisations hold fantastic knowledge and insights on the current state of regulation here and overseas, but this knowledge is hard to share. For any output on AI regulation published by the public service, there's usually a larger volume of background analysis that informed it but hasn't been made available. If information isn't shared effectively and people aren't specific in their advocacy positions, discussion becomes ambiguous, duplicative, and ineffective.

The systematic approach problem. Effective coordination requires a structured agenda that identifies who is working on what and which information resources exist already. In New Zealand, this should be achievable—the policy process is relatively transparent and well-structured, key relationships can be easily established, and relevant data should be available. Without such an approach, we go around in circles, duplicating work in some areas, skipping past others, and wasting time on the wrong questions.

The incentive problem. Different groups have different interests when it comes to AI. Participation in policy discussions is driven partly by public good motivations, but also by self-interest—commercial, professional, or reputational. This isn't inherently bad, but it affects how information is shared and coordination is pursued. The most useful information is often shared selectively, sometimes for legitimate reasons like protecting trust and confidence, sometimes because it provides competitive advantage. Participants with economic returns from their involvement have greater staying power than those without.

The Economic Problem

Everyone needs to generate money to pay their bills. This becomes a problem when different groups derive different economic benefits from participating in AI policy discussions, affecting the pace of their work, their priorities, and their ability to participate effectively.

Government and public sector participants generally receive salaries—dependable paycheques independent of the time work takes. They're also responsible for important processes that other participants can't see or don't understand. Combined with incentives around media exposure and political considerations, government's incentives are generally to move slowly unless some other factor accelerates activity. People in government find this frustrating too—no one deliberately wants to move slowly for its own sake—but the financial pressures they face are fundamentally different from other groups.

Industry falls on a spectrum. Well-funded businesses often earn money from the technology being regulated or from advising on compliance. Smaller businesses may be barely surviving while trying to establish market position. For all businesses, time and money spent on policy must be justified financially. Revenue has to cover expenses. This is a very real constraint for anyone bearing responsibility for paying staff each week.

Communities are often most affected by AI but completely under-resourced. They show up to discussions facing complex questions with none of the time, information, or support required to address them meaningfully. Everyone else speaks a different arcane language. Discussion revolves around barriers to action, writing new documents, or proposing non-specific things that can never realistically be implemented. Many real barriers are never spoken aloud—political considerations, media risk, market strategy. Community contributions are seldom compensated, even though the skills and experience they bring are often essential.

Academia faces perhaps the most difficult economics. Anyone who's chased research funding or juggled teaching, research, and service knows the challenge. Academics are expected to have done the reading—or written most of it—but face many of the same barriers as everyone else.

Multi-stakeholder bodies play essential roles but face their own pressures. They're expected to bring the resources of well-funded companies or governments but frequently face conditions similar to those of other participants. They navigate conflicting pressures to create, disclose, and withhold information, and to initiate, avoid, or maintain coordination depending on circumstances.

Elected politicians bear responsibility for taking a bird's-eye view across equally important and urgent priorities. Key motivating factors include media risk, political relationships, and election cycles. These can hamper information production and undermine coordination—or, alternatively, be powerful contributors to resolving these problems entirely.

The chicken and egg problem. Addressing the economic problem requires raising money and having institutional arrangements that inspire confidence among potential contributors. But answering the necessary questions from potential funders requires information, coordination, and resourcing in the first place.

The Policy Problem

If we can't articulate what we want from AI or AI regulation, we don't have something to coordinate around, can't determine what information is relevant, and have no way of making sensible and predictable value judgements.

The definitional challenge. What do we mean by AI? How do we include cutting-edge language models and predictive analytics systems while excluding email filters, thermostats, and spreadsheet formulae? What is "good" when it comes to AI, what is "bad," and how do we distinguish? Even with things like facial recognition or deepfakes, there are usually exceptions where people agree they might be permissible. It's easy to say systems should be fair and unbiased, but what does "biased" mean, and can a biased system be deployed where its bias can be accounted for and controlled?

The principles challenge. We've defaulted to principles and values as guides—the OECD principles at the centre of New Zealand's AI Strategy, more than 200 international statements of "AI principles" published since 2017, and human rights frameworks. The trouble is that principles don't easily translate into clear rules. They give us a starting point for what matters, but the same principles can lead to radically different interpretations depending on who applies them and in what situation. The OECD principles are easy to agree to precisely because they're so open to interpretation—but countries endorsing them take wildly different approaches.

This makes it difficult to articulate a national or community direction for AI. Without a shared vision, it's hard to identify coordination points, set a structured agenda for investigation, or determine what information is relevant.


Part 2: A Proposed Solution — Sovereign AI as a Guiding Vision

What Does "Sovereign AI" Mean?

When I first heard "Sovereign AI," I was sceptical—taking it to mean countries (governments especially) should build their own AI models, ending up with some kind of ChatGPT developed by government with uniquely "New Zealand" characteristics. I still think this idea has enormous hurdles. But I asked myself what a viable and realistic vision of Sovereign AI for New Zealand might look like.

Different people mean different things when they talk about Sovereign AI. What matters for my purposes is this:

Not just nations. AI sovereignty doesn't have to be about nation-state level action. It can also be about actions by communities, multi-sector groups, or individuals.

Not just governments. It doesn't have to be exclusively about government and public service activity. Any sovereign AI approach would have to include a range of sectors, and many "Sovereign AI" models internationally are created by companies or public/private partnerships.

Not just foundation models. Sovereign AI doesn't have to focus solely on training entirely new foundation models from scratch on New Zealand data, hosted and run on New Zealand computers.

Not all-or-nothing. We can increase AI sovereignty without being absolute purists—we don't need to get into local GPU chip production, and any approach wouldn't be irreparably polluted by including some data or components from outside New Zealand.

Factors Relevant to AI Sovereignty

When people call for sovereign AI, they're usually thinking about some combination of:

Hosting, jurisdiction, and location. Where are AI systems geographically located? Which countries have jurisdiction and legal authority over them and the people who operate them?

Independence, revocation, and interruption. In what circumstances can users have their access interrupted without consent? What mechanisms—technical, legal, physical—can disrupt access? This includes geopolitical, cybersecurity, and national security considerations.

Cost and equity of access. Theoretical access is meaningless without financial, computational, and educational means to use systems correctly. This applies nationally—describing how a model might be trained in New Zealand is meaningless if financial or computational constraints are prohibitive—and distributively, considering whether people and communities with limited resources can gain reasonable access.

Privacy, disclosure, and information security. Understanding how information is used by AI systems is essential for autonomy and agency. This isn't just about individual privacy but also about use of information by public or private agencies training or using AI systems, and about information that may be sensitive even if not about identifiable individuals—national security information, trade secrets, or anonymised data used for profiling.

Governance and oversight. What systems build confidence in how AI systems are designed, developed, deployed, and reviewed? Who controls those processes, and how can they be independently analysed and audited?

Design, development, and intrinsic properties. How far does the model intrinsically reflect values and principles unique to or aligned with New Zealand? Can it understand a New Zealand accent or local vocabulary?

Customisability. Can an AI system be customised to reflect local values and principles? If it performs in ways we don't like, can we change it?

National sovereignty. How far does the AI system reflect considerations related to national sovereignty for New Zealand and the Crown, or for hapū and iwi under te Tiriti o Waitangi? Cyber-resilience, national security, and supply chain considerations are relevant here.

Upstream and downstream determinants. AI development and deployment depends on upstream factors (energy, data, compute) and creates downstream impacts (environmental, social, economic). Impact on these determinants affects social, political, and commercial tolerance of AI systems.

A Practical Approach to Sovereign AI

Opening up the discussion this way allows quite different approaches that are much more achievable. A practical approach to Sovereign AI for New Zealand could emphasise three key categories:

AI Literacy. Empowering people, organisations, businesses, and communities to have greater agency over AI systems. This includes AI literacy and measures to enhance equity and equality of access. Use whatever systems you want, as long as you're equipped to make informed decisions. This flows through to greater digital literacy and skills on privacy, cybersecurity, and data protection.

Digital Infrastructure. Measures to promote access to computer equipment and software that let people use AI in ways meeting their requirements, informed by the knowledge and skill developed above. Competent people choosing their systems and how they use them, including where and under what conditions.

Fine-tuning. Before investing $150 million in computer equipment and opening the vaults of our shared digital heritage for extraction, we should check how far customising existing models can meet our needs. Has anyone tried this properly? What are the limits? What would we need to test it? Let's find out what fine-tuning can achieve and share the findings.
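
To make this concrete, here's a minimal sketch of what one such experiment could look like: parameter-efficient fine-tuning (LoRA) of an existing open-weights model on a local corpus. It assumes the Hugging Face transformers, datasets, and peft libraries; the model name and data file are placeholders, not recommendations.

```python
# Sketch: adapt an existing open-weights model with LoRA fine-tuning rather
# than training a foundation model from scratch. Assumes the Hugging Face
# transformers, datasets, and peft libraries; model and corpus are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "some-open-weights-model"  # placeholder model identifier
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of every model weight,
# which is dramatically cheaper than full training.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

data = load_dataset("text", data_files={"train": "nz_corpus.txt"})  # placeholder corpus

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=512)
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the next token
    return out

train = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter-out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=train,
)
trainer.train()
model.save_pretrained("adapter-out")  # saves only the small adapter weights
```

Evaluating the outputs of experiments like this against the sovereignty factors above would tell us how far customisation can go before new training is justified.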

If we adopt this way of thinking, many of the tricky decisions about who must do what, when, and in what order will probably emerge naturally. Then, if the case is strong, someone might like to train a new foundation model.

How Does This Help with AI Regulation?

To pursue Sovereign AI is to adopt a regulatory approach. It sets a direction, signalling that activities aligned with it will be encouraged and those that aren't will be discouraged. This helps people understand what new regulation might be required, which changes will be prioritised, which initiatives could be proposed or funded, and how the public service will interpret and apply existing regulation.

If we can agree on a vision for Sovereign AI (the Policy Problem), we can begin to collate and manage relevant information (the Information Problem), set a structured agenda for coordination (the Coordination Problem), and fund initiatives with confidence they'll be useful components of the wider whole (the Economic Problem).


Part 3: Specific Initiatives

The following initiatives are designed to address the four problems systematically.

Existing Initiatives Already Underway

NZ AI Policy Tracker (addresses Information and Coordination Problems). A collation of policy and regulatory materials on AI in New Zealand—documents, guidance, standards, and entities working on AI policy. The tracker serves as a guide for developers, consultants, policy-makers, researchers, and others, and forms the foundation for a knowledge base accessible through retrieval-augmented generation systems with pinpoint paragraph citations. Version 2.0 is available at docref.org.
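
To make the idea of "pinpoint paragraph citations" concrete, here's a minimal sketch of how a paragraph-level knowledge base might serve retrieval with precise references. The record identifiers, field names, and naive keyword scoring are illustrative assumptions only—not a description of the tracker's actual data or of DocRef's implementation.

```python
# Minimal sketch of paragraph-level retrieval with pinpoint citations.
# Identifiers and records are illustrative, not real tracker data.
from dataclasses import dataclass

@dataclass
class Paragraph:
    ref: str     # stable pinpoint citation, e.g. "privacy-act-2020/ipp-1/para-1" (hypothetical)
    source: str  # instrument the paragraph comes from
    text: str

KNOWLEDGE_BASE = [
    Paragraph("privacy-act-2020/ipp-1/para-1", "Privacy Act 2020",
              "Personal information must not be collected unless it is collected for a lawful purpose."),
    Paragraph("hdca-2015/s-22/para-1", "Harmful Digital Communications Act 2015",
              "It is an offence to post a digital communication with intent to cause harm."),
]

def retrieve(query: str, k: int = 3) -> list[Paragraph]:
    """Rank paragraphs by naive keyword overlap; a production system would use embeddings."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(p.text.lower().split())), p) for p in KNOWLEDGE_BASE]
    return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0][:k]

# Each retrieved fragment carries a citation back to a specific paragraph.
for p in retrieve("rules for collecting personal information"):
    print(f"{p.source} [{p.ref}]: {p.text}")
```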

Sovereign AI Discussion Paper (addresses Coordination and Policy Problems). A paper drafted "in the open" on what sovereignty looks like for New Zealand regarding AI, published using DocRef. The paper draws out different meanings of Sovereign AI, prompts discussion and collaboration, and is updated regularly to reflect ongoing discussions.

Brainbox Institute (addresses Coordination and Economic Problems). A think tank transitioning to a not-for-profit holding structure to confirm its public interest orientation and build collaborative infrastructure for technology policy work. Currently hosting projects including one on Internet infrastructure and climate resilience.

Proposed Initiatives

Machine-Readable Regulation Repository (Information Problem). Convert instruments identified in the AI Policy Tracker into machine-readable datasets for re-use by AI systems and the people who design, develop, and deploy them. When complete, anyone in New Zealand could download all AI regulation in bulk in multiple structured formats for compliance systems, research, or analysis using AI. The repository would be updated over time using DocRef's change log and version comparison system.

Several instruments have already been converted (Copyright Act, AI Strategy, Responsible AI Guidance for Business, Privacy Act, Consumer Guarantees Act, Harmful Digital Communications Act), along with international instruments (US AI Action Plan, EU AI Act, GDPR, DSA, UN Global Digital Compact). Conversions underway include Gen AI Guidance for the Public Service from GCDO. Remaining instruments from the Policy Tracker would follow a scoping and triage process.
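
As a sketch of what "machine-readable" could mean in practice, one plausible shape for an instrument record is shown below. The field names and identifiers are assumptions for illustration, not DocRef's actual schema or export format.

```python
# One possible shape for a machine-readable regulatory instrument record.
# Field names and values are illustrative assumptions, not DocRef's schema.
import json

instrument = {
    "id": "nz/privacy-act-2020",  # hypothetical stable identifier
    "title": "Privacy Act 2020",
    "jurisdiction": "NZ",
    "status": "in-force",
    "version": "2025-10-01",      # versioning supports change logs and comparison over time
    "provisions": [
        {
            "ref": "ipp-1",
            "heading": "Purpose of collection of personal information",
            "text": "Personal information must not be collected unless ...",
        },
    ],
}

# Bulk re-use: records like this could be downloaded together and fed into
# compliance tooling, research pipelines, or AI analysis systems.
print(json.dumps(instrument, indent=2, ensure_ascii=False))
```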

AI Literacy Programme (Coordination and Policy Problems). Coordinate activity to equip New Zealanders with meaningful education on how AI and digital systems work, how to verify whether they're working as intended, and how to use them effectively. The EU AI Act provides one example of incorporating AI literacy into a regulatory system. Within AI literacy, I'd include efforts to build capacity in AI governance and assurance—if governance skills are high, people using AI will know how to identify when systems aren't working correctly or are producing unintended impacts.

Entity Map as Reusable Dataset (Information and Coordination Problems). Draw on the AI Policy Tracker and Machine-Readable Regulation Repository to create a list of entities that can be reused for mapping and data analysis purposes.
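
Even a very simple tabular format would make such a list immediately reusable for network mapping or data analysis. The entries and columns in the sketch below are purely illustrative.

```python
# Sketch of a reusable entity dataset; entries and columns are illustrative.
import csv, sys

rows = [
    {"entity": "Office of the Privacy Commissioner", "type": "regulator",
     "ai_role": "Privacy Act guidance relevant to AI", "source": "nz-ai-policy-tracker"},
    {"entity": "Government Chief Digital Officer (GCDO)", "type": "public service",
     "ai_role": "Gen AI guidance for the public service", "source": "nz-ai-policy-tracker"},
]

writer = csv.DictWriter(sys.stdout, fieldnames=["entity", "type", "ai_role", "source"])
writer.writeheader()
writer.writerows(rows)
```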

Digital Infrastructure Study (Policy and Economic Problems). Examine the availability and suitability of digital infrastructure in New Zealand for enabling access to AI systems within a sovereign AI framing, with a particular focus on the cost and distribution of access and on general levels of competence in using that infrastructure from an AI literacy perspective.

Demand Study for Sovereign AI (Economic and Policy Problems). Investigate whether people really want Sovereign AI, what they'd be willing to contribute for access to it, and who might pay. This needn't be purely commercial—if Sovereign AI is treated as public infrastructure, other valuation methods may be appropriate. Geopolitical dimensions of AI cannot be ignored in the current environment.

New Foundation Model Roadmap (Policy Problem). If we decide a new foundation model is required to achieve sovereign AI, what exactly is involved? Australia, the EU, the UK, Canada, and Spain have embarked on this process. A scoping exercise would examine requirements, costs, and feasibility before any commitment.

Register of Use Cases and Harms (Information and Coordination Problems). Collect specific uses of AI frequently raised as requiring regulation. The purpose is to collate regulatory systems that might already address specific harms and assess whether those systems are adequate.
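
A register entry might look something like the sketch below—linking a commonly raised use case to the instruments that may already cover it, with room for an adequacy assessment. The fields and contents are illustrative assumptions.

```python
# Sketch of one register entry; fields and contents are illustrative assumptions.
register_entry = {
    "use_case": "facial recognition in retail stores",
    "raised_harms": ["misidentification", "surveillance of lawful activity"],
    "potentially_applicable_instruments": [
        "Privacy Act 2020",
        "Human Rights Act 1993",
    ],
    "adequacy_assessment": "unassessed",  # to be completed through structured review
}

print(register_entry["use_case"], "->", register_entry["potentially_applicable_instruments"])
```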

Draft Regulatory Code (Coordination and Policy Problems). Modify existing frameworks like ANZCPOSH and other AI regulatory systems to devise a credible and independent regulatory code.

Note on Duplication

Many initiatives may already be underway to address similar issues. Cataloguing them all is difficult precisely because of the Information, Coordination, Economic, and Policy problems. This work should surface initiatives rather than compete with them, while also enabling duplication or integration where that's more efficient or enables more systematic approaches.


Part 4: A Framework for Collaboration

Why a Multi-Stakeholder Approach?

When it comes to technology governance and regulation, there is a powerful and necessary role for an all-of-society approach. Even aside from principled reasons, there are fundamental pragmatic reasons why reciprocal learning and sharing of perspectives are essential.

Government cannot be everywhere at once. People working in government aren't exposed to the same things as people in business or community. Relevant things get overlooked.

Business and industry may not understand or value some of what public servants deal with and the ways they have to work. Businesses also have reasonable limits on how far they can ingest or respond to insights from the wider community.

People in academia or the community bring important perspectives but may never have experienced what it's like to operationalise a regulatory system or build large-scale high-risk business or compliance systems—work requiring years of effort, substantial investment, accountability, and navigating trade-offs between equally important factors.

This is true of almost all public policy, but it is especially true for technology regulation because AI, the Internet, and communications systems are so powerful and pervasive.

Draft Principles for Coordination

Any digital regulatory infrastructure should be developed and deployed so it could be given formal regulatory status by legislative or executive bodies at any time, and so that companies and others feel confident designing and deploying systems against it.

This work should surface initiatives rather than compete with them, but it can duplicate them or invite integration where doing so is more efficient, makes more systematic approaches possible, or enables funds to be deployed more effectively.

Data re-usability is essential. Some authentication or gate in front of data may be appropriate—for example, where information concerns individuals.

Institutional arrangements should be designed around the economic problem, specifically the questions potential funders will ask: What specifically will be done? Why would it help? Are you sure? Wouldn't alternatives help more? Who's going to pay? Why me? Why haven't others already paid? What's in it for me? What are the downsides? Who's accountable if it doesn't deliver?


Part 5: What Next?

An Invitation

This document is published for consultation and discussion purposes, with a view to generating interest in specific initiatives, seeking feedback, inviting collaboration, and building confidence in making financial or non-financial contributions.

Do you disagree? Please say so. There will be aspects of this topic I've under-emphasised, over-emphasised, or missed entirely.

Questions to consider:

What's missing from, or unsuitable in, the sovereignty factors outlined? From your own perspective, which are most important?

How far do the project breakdowns reflect what you'd like to see? Is anything missing? Which initiatives would you most like to see happen, and how might you be willing to contribute?

Who do you know who could play a role in driving different initiatives? Who might be interested in collaborating? What framework would need to be in place to enable collaborative multi-stakeholder activity?

What projects are already underway in each category? How can we map what's going on already?

Let's Do Something

This discussion can be progressed most effectively through talking about specific projects. If you have a project in mind that might advance New Zealand's AI sovereignty or address the problems outlined here, let's talk about it.

Contact: Tom Barraclough — via email, through the Brainbox Institute website (brainbox.institute/contact), on LinkedIn, or using the NZ Sovereign AI NextCloud platform (sovereignai.catalystcloud.nz).


Appendix: Key Resources

NZ AI Policy Tracker: docref.org/brainbox/nz-ai-policy-tracker/2025/en/

Sovereign AI Discussion Paper: docref.org/brainbox/sovereign-ai-nz/3/en/

Brainbox Institute: brainbox.institute

EU AI Act Reference: artificialintelligenceact.eu

OECD AI Principles: legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449


This document consolidates a series of blog posts originally published at brainbox.institute in September–October 2025, together with related discussion papers and proposed initiatives. Published under CC BY-SA (reuse permitted with attribution and share-alike).