Roundups with Ros - Thinking about AI
Community Networks • May 9, 2025
Ros Rice, Executive Officer Journal Entry May 2025
Everyone is talking about AI and here at CNA I confess we have used it several times to clarify issues, or to understand complex legislation. It seems like time to consider other perspectives than the prevalent one of AI being the answer for everything.
Recently Patrick, who works with me, attended a meeting that explored other perspectives, and I would like to discuss some of them.
I am now quoting from the booklet handed out at this meeting. “The use of AI should be aimed at enhancing public and community services and improving the quality of jobs, not aimed at job displacement.”
This is perhaps the most common worry today. We have realised over the last few months that the creators of AI might not be thinking about its human effect. For many of them, it seems the financial reward will matter more than the collective good and wellbeing. So how do we manage the path ahead to ensure only safe and responsible use of AI? I believe, in the end, it is entirely up to us.
Firstly, why do we always need the latest device or new technology? Some of these big changes don’t work well for us. They often use a lot of power and are data hungry. Some are more complex than we actually need. Maybe we need to scale down. Large technology also carries “environmental, social and governance costs”.
It would be good to have “Bespoke, risk-based AI regulation that focuses on protecting those most vulnerable to harm, including specific groups of people. Our ecosystem should be appropriately robust to ensure that AI in its present and future forms remains a net benefit to the lives of New Zealanders.”
One of the strongest recommendations is that AI is never used without human oversight. AI is known to have what are called hallucinations. AI operates mainly on prompts from a human. It then draws only on what has previously been fed into it. Should the prompts be unclear, or the information fed to the AI be supposition or untrue, the eventual response of the AI cannot be correct. There are many examples of AI answers that are plainly incorrect or ridiculous. This is why the person sitting at the computer must, with knowledge and competence, always review the AI response.
Likewise, what if the AI is fed deliberately false information and asked deliberately skewed questions? Where are the checks and balances to ensure the response is not something that deliberately disadvantages employees or the people affected by it? This is another case for legislative control of the use of AI responses.
There are issues of privacy if AI is able to access data and release results that share private information. AI can have access to much information that is not normally publicly available. What happens if this creates public AI responses that share such information? Surely now is the time for the privacy concerns around using AI to be covered in the Privacy Act.
One of the issues raised in the PSA booklet is that Facial Recognition Technology is not good for Māori. The question needs to be asked: “who benefits, and who bears the risk?” When we discuss discriminatory profiling and surveillance, it can well be “indigenous communities who suffer the most when biased technologies intersect with systemic racism”.
This is very possible in broader social and political contexts, just look at what is happening in the USA today. “The harms of Facial Recognition Technology are not abstract theories.”
I cannot recommend the booklet A.I for Good strongly enough. You can access it via the PSA – Te Pūkenga Here Tikanga Mahi; view the digital copy here.
There are many more issues we as a society need to discuss. These are just a few, but most importantly, we as humans in this space need to oversee what is happening, what AI is producing in our own work, and what unintended consequences we need to be aware of. We need to step up to this sooner rather than later; AI is already here and ubiquitous.