Interview with Kirsten Aiken, ABC's The Business

Interviewer
Kirsten Aiken
Subject
AI regulation, OpenAI, ChatGPT
E&OE

KIRSTEN AIKEN: The creator of ChatGPT has visited Australia ever so briefly to talk about the possibilities artificial intelligence offers for business and the need for regulation by the Federal government. OpenAI CEO Sam Altman has been meeting legislators around the world since warning that mitigating the risk of extinction from AI needs to be a global priority. He met Industry and Science Minister Ed Husic during his visit. The Minister joined me earlier. 

Welcome to The Business. What was Sam Altman's pitch to you about why AI should be more tightly regulated in Australia?

ED HUSIC, MINISTER FOR INDUSTRY AND SCIENCE: I think he recognises, as we've seen here in Australia, that while people appreciate that the technology does a lot of great stuff, there is a concern about it getting ahead of itself. People wonder about the automated nature of decision-making and also the quality of the work that's being pumped out by a generative AI like ChatGPT. 

So, he certainly understands that. He said, remarkably, he wouldn't trust ChatGPT all the time and so he understands quality is an issue, but I don't think that's something you can bank on forever and a day, because the nature of this technology is it improves continuously. And so, he's sort of working, or said he's prepared to work with governments that are thinking about the shape of regulatory frameworks. 

And, positively, when I asked him, he indicated that he was very keen for our scientists and researchers to be able to engage with OpenAI, his company, to better understand the models, particularly before the release of new generations of ChatGPT, for example. And he's agreed to that.

KIRSTEN AIKEN: Can I ask you, how do you weigh Sam Altman's self interest in OpenAI with his desire to restrict how it's used?

ED HUSIC: Yeah, look, I think that's a very valid point. It's an important question that has to be borne in mind. We take on board the suggestion and we take on board viewpoints that will be put to us by industry and various representatives, including Sam Altman and others. But we have to make a call ultimately as a government about how we shape up our regulatory frameworks. 

We want modern laws for modern technology. We've got to get the balance right between the needs of industry and the expectations of the community, and that's what we're planning to do and that's why we've opened up our consultation. OpenAI is welcome and they've said they will make a contribution. Business is welcome, and the community, academics and researchers are welcome to do the same, and that's what we're encouraging people to do.

KIRSTEN AIKEN: I know you're alive to the potential the technology could have in a number of areas, but a report on generative AI, led by Australia's chief scientist has raised concerns about its potential impact here. How concerned are you about the potential for disinformation and a loss of trust in democratic systems?

ED HUSIC: I think that's a big threat, Kirsten, it's definitely an issue. People want to have confidence that the material they're seeing reflects reality. It's why, for example, people have put forward the notion of labelling, for instance, some of the graphic material that you see, or, if written material is produced substantially using generative AI like ChatGPT, making it known that it's been produced that way. That's really important.

That's one thing that we're exploring. But from the disinformation perspective, seeing images on your TV that might provoke a reaction in the public, that demands a response from governments or others, that is a really serious issue, and it's something we do need to think deeply about in terms of how we respond.

KIRSTEN AIKEN: Briefly, what did you make of the sell-off in US stocks last month after a fake image believed to be AI generated was published on Twitter showing an explosion near the Pentagon?

ED HUSIC: Yeah, I think that's another example of the impact of disinformation, one that translates in an economic or commercial way. And it just highlights, we've seen just recently, I think, the head of the CBA was caught up in a deep fake or disinformation exercise, which they're very concerned about.

So, it is something that needs to be tackled. But again, I just want to emphasise, if you don't mind, this point in this whole approach on AI. What I've tried to do is not be an evangelist and not catastrophise. We've got to recognise the risk, but we've also got to understand and appreciate, and I think the broad public does too, that AI has opened up options for us in terms of the way we generate, use and analyse data, and the accuracy of that work, in a way that previous generations could never have imagined.

It's been really important in some of the use cases I've referenced previously, for example, the way it helped us develop vaccines in the course of the last pandemic; in times past we would never have thought we could develop a vaccine that quickly. And AI played a really important role in that, and I think the public's got a really mature view on this. Yes, we get the upside, but just make sure you're looking after us on the downside, and that's what we've got to do.

KIRSTEN AIKEN: You've mentioned the consultation periods won't go on and on. Just how quickly are you looking to bring regulation in?

ED HUSIC: The consultation that we did open up was for eight weeks. It'll end late July. We'd obviously hope as quickly as we can to shape up a response, but I want to make sure that I'm not just giving you an artificial or arbitrary time frame. We want to be able to get this right, given how complex the subject matter is.

KIRSTEN AIKEN: Ed Husic. Thank you.

ED HUSIC: Thank you.

ENDS