Press conference at Australian Parliament House

Subject
Regulation of artificial intelligence technology; National Reconstruction Fund.
E&OE

ED HUSIC, MINISTER FOR INDUSTRY AND SCIENCE: As a government, we certainly want Australia to be the best at developing and using AI. That means using AI safely and responsibly. With this new wave of development in AI, people clearly want to know that we’ve got the kerbs in place to deal with it, but also to maximise the benefit. 

So this has been something that the Albanese government has been watching, obviously taking note of community concerns, but also seeing what’s happening internationally and ensuring that we have a legal framework that’s fit for purpose. But ultimately, what we want are modern laws for modern technology, and that’s what we have been working on. 

So earlier this year we commissioned the Prime Minister’s National Science and Technology Council to look at the whole development of generative AI and large language models and to give advice to government on that. And they’ve been working away on that. Then we also wanted to see what we could do in terms of modernisation of our legal frameworks to account for this, bearing in mind there’s probably a dozen different laws that currently exist that take into account the impact of AI and have some sort of response. But, clearly, recent developments require us to think further. 

Apart from releasing today the National Science and Technology Council’s paper, the rapid response report on generative AI, we also released the Safe and Responsible AI in Australia discussion paper. This is opening up a public discussion about what we expect to see in terms of the way that our laws are shaped up, the way we have modern laws for modern tech. 

And I want to assure you that we’re not obviously starting from scratch – as I mentioned a moment ago, Australia already has strong laws and guardrails in place, but what we’re asking in this discussion paper is: is there enough? And we want the experts and community to be able to step forward with their views and to provide contributions over an eight-week process during which we will be accepting submissions. And after that point in time we’ll obviously take those on board. 

I also want to add that this is not the only work that the Albanese government is doing in terms of responding to AI. You’ll see over the coming weeks various portfolios putting out work that they are doing, taking into account that AI has different impacts in different portfolios. And you’ll see that start to emerge. We’re trying to join up all that work to make sure that we have a great legal framework to deal with this. 

I also think it’s important to emphasise this: what we should develop is a framework that levers off our values. Australia is recognised as a great liberal democracy. We’re a trusted partner. We’ve been thinking about this for many years. And I want us to be able to set up a legal framework or a model of regulation that can be used by other countries as well. 

We should be world leaders in this space, taking into account what’s happening overseas but levering off our own background and our own history in this arena. 

It's a responsibility of government to get this right, to recognise these concerns, respond accordingly and, in the process, once we do manage those risks, also get the most out of this. It’s been estimated it could provide between 1 and 4 trillion dollars in terms of economic value. But we’ll get there by being able to have broader community assurance, confidence in the way the technology is being used and applied. 

JOURNALIST: The discussion paper asks stakeholders to consider if we may need to ban high-risk applications of AI. In your opinion, what is the worst of the worst and, therefore, the top of the list of candidates of uses of AI that may need to be banned? 

ED HUSIC: So I want the process to obviously run its course. We are inviting people to identify those areas, those risks, that exist in terms of the way the technology is used. And then obviously that will guide our response. I don’t want to pre-empt that consultation process. 

But I just want to be clear: if that consultation process does bring up high-risk areas that need a regulatory response, that is something that definitely will be considered by government. We want people to be confident that technology is working for us, not the other way around. That it maximises the benefit and that we curtail risk as much as we possibly can. And it’s not just in use but in development too – we want people developing AI applications or technology to be thinking about those things early on. 

JOURNALIST: So does facial recognition fall into that high-risk category? 

ED HUSIC: It depends on, if I can say, the way in which it’s used. I mean, facial recognition is obviously being broadly used in different ways. But if it does through the consultation appear that there are new ways in which that facial recognition technology is being developed and utilised in ways that are outside what the community thinks is acceptable, then clearly we will be taking a very deep look at that. 

JOURNALIST: Given the harms we’ve seen from Robodebt I was just wondering if automated decision-making in welfare assessment would be one of those high-risk technologies, or is that something that you’re letting the other portfolios consider? 

ED HUSIC: I think the good thing about what you’ve raised there is an example of what we are thinking about when it comes to the application of technology, that what we are trying to do is work with business, academia and others to ensure the use of technology works for us and not the other way around, and that people think deeply about potential impacts in the design phase. 

Robodebt was a terrible, terrible way in which automated decision-making operated. It did not necessarily take into account the human impact. It didn’t allow for challenging of decisions. It didn’t allow for an easy way for people to raise their concerns. And people in the community underwent horrific stress as a result of that system. 

We absolutely have to learn from instances like that and to demonstrate we’ve learned from that in the better design of those automated decision-making processes. So, again, I think that will be something that gets considered through the consultation process. But it is a reminder it’s not always just about technology; it’s about all the processes that get set up around it. 

JOURNALIST: I don’t want to be overly reductive about this, but I’m trying to get my head around it: are we talking about an enormous opportunity here with some small risks that need to be managed, or enormous risks but with some great opportunity with it? 

ED HUSIC: I haven’t necessarily worked out the ratio – I’m being quite frank with you. But I do think when it comes to risk, people will focus on that until it’s attended to, until it’s dealt with in a way that people are happy with. And, you know, I’ll consider the benefits at some other point. But there are huge benefits. 

The way in which AI can be applied, particularly alongside other emerging fields – quantum computing as well – can solve some of the hardest problems whose answers have eluded us for many years. We saw through the pandemic the way in which AI was used to help fast-track the development of a vaccine that was crucial in getting us out of lockdowns and saving lives. So AI in the health space, or in education as well, can play a really useful role. 

But some of the things raised in our initial questions – for example, the use of deepfakes that could undermine confidence in decision-making or create misinformation in the broader public – that’s going to be a risk that we have to deal with. And that’s what we want this process to be able to put a spotlight on, those risks. So, again, we can get to the point where there’s a great degree of confidence and trust in the way technology is used to deliver benefit for the community. 

JOURNALIST: Can I ask: you’ve identified – excuse me – community acceptance of AI as a matter of trust, and you talked about Robodebt a little bit. Do we have the expertise within the public service to adequately deal with the application of AI on a broad scale across government services? And then in relation to large multinationals who routinely use AI to deliver services, they operate in a black box. How do we regulate the algorithm when you can’t see the algorithm? 

ED HUSIC: So, two things: Minister Gallagher, as Finance Minister and Minister for the Public Sector, is looking at that area of AI capability within the public sector. I do think it’s an area where we need to build our skills. And I think that applies more broadly as well, not just in government but in business too. It’s part of the reason why we funded some support for small and medium enterprises to be able to better embrace AI, but do it in a way that’s safe for them and has a positive outcome. So I think building awareness is a big challenge. 

The Productivity Commission has also, James, identified this awareness and capability within business to be able to make the right decisions around new technologies as a thing that’s got to be tackled. So we do need to do that. And we probably need to build a lot more awareness and understanding of it within parliamentary circles too, frankly. The more we get our heads around this technology, the better it is in terms of the way decisions get made. So that is going to be really important. 

On that hole you opened up – and I don’t want to go down a rabbit hole, if you don’t mind me putting it that way – on the black box and being able to open up the technology: as you would appreciate, given the complexity that now exists in the operation of modern systems, it’s not as easy as just being able to open that black box up. 

But what it will require – and this is why we’re trying to embark on this process – is to get people thinking about the design of their AI and their technology and then obviously their use so that they can explain what is going on. 

And in some parts of the world, that type of thinking about the design of AI – and being able to work with regulators to explain how it’s being stood up and how it will be used – that will be really important. Again, we hope that through the consultation process that gets dealt with. 

JOURNALIST: You said we already have strong laws regulating AI indirectly. But the Privacy Act is decades out of date. How important are the Attorney-General’s Department’s proposals to give Australians greater control over their data or to sue for serious breaches of privacy – how important are those reforms to give people confidence that, you know, AI isn’t going to run amok with all the information that people put out there in the public sphere? 

ED HUSIC: Well, I think you’d have to recognise that the Attorney-General has started that process of modernising the Privacy Act and taking into account the way that the public expects that their data is used, and we are going through that process. So there is work that’s obviously being done there. Everyone will say we should be doing this faster. 

If you take into consideration what I’m putting forward to you today, a lot of the thinking around AI, particularly the changing views around the technology, has pretty much been triggered by ChatGPT, which was released back in November last year. Here we are six months later as a national government talking about modernising our laws. 

I would like to put to you all that it would be very hard for you to find examples of modern governments in the Australian context that have moved that quickly to be able to refresh our legal framework to take into account what’s happening in technology. And so across different portfolios people are working on that. 

I also point out that Minister Rowland is looking into the issue of online safety and the way that AI is used in that arena to deal with misinformation, for instance. So we’ve got people as a new government turning their minds to this, mobilising responses. And, as I said earlier, we want modern laws to deal with modern tech, and we’re focused on being able to deliver just that. 

JOURNALIST: I’ve got two questions: will this consultation go into whether ChatGPT should be used widely at universities, schools, will it look at that issue? And then separately, there’s been a lot of big claims over the past couple of weeks and months from experts in this field claiming that AI could take over tens of millions of jobs and that it should be considered – the risk should be considered in the same class as pandemics and wars. What do you make of those kinds of claims? Is the government looking at AI through that lens? 

ED HUSIC: Okay, let me just start with the education side. My friend and colleague Jason Clare has referred to the House of Reps Employment and Education Committee an inquiry to look at the use of generative AI in the education sphere. And state and territory governments are starting to look at it as well. 

With technology – and some of you may be familiar with this – I’ve been very engaged and interested in this space through the entirety of my parliamentary career and well before it as well. And it’s always important to see the light and the shade. So clearly, you know, ChatGPT and generative AI can be useful, particularly from a research perspective, but we also want to ensure that younger students are developing their skills in a way that isn’t unduly influenced by things like generative AI. So getting that balance right is really important. 

On thinking this through, too – you’ll see some upcoming work that we’ll probably announce in the area of AI, automation and work. I can’t say more now, but it’s been something that I’ve been focused on. In fact, in opposition we were the first to actually create a portfolio around the future of work, thinking about the impacts of technology. That triggered a Senate inquiry a couple of years ago to start people thinking about the impact of technology on employment. 

As much as the use of technology creates new roles, as much as it disrupts existing ones, we’re also very conscious that if we don’t manage it right it can lead to greater levels of inequality, and we have to be, again, seeing the light and the shade when the technology is being applied. 

It’s also important, I think, when it comes to this: I’m not an evangelist on technology, and I don’t catastrophise. I think what we need to do is have a clear, sensible way to consider what’s good about technology, what doesn’t work the way we want, and how we deal with risk. And that’s the type of process we’re trying to encourage through this consultation and the work that will happen in other portfolios as well. 

I think the worst situation is if governments haven’t recognised risk and haven’t started to respond to that risk – then the sense that the technology is getting away from us becomes even bigger. But I think, too, if you’ve got people within the technology sector saying, “Well, there’s probably room to regulate on this,” it tells you that we do need to move as one on making sure that we’ve got a good regulatory framework to deal with the way in which technology is developing. 

JOURNALIST: I might ask a question about the National Reconstruction Fund, if that’s all right. During estimates, Treasury officials revealed that the rate of return might be the bond rate plus 2 per cent. Could you confirm that? And could you also provide some thoughts on why that particular rate of return was chosen? 

ED HUSIC: What I can confirm is that we are taking into account advice around the rate of return. And ultimately that will be contained within the investment mandate that will be recommended to us by the board. So we’ve got a bit more to go before we get to that point. What we do want, as we’ve said, is for the reconstruction fund to revitalise Australian manufacturing and build new capability. We want that growth capital to be made available to Australian companies, and we want them to be able to deliver a rate of return to government and the taxpayer, but do it in a way that lets those firms access that capital at very competitive rates to help them grow and, importantly, create new jobs. And if we get this right, it will be a huge bonus to the economy longer term. 

JOURNALIST: So what’s that? Treasury officials are wrong? 

ED HUSIC: No, I’m not saying anyone’s wrong or right or whatever. I’m just saying that we’ve got a bit more for this to play out. And we’ll obviously announce the final rate of return publicly when we release the investment mandate, which I’m very committed to. Thanks. 

ENDS