Press conference on AI guardrails policy

Subject
The government's AI guardrails policy; 'high-risk' AI; Australia's concerns with AI.
E&OE

ED HUSIC, MINISTER FOR INDUSTRY AND SCIENCE: Morning, everyone. Thanks for joining us. Let me start with an observation that is unremarkable but, I hope, newsworthy: this is probably one of the most complex policy challenges facing governments the world over, and the Australian government is determined to put in place measures that provide for the safe and responsible use of artificial intelligence in this country. It’s a really important day. We’re responding to the challenges that artificial intelligence poses. 

I do recall that the AI Safety Summit at Bletchley Park in the UK last year was, in many respects, unprecedented, because for a long time there was a pretty permissive view that technology could be developed without government intervention, but those days are well and truly over. Governments understand we need to work with business to find ways to make artificial intelligence safer. 

At home, here in this country, we conducted extensive consultations to understand what the community expects. We brought together an expert group to help guide thinking around what the risks are and what the best ways to respond might be. I indicated at the time that, after those processes, we’d work with the international community where we could and we’d regulate locally where we had to. And we heard loud and clear from the broader Australian public that while AI has huge benefits, the public wants to be protected if things go off the rails. So today you’re seeing our response. And that’s in two parts. 

First, we’re proposing guardrails for the safe and responsible use of AI in high-risk situations. And we’re proposing that those be mandatory. They’re going to require organisations developing and deploying artificial intelligence models to properly assess those risks. And they’ve also got to plan how they’ll manage those risks – test their AI systems to make sure they’re safe, share those processes transparently, make it clear when AI is being used, especially when it looks human-like, make sure that a person can take over from AI at any moment, and make sure that people are held accountable for safety issues that might arise. 

Now, this is going to take some time to implement. We’ll consult over the next four weeks on these proposed guardrails, and after that we’ll decide on the best legislative approach to take. That could include updating current legislation, introducing framework legislation, or bringing in an Australian AI Act. 

But businesses, importantly, don’t need to wait for this work to occur, because the second part of what we’re releasing today is a voluntary standard that takes immediate effect, developed in tandem with businesses and other organisations. With the standard, businesses can get cracking right away on safely and responsibly using AI. It gives them time to prepare for the mandatory guardrails, and it will give Australians peace of mind that protections are being put in place. Because we know overwhelmingly Australian business wants to do the right thing. 

We’re also releasing a great bit of work, I might add, by the National AI Centre, called the Responsible AI Index. It shows around 80 per cent of businesses believe they’re doing the right thing on AI, but less than a third are actually following best practice. What the Australian government wants to do is create a bridge between best intention and best practice. A lot of what we’re releasing today is designed to do just that. The voluntary code gives practical ways for businesses to achieve what they want to achieve – and that is the safe and responsible use of AI. 

So let me just make these points: if you’re already using AI in your business and it could be high risk, this means you. If you’re supplying AI products to other businesses, this means you. If you’re thinking about using or supplying AI in your business, this means you. A really important part of what we’re releasing today is a guide on what high-risk AI is and what it looks like. A simple example is when it’s being used to hire people. AI has been shown to make bad decisions that discriminate against people on the basis of their race, gender or age. Put simply, we can’t allow AI to undermine basic rights like that. 

Finally, I just want to extend my deep gratitude and thanks to the expert group for some outstanding work on what we’re releasing today. As I said at the outset, this is a very complex issue. We chose people who could help us out from a wide range of perspectives and backgrounds, from ethics to technical capability to industry experience. Those perspectives have all been reflected in what we’re releasing today, and our proposal is stronger thanks to their work and their views. I also want to thank Stela Solar, who heads up the National AI Centre – a centre brought into the department from the CSIRO – and whose leadership has been first class. 

So, as we move forward together in bringing in these protections, we will be relying on all our expertise. I look forward to what we’re able to achieve. And with that, I’m happy to answer any questions. 

JOURNALIST: Minister –

ED HUSIC: So, one, two, three. Sorry, okay, your voice rang out, so let’s start with yours, shall we? 

JOURNALIST: When would you like to see these mandatory guardrails in place in an ideal world, and will they apply to overseas companies operating in Australia or is it just Australian companies? 

ED HUSIC: I guess a number of things. First, we are obviously keen over the next four weeks to take views on board about the shape and implementation of the guardrails. So we’ll have more to say about that. I just want to make the point again: the voluntary standard provides a set of actions that businesses can take to help them identify risks and respond to them in a practical sense, and they can get moving on that right now. 

And I think it’s important not just for their employees, their customers and the broader public; it’s also important for those businesses to get it right themselves. We know that AI, particularly generative AI, can yield some massive economic benefits for the country. Work done by the Tech Council of Australia says up to $115 billion in economic value could be generated between now and 2030. So getting it right means building trust and increasing the use of the technology, and the voluntary standard can well and truly help get us on the way there. 

JOURNALIST: Minister, I just want to ask about high risk. Would anything involving children be high risk? And would there be any limits on children’s or teenagers’ ability to use AI, like, for example, ChatGPT and writing essays or the way AI might be applied to, you know, child cohorts and people assessing data? 

ED HUSIC: It’s important to point out in this conversation that there’s a lot of AI being used right now that doesn’t necessarily pose a risk and is providing great benefit. I’ve seen and know the work of firms like Harrison AI, a great SME here in this country, that’s working in radiology to fast-track the identification and treatment of cancer. So there’s a lot of AI that’s doing well. Where there are higher risks – and we outline this in the work we’re releasing today – we do think, for example, that generative AI is more likely to be high risk, and we do need businesses to recognise that. 

To the question you’ve asked me about where AI interacts with children, we think that is a high-risk area where we do need some protections, and businesses need to be upfront about the fact that their AI will have that intersection and about what they’re doing to protect kids and give peace of mind to parents. 

I also indicated when we kicked off this work last year that there are probably about a dozen different laws in different spaces where portfolios are currently thinking about the impact of AI. And my friend Jason Clare, as Education Minister, working with the states, has been looking at the way AI is used in education – particularly, to the heart of your question, where it’s being used in homework or tests and the like – and has been setting out those standards with his state and territory counterparts. 

Look, there are some uses of ChatGPT in businesses, and some of them, you know, are happening right now, even in the media. Is it high quality and perfect? No, it’s not. And even some of the biggest companies in the world recognise that there are quality issues with their AI. It’s always going to involve us being able to run an eye over that work to make sure it’s fit for purpose. That philosophy or mentality of a human being involved is embedded in some of the guardrails we’ve put forward. So where, for example, AI might not be working in the way that was intended, you’ve got to have a person able to intervene. 

If, for example, AI is making a decision that impacts on someone in a bad way, you’ve got to be able to have a mechanism to appeal that decision or get that decision reviewed. And we want through those type of guardrails to build the confidence in the way that the technology is working. We want the technology to work for us, not the other way around. 

JOURNALIST: So just quickly, no age limits being considered at this stage? 

ED HUSIC: No, I think when it comes to kids – and I think that’s kids of all ages – where AI might have an impact on them, we want the guardrails to be a guide as to how businesses should responsibly deploy that AI. I think that is what the Australian public wants to see as well. 

JOURNALIST: Minister, just on point 6 of the voluntary code, which is around the labelling of AI-generated content, would you consider making that mandatory across the board, so not just for the high-risk applications? 

ED HUSIC: Across what, sorry? 

JOURNALIST: Making that mandatory across the board rather than, you know, only for high-risk applications? 

ED HUSIC: I think there’s probably value in businesses labelling it if they have used AI. But, if I can be completely direct with you, that’s going to be a tricky thing. Because if it’s in a low-risk area, businesses will say, “Well, why do I have to label everything as AI?” But if it’s in a high-risk area, that’s probably something where governments should act – and, in fact, not probably: where it impacts people in a negative, harmful way, we do act. The leadership of the Attorney-General has been really important in terms of deep fakes and revenge porn, and we acted straight away as a government on that. It was really important work to do. So where we do have to act in the interests of the public, you can rest assured, based on our track record, we’ll do just that. Charles. 

JOURNALIST: Point 5 talks about enabling human control or intervention. Can you guarantee that will always be an option moving forward? 

ED HUSIC: We’re saying in the guardrails that that work needs to start happening. And it’s not just us; the world over, it’s described as the kill switch – that is, the on-off switch – and it is under active contemplation in different parts of the world. There is an understandable concern that if the AI being deployed is operating in a way that is not in line with what was expected, you’ve got to have a way to intervene. And that’s what we’re talking about in the guardrails. As for the way we bring in those mandatory guardrails – and this comes back to the question I was asked earlier about how quickly we can move on implementation – I can tell you that’s where we want to head. There’s a bit of complexity in that which we’ve got to work through. But we know the job is there and it’s got to get done, right? And that’s what we’re signalling through the work we’re releasing today. 

JOURNALIST: Minister –

ED HUSIC: Sorry, I’ll just go to James first. 

JOURNALIST: Most of the countries that we like to compare ourselves to have focused their efforts on building AI capability rather than focusing on the risks associated with AI – they’ve done the regulation in the background as they invest heavily in their own industries. We’re not doing that here. Are there any plans to boost the level of investment in capability-building so that we understand what we’re doing? 

ED HUSIC: I guess I take a different perspective on that, and I’d certainly encourage you to look at it from a different viewpoint. There are about 500 firms in this country, as you would well know, involved in AI development, doing great work. We deliberately set up within the National Reconstruction Fund, in the enabling capabilities and critical technologies space, a billion-dollar target fund to help support investment in this area. There’s a lot of private investment, as you would know, in AI activity both here and abroad. And it builds off, importantly, basic research that’s been supported through our universities and government investment there. 

We think that through mechanisms like the National Reconstruction Fund, the Industry Growth Program or the CRC-P program, there are a range of avenues through which people can grow that. Now, there’s probably another element to your question, if I can anticipate it: I often get asked, why aren’t we developing our own large language model? There’s nothing stopping that from happening. But, as you’d appreciate, if you look at the development of OpenAI, billions of dollars were brought in, particularly through private pathways, so we have to be mindful that a major investment is required. We obviously want things like the National Reconstruction Fund to play a role in that, and they can make their calls on individual propositions that step forward. I think there’s a lot of scope for us to be involved. But it will also require firms who’ve got those ideas to step up with solid business cases and investment propositions. 

JOURNALIST: Can I have one question: we’ve got people like Toby Walsh and Anton Van Den Hengel, who are two global experts in AI. They both say that Australia is massively underinvesting in AI right now and in building local capability. Isn’t there a disconnect between what you’re saying and what these global experts are saying?

ED HUSIC: Well, I appreciate and respect the role of experts and what they’re saying. I think I’ve identified some of the avenues that are available for investment in this space. I did champion the establishment of that target fund in the NRF, and I don’t think a billion dollars is anything to sneeze at. If firms have ideas and investment propositions that can deliver a return, they should, by all means, step up. If you look at the investment in AI globally, there’s a lot of money going in, as I know you’d be tracking, as others do. There’s probably not a day that goes past without, for example, the Australian Financial Review reporting on some new deal to back AI investment. I think there’s a lot of money going around. 

JOURNALIST: Minister, Meta right now, if you clock any search on WhatsApp, Instagram or Facebook, is using AI built into the search functions, and AI models are scanning whatever you post, whatever you comment, whatever you like. Under this code, were it to be made mandatory, are you saying that the user would be able to turn off Meta’s ability to use AI in scanning that information or in those search bars? And, as well, Meta yesterday said that there should be age verification limits at the download stage rather than the account creation stage on Facebook. Can I get your reaction to that as well, please. 

ED HUSIC: So, on your question, there are a number of areas where work is currently happening, right? The Communications Minister is leading work on online harms and is working with those social media platforms, as is the eSafety Commissioner, on some of the issues that you’ve raised. So I might leave it to the Communications Minister to continue that work. 

I just want to emphasise again: AI should not be seen as universally bad, right? It has some benefits. It does have risks, too. And we’re talking about how to manage some of those high-risk areas in the work we’re releasing today. I don’t want people coming out of this thinking that all AI is bad and therefore should not be used, because there are a lot of organisations using it well and using it to create benefit. But where there’s high risk, the stuff we’re outlining today is designed to tackle that. 

I’m going to go to Brandon and then I’ll come back to you, if you don’t mind. 

JOURNALIST: You mentioned the deep fake sexual material bill having passed through already. What are you doing to make sure the government’s approach across portfolios is coordinated, particularly given the definition of “high risk” is still up for –

ED HUSIC: Sure. Well, obviously the work we’re setting out today is designed to ensure that it can be picked up not just by business but by government. You’ve also obviously been tracking the establishment of the National AI Assurance Framework within government. And you can see the individual bodies of work: what I described in terms of the work of the Attorney-General, plus the work he’s doing on copyright. You’ve got work being done to examine the impact on employment law. You’ve got work being done on online harms. You’ve got work in the health space, kicked off by the Health Minister, looking at the intersection of AI and health. So there is work happening there. And we think the risk and guardrail identification we’re releasing today can provide a lot more of that consistency. 

JOURNALIST: Will there be fines for those who don’t follow the mandatory guardrails? And just so I can get you on the timing, are you hoping for next year for them to be in place? Is that something you’re looking at? 

ED HUSIC: Good question on compliance and enforcement. Again, that’s work we’ll carry out through the consultations over the next four weeks. I just want to make this point, if I may: anyone who thinks we can close our eyes or that we don’t need to act in this space – I think that’s fanciful. Governments the world over recognise we’ve got to have some sort of enforceable framework in place. That’s what we want to consult on. I don’t want to pre-empt the consultations, if you don’t mind, because I think it is important to get views in a good-faith way. But we have indicated we’ve taken this very seriously as a government from the get-go. Once you saw the release of ChatGPT back in November ’22, within roughly six months of that we said, “Right, we’re going to kick off consultation on what we’ve got to do on high-risk AI.” We’ve reported along the way, and through the voluntary standard we’re releasing things today that businesses can act on right now to build confidence and trust in the use of AI, and they should get cracking. 

JOURNALIST: Yeah, Minister, just to follow up on the previous question, some of the legislation has already passed and a lot of the work is already ongoing. You’re saying what’s being released today will provide greater coordination across government. But how is it being coordinated today? How do you ensure the work that’s already ongoing is coordinated? 

ED HUSIC: Because the work occurs between ministers and departments. I mean, this has been really important work we’re releasing today to help identify risk, identify ways to manage that risk and build trust in AI. So it’s building awareness across the board. And some of the measures I mentioned earlier – notably, the AI Assurance Framework – are designed to do that. Importantly, too, it’s not just the Australian government; the states and territories have signed up to that assurance framework as well. 

I’ll go here, here and then here and then we might wrap it up. Charles. 

JOURNALIST: The Grattan Institute – 

ED HUSIC: Sorry, before we go on to separate issues, if there’s something else, then I’ll come to your question. 

JOURNALIST: Yeah, I’ve got a separate issue as well. 

ED HUSIC: Okay, who’s got AI-related questions before we move on? So, James and Brandon – I think you’ve had a few good bites at the cherry. 

JOURNALIST: There have been some surveys of comparable countries suggesting Australians are actually more concerned, more fearful of AI than, you know, our contemporaries. Why do you think that is? 

ED HUSIC: Look, I might leave it to the research analysts to answer that question, but I do take on board very much what you say. Some of the work by the University of Queensland with KPMG showed that less than half of the public believes the benefits of AI outweigh the risks. So it basically sets out the challenge: the public is concerned. We have heard that concern, we are acting on that concern, and it’s in the interests of business to act on that concern as well. We think the work we’re releasing today can start the process of building trust, and we’re urging businesses to take up the voluntary standard. It is in their interests to do this, and we think it’s a practical way for them to take action now while we work out the shape of the mandatory guardrails. 

Last question on AI from Brandon. So, is there another one? Okay, one, two and then the other questions. Right. 

JOURNALIST: In terms of AI adoption, in the consultation paper released today you mention an integrated approach through programs like the Next Generation Graduates Program. I’ve heard that some programs struggle to find appropriate candidates to fill those positions. What are you doing to ensure that businesses can really adopt AI in an appropriate way, especially small businesses who might not have the resourcing to [indistinct]. 

ED HUSIC: You’re raising two things. Quickly, on finding candidates suitable for a program: I might leave that to the program managers to deal with. We do want to encourage SMEs to use AI, because if it helps them work smarter, faster and better, it’s in their interests, it’s important for national productivity, and it’s important for building stronger firms that will create stronger jobs long term. And if we can encourage that uptake through the work we’re releasing today, that’s really important. 

JOURNALIST: Just out of interest, does the government use AI in any way? Does your office use it, and how? 

ED HUSIC: Absolutely. I mean, across government we are using AI. Longer term, where AI can speed up the work of government and improve the quality of work for the public, I think that’s important for us to do. As I said, to close your eyes or turn away from using AI doesn’t make sense. We don’t want the country to lag its international competitors, who are thinking about how to use AI more and more. But what is important is the trust. Automated decision-making is probably one likely candidate for the use of AI – that is, using AI to help process applications quicker and help make decisions. Where that decision-making process goes off the rails, there have to be protections. 

I think we’ve learned through Robodebt, for example, that there are real-life consequences if you’re making decisions that people can’t appeal when they believe those decisions impact on them unfairly. And so, again, the guardrails anticipate that. The voluntary standard expects businesses to think about that, and things like the National AI Assurance Framework are designed to protect on that issue.

Now, I’m conscious there are other questions outside of AI. I am more than happy to talk AI all day, but I don’t think you are. So, I’ll start with Charles, yourself and then go down the back. 

JOURNALIST: Thanks. The Grattan Institute has a report out today on gambling losses, estimating Australians lose roughly $1,600 a year per person to gambling, the vast majority of that through poker machines. Would you like to see the states take a bigger role in clamping down on poker machine losses, given it’s something the federal government can’t control? 

ED HUSIC: Look, the Australian government takes the issue seriously. We are working on it, we have announced some measures and we’re working on more. And I think by the end of this term we will probably have done more on the issue of gambling than most other federal governments. It’s not for me to speculate – I appreciate you’re inviting me to do so, but it’s not really my space. I’d just refer you to the work being done by the Social Services Minister, Amanda Rishworth, and the Communications Minister, Michelle Rowland. That’s probably a better place to follow that up, if you don’t mind. 

Down the back and then to here. 

JOURNALIST: I know it’s not your portfolio, but we’ve got a story this morning on the ABC about thousands of Australians being asked to pay membership fees to GP clinics to access bulk-billed medical care. What’s your initial reaction to that? 

ED HUSIC: Well, I’m grateful that you started your question in the way that you did. Look, I’ll just make the point: we are the party of Medicare. We believe strongly in universal health care. It makes a difference in my part of the world in Western Sydney. We want people to be able to use their Medicare card to get accessible, affordable health care. Ninety per cent of the GPs in my area provide bulk billing – really important. Anything that undermines the value of bulk billing we’d be concerned about. We’ve done some things in the course of this government, as a new government, to encourage greater uptake of bulk billing, and they’re showing signs of success. The urgent care clinics have provided a way for people to get help quickly through Medicare instead of sitting in an emergency department. What I would say is that I suspect this is very much on the radar of the Health Minister and will be considered and responded to, because it’s a serious issue that’s been put forward, and I imagine it will be dealt with in due course. 

JOURNALIST: The coalition has put forward a plan that will [indistinct] investment fund for profits of nuclear power reactors to then, you know, shower investment on the local communities that choose to host them. What do you make of this plan? 

ED HUSIC: Geez – all talk, little detail. I mean, that’s the problem with this coalition: they talk big, and they’re sort of like the student cramming in the homework right at the last minute. You know, we’ve got an election coming, and we don’t have serious policies being put forward with detail and costings. On nuclear, I reckon you just can’t have a post-it-note approach to policy – that is, just an intention flagged with very little detail. 

In terms of that, let’s see if it stacks up. Most stuff they have announced has pretty much come undone and unravelled within 24 hours of the announcement. I’d be very interested to see the detail. Again, the coalition’s hallmark has been opposition to everything we do: say they’ll do something, don’t provide the detail, don’t do the legwork, and then have to scramble to fix up the mess afterwards. I think the way the coalition approaches policy is enormously disrespectful to the Australian public. The public deserves better than that, and they should be held to account for their approach. A very lazy approach to policy. 

Thanks, folks.