Press conference on action to help ensure AI is safe and responsible, APH, Canberra
ED HUSIC, MINISTER FOR INDUSTRY AND SCIENCE: Today the Albanese government’s releasing our interim response to the consultations that we’ve done around the safe and responsible use of AI. As many of you would know, artificial intelligence has helped in a lot of ways. It’s made a difference to many lives, helped solve problems faster, and opened up opportunities to get things done smarter and better.
It’s also predicted that the use of AI and automation will generate up to $600 billion a year for Australia’s GDP by 2030. AI models are able to crunch large amounts of data at record speed, enabling new ways, for example, of detecting cancers or optimising traffic flows, through to improving the lives of those with disabilities and changing the way we train and educate people.
But there is an issue, because the use of AI in business is patchy – bigger businesses tend to use it a lot more than smaller ones. And there are a whole range of reasons why that might be the case – knowing how to use the technology is one thing, but there’s also the question of trust in the technology itself. And that low trust is becoming a handbrake on the uptake of the technology. That’s something we’ve got to confront. At the same time, we’ve got community concern about the potential high-risk issues surrounding AI – will the technology, for example, do what’s expected? Will it cause harm?
Now, the Albanese government moved quickly last year, against the backdrop of some pretty big step-change developments involving generative AI, to consider various government responses. We commissioned the Prime Minister’s National Science and Technology Council to look at the way generative AI was developing and the likely pathways for that development. Then in June ’23 we opened consultations on the safe and responsible use of AI and received over 500 submissions.
As Minister, I personally conducted a number of industry roundtables, and there were other mechanisms as well to consult with industry. That consultation has now framed up a number of things: one, crystallising exactly what some of the concerns are in the broader public’s mind around the use of AI; and, two, starting to set out likely pathways for further government response.
I’m pleased to announce that that response will involve a number of initiatives we can take immediate steps on. First, working with industry to develop a voluntary AI safety standard in the near term. We’ll also look to introduce voluntary labelling and watermarking of AI-generated material, which I’m happy to discuss through the course of our discussions today. And we’ll set up an expert advisory group to help guide the development of mandatory guardrails. While we acknowledge that the majority of AI is relatively low risk, we are working to introduce these safeguards, or rules, for companies that design, develop and deploy AI in high-risk settings.
Now, these mandatory guardrails could include testing of the products as they’re being designed and developed, both before and after release; requiring transparency and openness about how those AI models have been designed and developed, what they’re intended to do, and the expectations around their performance; and, we believe, an element of accountability. Where these models work in ways that were not intended, or not in the way that was advised, we do need to have some elements of accountability there.
Our response will also recognise that a broad range of work is being done by a number of my colleagues across government. For instance, in education, the national AI in schools taskforce and a framework for the use of generative AI in schools have been developed. We’ve established the AI in government taskforce with my colleague the Minister for Finance, Senator Gallagher. And we’ve got the Attorney-General considering the implications of AI for copyright.
And, again, I’m happy to discuss this in further detail later: given that the technology has developed in other countries and crosses borders, we’re trying to harmonise wherever we can and localise where we should, so that we’ve got a framework that can work in with other regulatory approaches being taken elsewhere across the globe.
It’s also important, if I can, just to conclude by emphasising these things: we’ve taken the concerns around AI seriously. We have sought to listen widely and respond thoughtfully, to cooperate internationally on a complex issue, and to set things up so that government can respond more quickly to developments in the technology as they occur. We want to get the benefits of AI while shoring up and fencing off the risks as much as we can, and design modern laws for modern technology.
And with that, I am happy to answer any questions.
JOURNALIST: Minister, in an industry where there are so many unknowns, why are you developing a voluntary AI safety standard and not a mandatory safety standard?
ED HUSIC: Well, we’re doing two things. The range of risk, in terms of the bandwidth, is broad. And so within that bandwidth, for things that we can deal with quite quickly and that present lower risk, we want to be able to work with industry on establishing safety standards.
And, importantly, what industry is very keen to ensure is that those standards apply as widely as possible, so that everyone knows it’s one in, all in. But there’ll be some things that may present a safety risk, or a risk to people’s future prospects, be it in work or before the law. We do need to have those mandatory guardrails, as I said, that say these are the red lines that you cannot cross, and that if you do present risks, we have expectations about how to manage that risk.
Okay, I’ll go here, here and here.
JOURNALIST: On the voluntary and the mandatory guidelines, what are the rough timelines for those two different sets of guidelines?
ED HUSIC: Sorry?
JOURNALIST: You know, how long will it take to get the voluntary standards in place, and how long will it take to get the mandatory ones?
ED HUSIC: So, with the voluntary safety standards, we’re keen to get working on those straight away, and, again, to align with the way that standards are being developed globally. For example, you would have seen, coming out of the UK AI safety summit, a commitment in, say, the UK and US jurisdictions to set up safety institutes. We have a National AI Centre, and it will take the lead on the development of the voluntary safety standard and work with both industry and civil society on that. So we want to move really quickly on that.
As for the expert panel, we want to stand that up and then get them advising government on the mandatory guardrails, and get that happening. We want this work to occur this year, as quickly as possible. We’re not saying ‘thou shalt by this day’, because this is complex work and we want people to come up with a quality response that is workable within industry and also satisfies community interest.
JOURNALIST: As you’re putting together the expert advisory group and choosing, I guess, who sits on it, how do you reflect some of those widely differing views within the sector? For example, some of the bigger, established companies are quite relaxed about increased regulation, while some of the newer players are accusing them of wanting to entrench a kind of monopoly. How do you actually reflect that?
ED HUSIC: Yeah, that’s a good question, because that is a massive challenge, and to be completely direct with you, it’s one that we will need to confront in setting up that panel and, importantly, the response. There’ll be a lot of people who want to be involved in that group. We can’t get everyone, and I’m sure you’ll be reporting on someone who’s got their nose out of joint because we haven’t included them. But we do want to consult widely at any rate, and for the very reasons that you’ve said – there are differing views.
What struck me at the UK summit is some of the biggest names in generative AI saying, “We can’t do this. We will need government to set up the regulatory frameworks because we can’t be expected to do it.” The motivation there is obvious: they’re accountable to shareholders, and they’re not going to do something that’s out of whack with what others are doing. So standardising, having a uniform expectation, is really important.
And the point that you emphasised in your question is also an important one: you don’t want to entrench dominance that’s been gained by being a first mover. And this is the other, if I can put it this way, devilishly tricky thing: we know that there’s huge economic benefit to be gained.
We don’t want to stifle that innovation. You can still regulate and innovate. And I would say to you all that examples of that include life-saving medicines that are being developed under very strict expectations and regulations around their development. The same goes for airlines: airlines keep getting better and safer, and they do that off the bedrock of some fairly solid regulation. So getting that right will be important.
We haven’t necessarily landed, if I can say, Anthony, on precisely who will be in that group yet. We will be taking advice, and I’ve asked my department to start framing that up. We’ve got to get the balance right between people from industry and people with regulatory know-how, and, as I’ve tried to do in this space, inject the ethical considerations as well. And so there’ll be people from civil society.
But, in finally wrapping up my very long answer to your very straightforward question, I think the message needs to be sent loud and clear: we are going through a threshold moment when it comes to regulation of technology. The whole ‘let it rip, do what you want, you’re out there, you can innovate with no boundary’ approach – I think we’ve passed that. Those days are gone.
The days of self-regulation are gone. I think what’s happening internationally is a very strong signal being sent to the tech sector: yep, we like the products that you develop, but where they come with risk there is an expectation in the community that governments will identify that risk and be able to respond to it and contain it.
Josh and then Sarah.
JOURNALIST: On the copyright question, there’s a bit in the paper about the content that these AI models train on, and I think it talks about remedies for copyright breaches and that sort of thing. There are, I guess, a number of interesting cases overseas – lawsuits or complaints from authors, or The New York Times, even – about how their content is sucked into those models, reappropriated and, you know, spat back out. How would you see that sort of system working? Obviously it’s part of the consultation and all the things that you’re doing with industry, but would there be scope for, potentially, not just remedies for breaches but proactive licensing agreements with, say, news outlets or animation studios or record labels and that sort of thing?
ED HUSIC: Yeah, okay. So, just from where I left off with Anthony – this whole idea of permissiveness in the development of technology, and the fact that we’ve crossed that threshold now. In the early days, effectively, that’s how Google got as strong as it did: because it was able to crawl over the internet at that point in time, in the late 90s, soak up all that data and train the work it was doing internally, particularly as it was developing its own AI models, and that built it into what it is now.
Would we allow what happened in the late 90s right now? That’s embedded in your question. You’re seeing the response. Some newspaper outlets, and the generators of that copyrighted material, are saying, “No, we actually need to be compensated and recognised for that.”
It’s a bit difficult. There is a challenge there that I know my colleague the Attorney-General is very much alive to in leading these consultations, which is that you have different copyright frameworks in different parts of the world. If I can use the phrase again, permissive: the fair use arrangements in the US around copyright are quite different to our own. But, again, we will harmonise internationally where we can and localise where we should, and we have those protections in place.
And I think that’s the course of work that will be contemplated by the Attorney-General. And I emphasise the point I made earlier: there is this body of work that I’m involved in right now, but there are colleagues across government who are actively contemplating how to shape up the laws in their portfolios to take into account developments in AI.
The Attorney-General’s is one; the Minister for Communications on online safety is another. The Minister for Home Affairs and Cyber Security is dealing with cyber security, so the national security implications. So across the breadth of government we are working on and fixed on this: AI development has taken a big step change, so how do we shape up our laws to suit community expectations, and some of the things that you’ve raised in your questions?
Sarah.
JOURNALIST: Thanks, Minister. We did ask ChatGPT what questions to ask you – whether the review is just another bureaucratic –
ED HUSIC: Did they help you today?
JOURNALIST: Did they help me? No. But it talked about whether it’s just another bureaucratic exercise or genuinely has practical actions. If I could put it a better way and give my job meaning, I guess –
ED HUSIC: Sure.
JOURNALIST: Is Australia, do you think, behind the game when it comes to regulating? You know, the EU passed its AI Act in December; South Korea, Canada – in terms of internationally, we’ve had a year now. Today we’re getting, you know, consultations and panels and experts. But should we have already put something in place, have some draft legislation? Are we behind the rest of the world?
ED HUSIC: Well, to be fair, I think the world has been grappling with this massive development. I mean, OpenAI had been around for many years developing ChatGPT. When they released it in November 2022, that was generation 3 of that technology, and they’ve since released other generations of that tech. If you look at the international response, yes, the EU is going to be developing its own specific act, but it will still be looking to have voluntary arrangements through the course of this year before it enacts the legislation.
The Canadians are working on things, but they’re still working through the process of their own legislation as well, while also working with industry on it. The US announced a series of moves tethered to the Defense Production Act, in large part through the executive orders that President Biden issued in late October, early November. So the world is trying to work out the appropriate response to this, bearing in mind we’re threading not one needle but multiple needles at the same time here, given the way the technology has developed, to get that balance right.
We want AI to be applied in ways that provide us benefit. If it can develop new medicines, if it can improve the way communities work, if it opens up opportunities that haven’t presented themselves to us, or solves problems, we want that kind of innovation to occur. But if it’s going to present a safety risk, then government has an obligation to respond, and that’s what we’re setting up with the Albanese government’s approach – voluntary safety standards initially, while we shape up the mandatory guardrails that will be announced this year, to get that right.
And that’s broadly in line, I think, with what the international community is doing. And we’ll maintain – can I just emphasise – dialogue with international partners on this too. And while people may think we’re ruling things in or out, can I just emphasise: we are maintaining an open mind as to the shape of regulation. There may be some of the best things that happen within the EU, US, UK, or other responses, that we copy and build on ourselves. And, dare I say, that is the pathway for most innovations – to take what’s there now, build on it and improve it. And that’s exactly what we want to do.
JOURNALIST: Minister, notwithstanding the enormous benefits from AI in terms of innovation, AI also looms as the modern job killer. So you could have a situation where Judge Judy is an AI-generated avatar. What guardrails would you like to see around jobs being protected, to ensure that human judgement and emotion are reflected in decisions?
ED HUSIC: There’s a tendency to think that AI has just crept up, you know, overnight. But it’s been developed since the end of the Second World War; it’s had decades of run-up. During that time, automation affected job outcomes most notably in agriculture and manufacturing. And when there are new ways to do things, it will change the way people perform their jobs.
There is, obviously, a real impact on the way in which people perform their work. In times past, automation had a very heavy impact on blue-collar work. AI is now posing a big challenge to white-collar work in a way it never has before. We’ve seen that in financial markets, we’ve seen it in accounting, and we’ve seen it in the way care is provided as well. You referenced chatbots – in that case, the legal context – but it applies in other ways too.
And this is going to require us to work with industry. What I’ve been concerned about for some time in the Australian context, Andrew, is that businesses will onboard new ways of getting things done using the latest technology and not think about the worker impact. It is incumbent on businesses, too, to think through, when they’re using technology, what it does to the workforce in their organisations.
Again, this is something I’ve mentioned before in talking about my portfolio. I know that my colleague the Employment and Workplace Relations Minister, Tony Burke, is thinking about this and will be considering further work in this space in due course. The predictions – you said it’s looming as a job killer. AI and technology have loomed as job killers for a number of decades, and the predictions vary, can I say.
The OECD at one point predicted that 47 per cent of jobs would be affected by automation. It hasn’t necessarily played out that way. And I think what it does require is for all of us – industry, government – to think ahead about the impact of technology on work. It’s why, for example, we’re modernising our training systems and encouraging a lot more people to be trained up in vocational and tertiary education, opening up those pathways to more education. Because on the human capital side, the skills demanded of workers will be more complex, and so we’ve got to train people up. And that’s got to happen with government and industry working together.
JOURNALIST: In terms of the mandatory guidelines, has the government actually started drafting any legislation? And also, in terms of the high-risk uses, can you give an indication of who would be held accountable for breaches?
ED HUSIC: Sure. So the reason we’re setting up that group is to start sketching out the shape, the form, of those mandatory guardrails. That work will be done once we appoint them. In terms of some of the higher-risk uses, I think, again, we’ll take on board the advice of that group. We’ll be setting them up, so there’s no point us pre-empting their work before they get a chance to get cracking on it. But I think what people are concerned about is that the technology gets ahead of itself and has an impact on safety, for example.
So AI is fundamental to the operation of driverless vehicles, for example. There was a lot of excitement about the prospect of self-driving vehicles, but then we’ve seen they don’t necessarily work in the way that was promised. And so people do expect that the safety elements get considered.
Also, the way in which people get hired, or maybe fired, using an AI-based model. We’ve found biases there. The technology is not something wondrous that occurs and operates all on its own; it’s driven off data that’s input and provided by people. People shape the way the technology is designed, developed and used. And so if it has an impact on the way that you are hired or the way that you are fired, for example, that’s something we would want – and I think the community expects – to identify and deal with. You’re seeing it in the UK, for example, with the UK Post Office: the use of a model in that case led to the dismissal of employees, and they now have to reconsider the way those people were treated. So getting these things right is important, and I think that goes to the heart of why we need some of this stuff.
JOURNALIST: Minister, on real-time facial recognition and some of these other technologies that have been banned overseas: your interim response notes submissions calling for bans here, but it doesn’t come down either way. You talk about seeking to harmonise with, you know, other structures such as the EU’s.
ED HUSIC: Yeah.
JOURNALIST: Ultimately, do you expect we’ll follow suit in banning some of these riskiest technologies, or are you hoping to avoid hard bans?
ED HUSIC: No, I just need to emphasise again: I’m keeping a very open mind as to the nature of the regulatory response and the definition of high risk that will be provided to us through the work of the expert group. You’re right – the EU has taken a very strong approach on the way facial recognition can be used in certain ways. And, again, if we get advice that it has to be dealt with in that way, then we’ll obviously take that advice on board and respond accordingly.
JOURNALIST: Just with what you were saying – the EU and UK and everyone have gone and developed legislation, and it’s great we can learn from it –
ED HUSIC: Yep.
JOURNALIST: But isn’t that opening us up to a risk? We’re sort of waiting on what everyone else does. They’re legislating; we’ll take the best, which is great – but AI is developing so fast. Is that softly-softly approach the best thing given just how fast this technology is developing? Shouldn’t we be acting more urgently?
ED HUSIC: Sorry, I feel you’ve got an impression in your mind that we are going to take a wait-and-see approach. Can I just say, that’s not the approach we are taking. We’ve set out two main ways in which we are responding in the response.
One is working with industry on a voluntary safety standard for lower-risk areas, and then we’ll develop the mandatory one. That work is going to start from this point onwards; that work is going to begin. We will, though, take note of the way developments are occurring in other parts of the world and make sure that we align where we have to – because the technology crosses borders. In the simplest terms, you can access from your phone’s app store products that have been, you know, approved in different parts of the world and that might be available to us.
You can use those right now. So we have to be conscious of the fact that that technology is accessible. And so we want to be able to harmonise where we can. That work starts now. No wait and see. But where we can improve and build on what’s happening in other parts of the world, we want to remain open-minded about that.
JOURNALIST: Minister, a follow-up on Sarah’s question: it sounds like today isn’t the final framework of our AI regulation – so the mandatory regulations won’t be ready this year, possibly next year. Now, Australia has a world-class research sector. Are we potentially hampering ourselves in terms of giving AI researchers the certainty they need about what regulations will be around their research?
ED HUSIC: Well, I don’t know where you got the idea that I suggested it will be next year, because I want to get cracking on this this year. So let’s be on the same ground there.
JOURNALIST: [Indistinct] when the committees have reported back, when you guys have legislated a mandatory regulation [indistinct] –
ED HUSIC: Sure, but, I mean, I didn’t suggest that this would go into next year. Obviously there may be different elements of work there – let’s wait and see. But our approach is to get cracking now, and I pick up on what I said to Sarah: this is happening from here and now. So we do want to get moving on that.
Your point about researchers being able to understand where things are going is absolutely right. But researchers recognise, too, that we’ve got to get this right. They want to be able to innovate, but they recognise that having a solid platform of well-understood regulation is really important as well.
JOURNALIST: When you kicked off this process, you spoke about the serious risks and the opportunities of AI. I was just wondering, in a very general sense, whether your general view of AI has evolved over the last year or so since you kicked this process off? Have you become more optimistic about the opportunities and less pessimistic about the risks, for example, as you’ve delved into this?
ED HUSIC: I’m definitely a big believer in the value of AI in terms of contributing to improved quality of life, and I think it’s going to be important in the Australian context for improving productivity and the performance of our firms. Longer term, countries’ economic success will in large part depend on how they use technology.
But the low-trust issues that I highlighted earlier, around some small and medium enterprises being a bit wary of using AI, are part of the reason why, for example, we’ve opened up funding and set up AI adoption centres – we announced that in December – to work with small and medium enterprises to use technology to sharpen the way they work and lower their costs, to be able to do more with less in some instances.
It’s really important. It’s why we developed a National Quantum Strategy, and it’s why we’re working on a robotics and automation strategy. I want our country to have the strongest economy possible, using technology to drive it and to improve productivity, which, as you all know, is an issue that has bedevilled and challenged this country for quite some time. We’ve got to find ways to deal with that.
But to the other part of your question – because there’s light and shade in your question – the biggest thing that concerns me around generative AI is the huge explosion of synthetic data: the way generative AI can create stuff you think is real and organically developed when it’s actually come out of a generative model.
And the big thing that I’m concerned about is not that robots take over but that disinformation does. Part of the reason we’re looking at watermarking is that people need to know that the stuff that’s before them on their phones or their TV screens or their computers is legitimate, and if it’s been generated or created through generative AI, I think they deserve to know that. And industry now recognises that’s an issue, and we do need to attend to it.
And that was certainly reinforced at the UK AI safety summit, where people from the sector and from civil society showed how generative AI can create things that then get picked up and widely distributed through social media, and become something that potentially triggers a government response. I think we all recognise the threats, the perils, that present themselves if a government response is based on something that is not legitimate. And so the biggest thing I think we need to focus on with generative AI is that what we’re seeing is clearly understood to have been created through a pathway that’s not organic.
JOURNALIST: With the election, just on that disinformation –
ED HUSIC: Well, half the world is going to an election this year.
JOURNALIST: Exactly. But when the example was brought to the Ministers, was it in relation to disinformation regarding politics?
ED HUSIC: Yeah, I think that was one area where things could be said. We saw this through the course of the pandemic, for example – the sorts of views around how to respond to the pandemic – through to what might happen in political contexts.
We saw a bit of a preview of that in the 2016 US election; you’d be aware of some of the examples used. So in the context of 2024, where I think roughly half the world’s population is going to an election, it’s in part one of the motivators for why the international community is moving the way it is.
And before I go to someone else, I just want to go to people who haven’t asked a question.
JOURNALIST: Just on the labelling or watermarking – given how important it is, given what you’ve just said about concerns on disinformation, and given how easy it is for people to make their own models, or find less scrupulous ones that can just dodge labelling and watermarking standards, is it inevitable that these will become mandatory requirements for any model?
ED HUSIC: Yeah, look, I think the technology will evolve; you will understand that. And while a lot of people want to use the technology for good, there’s always going to be someone motivated by ill will and bad intent, and we’re going to have to shape our laws accordingly. So if it does require a mandatory response, then we’ll do that.
I’m going to go to you for the last question on AI, and then I’ll answer questions outside the portfolio.
JOURNALIST: Minister, just to clear up the confusion, you say the work starts today. Do you have a month in mind when we might see this legislation, if it’s not next year?
ED HUSIC: I don’t want to tie the hands of the expert advisory group. Clearly they’ll know that we’re very keen to see this move as quickly as possible. They’ve seen the developments occurring in the global community, and we do need to start developing responses on this. But you can be assured that this will be the big focus of work for this year.
Okay, then I think you had a question outside the portfolio?
JOURNALIST: Yes, so just on Israel and Palestine. You’ve been very outspoken about preventing the suffering on both sides in Israel and Gaza. Australia did not make a submission to the ICJ against Israel, but in its submission in support of Ukraine it said that it strongly supported efforts to uphold the genocide convention. Do you think there’s a double standard here that’s been imposed because of political sensitivities?
ED HUSIC: As you can appreciate, I’m not responsible for decisions with respect to the Australian government’s engagement with ICJ matters. The Foreign Minister has spoken on that, so I might refer you to her.
I know there’ll be some people who will be keen to get a take, given some of my previous comments on Gaza. I just want to emphasise this: what we want to see is Israeli families and Palestinian families living in peace, growing old, doing so safely, and seeing their communities do really well. Encouraging peace to emerge in that part of the world is, I think, not just a priority for them; the global community has an expectation around it.
The way this response to Hamas’s terrible attacks on October 7 is conducted matters, as the Foreign Minister has emphasised a number of times. And it matters that innocent lives are protected; that’s one of the points being raised by the Foreign Minister. The four objectives of the Foreign Minister’s visit to Israel are: respecting international law and preserving the safety of citizens; increasing the flow of humanitarian aid; making sure that this doesn’t escalate into a broader regional conflict; and, finally, that we genuinely see emerge the two-state solution that has been talked about for so long and hasn’t happened.
That’s going to have to occur – no ifs or buts. I think the global community now recognises that not only does Israel have a right to live peacefully and free of violence, but Palestinians should have their own state. And that will come with responsibilities and expectations on the Palestinian people as well.
Longer term, we need to have that. I know you want to pepper me with a number of different questions, but I think, at the bottom of it all, innocent Palestinians have paid a high price for the barbarity of Hamas. We need to get beyond that, get to peace, and ensure that people can enjoy a standard of living that the rest of us very much appreciate and thrive under.
JOURNALIST: You’ve previously said the discourse around this is modern-day McCarthyism. Just in light of the secret WhatsApp messages that showed a coordinated campaign directed at the ABC – are you concerned about this conduct by the public broadcaster, and do you see potential action coming from it?
ED HUSIC: Let me put it this way: as a cabinet minister, responding to things that are now subject to legal action makes it a bit difficult for me. So if you don’t mind, I think the wiser course is to let that run its course. But I do want to say this: one of the things we value about our democracy is the ability to express our opinion on what we think should happen. There is always room for opinion in the public square. There is no room for hate speech, and no room to create division where people feel, based on their faith – either Jewish or Islamic – that it’s unsafe for them. That has to be dealt with. The way we conduct ourselves through the course of this debate is important, and I think that speaks as much to parliamentarians as it does to the broader public. People shouldn’t feel like, if they express their views – peacefully, and in a way that conforms to what we think is acceptable in a democratic country – their jobs are on the line as a result. So getting that balance right, I appreciate, while it’s easy for me to say this, is a different thing in practice, but I think it’s something we should consider.
JOURNALIST: So, Minister, do you believe that the re-posting of the Human Rights Watch post on Instagram is a sackable offence?
ED HUSIC: Again, that goes to the heart of –
JOURNALIST: It does, but –
ED HUSIC: I will be very careful, because people will work that out in the context of legal action. And I don’t think it’s up to me to make an interpretation of employment law or company practices in that case. So, again, with respect, I might decline to add anything further on that, given the context.
Okay, thank you for your time.