Speech at the launch of the Human Technology Institute at UTS

University of Technology Sydney
E&OE

Good evening, everyone, and thank you for that welcome. 

I want to begin by acknowledging the Gadigal people of the Eora Nation, the traditional custodians of this land, and pay my respects to Elders past and present. 

As we consider the human aspects of technology, it's worth reflecting on the role that knowledge and science played in First Nations societies before colonisation, and how First Nations people have retained custodianship of that knowledge against great odds and great hardship. We owe them a debt of gratitude. 

I’d like to pay my respects to any First Nations people in the audience today. 

If I may also begin by just acknowledging, if I may, someone I’ve known for many years, from our younger years – they weren’t quite that long ago – Verity Firth, now Pro Vice-Chancellor, all fancy now [indistinct]. [We are a long way from] when we were hurling insults across the room of a Young Labor executive and we thought we knew everything and found out we didn’t. 

Professor Leslie Hitchens, again, thank you very much. Professor Nick Davis, Professor Ed Santow as well, and I'll come back and make reference to Ed. 

If I can, to Larry Marshall, CSIRO. G’day, Larry. And also my friend and colleague who's way down the back. We geotagged him, Andrew Charlton, seemed appropriate to use a technology reference that fell flat just then.

G’day Andrew. Now the Member for Parramatta, someone who is steeped in this area, who has watched the impact of technology, particularly in our economy, but also in our community, and it's great to be here with Andrew tonight. It means a great deal to me personally to be a part of this launch of the UTS Human Technology Institute this evening and a pleasure to be here.

Recently I have been reading a book by the great Barry Jones, who was Science Minister during the Hawke years and one of this country’s living treasures.

It’s called What is to be Done, and in it Barry reflects on the relationship between politics and science as we confront the biggest issues of the 21st century. 

How to translate the expertise of our scientists and researchers into policies that can be taken up by governments, on topics like climate change and technology transformation.

How to take abstract, complex topics and make them real for everyday Australians. 

And how challenging it can be, even when the path ahead is clear, to convince people that change is necessary to ensure the continued prosperity and well-being of our country.

In his book, Barry remarks that “in politics, timing is (almost) everything. The best time to raise an issue is about ten minutes before its importance becomes blindingly obvious to the community generally.”

I tell you what, I had to laugh a little when I read that line. And maybe a little cry.

Around the room tonight, I see people who've been talking about the importance of making sure technologies, like AI, are designed in ways to benefit us and to reflect our best and most inclusive values. 

You’ve been doing that for years and some of you I've known since the beginning of my political career. Years before AI and topics like ethics and responsibility were mainstream news, you were having these conversations with me. 

And I was thinking as I was coming here tonight, it was about five years ago I was arguing as a shadow minister that Australia had a great opportunity to take the leadership role around the issue of applying ethics to AI. 

So, unlike Barry, it was five years instead of ten minutes, but it still was something that was worth pursuing. And it was through that work that I got to start talking with Ed [Santow] when he was at the Australian Human Rights Commission, because we could see that AI would transform many areas of our lives over a great period of time. 

It's no longer the stuff of science fiction. Today, computers are parsing and learning from huge volumes of data in nearly every sector of the economy, helping us model the impact of climate change, develop lifesaving vaccines, treat disease and manage transport. 

It's also more and more involved in our personal lives, deciding what ads are targeted to us individually, what TV shows we should watch next. 

I don't know about you, but I hate the networks’ algorithms. I never watch Netflix in front of anyone because of this. 

AI can even suggest who our romantic partners should be. Not going to touch that in a million years. 

Recently, AI systems have reached new heights in the imitation of human art, literature and video. I'm sure many of you have seen and played with these large text and image models online, on sites like DALL·E or NightCafe. These advances place us on the brink of yet another significant information transformation, one that could change the way we work, learn and create – for good and bad. 

I know these developments are not new to you and sometimes it must feel like change comes incredibly slowly. 

At the same time, I see that change is coming. Our conversations regarding AI have evolved. It's important to remember that change happens at different paces for different people and different groups of people. And I know you all think about that very deeply. 

People are excited and concerned about the potentials of AI, sometimes in equal measure, and that is to be expected. 

We are beginning to better appreciate its risks, as well as its potential benefits. 

People want to have conversations about issues like bias in AI, and the risk that AI could entrench inequality instead of reducing it. 

And so, Ed and Nick, I like your timing. 

It feels like we're perfectly positioned to learn from, and act on, the insights of people showing us how technologies can be designed in ways that help us, not hurt us. You know, I've had a long-standing interest in Australian tech. 

I've been vocal in wanting Australia to be a maker, not just a taker of new technologies. And I've never been an advocate for technology just for its own sake. 

I want the technologies we design and use in this country to contribute to our national wellbeing, not just economic, but social as well. I want them to reflect our ideals as an egalitarian nation and a proudly multicultural one.

This is why it's exciting to be with you launching the Human Technology Institute, whose vision is for “a future that applies human values to new technology”, and that is a vision shared by our new government. 

And I was pleased to see that HTI's work will be structured around three interconnected labs: skills, tools and policies. These are important, practical pillars, and they have underpinned my approach to technology and science since becoming Minister for Industry and Science. 

First, I want to briefly talk about skills because if we are going to design technologies that reflect human values, then they need to be designed by the broadest possible group of humans. 

Women right now remain seriously underrepresented in STEM careers in Australia, making up just 16 per cent of people with STEM qualifications. Among First Nations people, only half a per cent hold university-level STEM qualifications. 

And yet years of government programs attempting to attract and support women and underrepresented groups in STEM careers have barely moved the needle. 

This is why I've commissioned an independent review of existing government programs aimed at improving diversity in STEM to find out what's working and what's not. It won't just consider these programs, it will look nationally and internationally for examples of best practice to learn from. 

And it'll take into account structural and cultural barriers that make it hard for women and historically underrepresented groups to enter and thrive in STEM careers. 

Understanding these barriers is critically important for building AI that responds to real people, not just the types of people who traditionally design new technologies. Soon we'll be releasing the terms of reference for this review and announcing a panel of three eminent Australians from our science, technology and engineering sectors who will oversee it. 

Australia's tech industry has experienced enormous growth in the past decade, creating around 100 tech companies valued at over $100 million. And a recent report noted that the tech industry has become the seventh largest employer, with one in 16 Australians working in tech sector jobs and tech contributing just a shade over 8 per cent to our national GDP. 

This means there's a gap that's opening up between the growth of the tech sector and the human skills needed to power it. Which is why we're committed to seeing 1.2 million tech-related jobs in this country by 2030, backed by our commitment to opening up additional TAFE, VET places, additional university places as well. 

But I’m conscious we need to widen our sense of what a tech job is. If we're going to make technologies that reflect our human values, then we are going to need to call on a broader set of skills and know-how about humans and society. 

It's not just about software engineers and data analysts. We're going to need more anthropologists and historians, more designers, more product managers, more lawyers and policymakers who will be part of designing and managing tech. 

Human creativity and empathy are only going to become more important to our future in technology, not less. 

We are fortunate in this country to be able to draw on the skills and wisdom of the oldest continuing civilisation on Earth. Some of our First Nations technologists and designers are leading the world in offering up a different kind of future with AI, one infused with Indigenous ways of being and thinking. And I've been particularly inspired by the work of Professor Angie Abdilla, founder of ‘Old Ways, New’, who, together with people including Tyson Yunkaporta, Megan Kelleher and Rick Shaw, recently developed a set of Indigenous protocols for AI. 

A future of AI informed by human values must necessarily accept that human values are diverse, informed by place, culture and experience. Australia has the potential to lead the way in bringing an ethical lens to technology development. We were one of the first countries to develop ethics principles for the use of AI, and we are a signatory to the OECD's AI Principles. These frameworks encourage organisations to reflect ethical practices and good governance when developing and using AI, and they're now being embedded across areas of the Commonwealth Public Service and in businesses across the country. 

But I'm determined that we go further than ethics principles. I want Australia to become the world leader in responsible AI. This includes setting reasonable regulations and standards, and supporting the development of new kinds of skills that enable responsible innovation. 

For citizens to have trust in AI, those organisations in industry and in government who are designing and using AI have got to be worthy of that trust. Regulations and standards will be core to providing citizens with confidence that technologies like AI are being developed in ways that are trusted, secure and to their benefit. They've also got to provide business and government with increased certainty about the benefits and risks of deploying technologies. 

Our existing legal frameworks need to be fit for purpose, which is why the Department I head up has been undertaking a review to ensure regulatory settings reflect community expectations of trustworthy, responsible AI. Given the range of systems and services in our lives, these challenges are being considered across government, including through the Attorney-General's Privacy Act Review and, as Verity [Firth] mentioned, the Robodebt Royal Commission that is under way. At the Commonwealth level, the Privacy Act Review is considering the privacy implications of AI, and the Robodebt Royal Commission could also make recommendations to improve the implementation of automated decision-making across government. 

We're taking an active role in increasing Australia's influence on the design and use of technology standards globally. 

I just want to make a few final points if I may. We are very much looking at the way in which we can drive the embrace of technology, do it in a responsible way, and make sure that when people reach the point at which they need support, they will have a government actively providing it. 

There are some initiatives and a huge amount of work that's being done around quantum. We have this great opportunity to be a world leader in quantum technologies, but we want to make sure that in doing so, we think of all these things early on. Which is why I'm enormously grateful that Ed Santow sits on the National Quantum Advisory Committee so that we've got ethical consideration at the point at which we're developing the technology. And that's huge too. 

The point I would end on is, having watched technology as a parliamentarian for over 10 years, it's really hard to get the balance right, because when the buzz is on about technology, everyone piles on, and when something goes wrong with technology, everyone gets the buzzsaw out. 

And being able to ensure that we don't go headlong in terms of embracing technology without considering consequence – this is a live and constant issue of getting that balance right.

But the reason why I wanted to be involved in this, if you don't mind me singling out Ed [Santow], is because, in particular, having known him for years, he’s thought about this deeply, and the work of what you're doing here today can play a very, very important role in making sure the government gets the balance right. 

Because we need to build faith that technology is being used not just for its own sake, but for our sake. 

So, to you all, I say thank you very much for this work. It is exceptionally important work to do and I wish you all the very best with your endeavours and what you're able to achieve for the sake of the nation.

ENDS