Unlocking the potential of AI for Australian industry

Sydney
E&OE

I would like to acknowledge the traditional owners of the land on which we meet today, the Gadigal people, and I pay my respects to the Elders both past and present.

I extend that acknowledgement and respect to any Aboriginal and Torres Strait Islander people in the audience today.

Thank you Mohit for inviting me to speak to the Mindfields Automation Summit today. 

Your summits are always thought-provoking. An opportunity to test new ideas and challenge conventional thinking.

And this year’s summit theme – how to enhance productivity in the digital era – is incredibly timely.

It’s a question that is front of mind for our government.

As many of you are aware, productivity growth has slowed not just in Australia but in most advanced economies over the last twenty years.

And this has happened in defiance of the usually recommended antidote to low productivity.

Even as the pace of technological development and change has accelerated, productivity has refused to budge.

In the same twenty years, we’ve experienced huge shifts in the way we work and live using digital technologies.

We moved from computers in our offices and homes to computers in our back pockets. 

Search engines transformed the way we accessed information. 

Social media platforms transformed the way we communicate with each other.

Low-orbit satellites and network upgrades massively expanded our internet connectivity. 

And during the pandemic, digital technologies supported people across many different occupations in finding new ways of working together.

An abundance of digitally-created data, combined with increases in computing power, has enabled the rise of new forms of automation, including AI.

Conventional wisdom says that adopting new technologies drives productivity growth. 

So why haven’t the last twenty years, when we’ve seen so many changes in the way technologies shape our lives, delivered the productivity gains we’d expect?

There’s a quote that pops into my mind every time I think of this conundrum.

In 1987, economist and Nobel Laureate Robert Solow famously observed that “you can see the computer age everywhere but in the productivity statistics.”

This conundrum – that increases in investment in information technologies don’t seem to correlate with increases in output at a national level – has become known as Solow’s Paradox.

And for years, economists have been trying to make sense of why this might be the case. 

Part of it is no doubt the way we measure productivity.

Some people are more likely to treat technology change as a cost, instead of an investment.

We don’t always value a business upgrading the way it works to harness automation the same way we value a business upgrading physical equipment or floor space. 

This has flow-on effects not just in productivity statistics, but also in the way businesses value adopting new technologies.

Whether they see it as an investment, or a cost.

And it’s also true that seeing the benefits from new technologies takes time. 

There’s a lag between intangible investments in things like skills building and business re-organisation, and the benefits that eventually manifest.

But Solow made his famous observation nearly 40 years ago: before the birth of the World Wide Web; before iPhones; before search engines; before the explosion of connected devices and services that surround us.

So how do we make sure we harness these benefits?

Especially at a time when using technology to enhance business operations is not a nice-to-have option – it’s going to be crucial for long-term growth.

AI and automation – the productivity opportunity

Today I want to focus on AI and automation. 

These have been priorities of mine since entering Parliament. 

If we get the settings right, McKinsey has estimated that the adoption of automation technologies could add between 170 billion and 600 billion dollars a year to our GDP by 2030.

And could increase productivity growth by 50 to 150 per cent.

That is why I committed to developing Australia’s first national robotics and automation strategy.

That strategy is in its final stages of development.

And it outlines the ways in which these technologies are already being used in nearly every sector:

Harvesting crops. Monitoring environmental pests. 

Boosting our advanced manufacturing capabilities.

Reducing dangerous work humans used to do: on mine sites and oil rigs, or inspecting buildings and power lines. 

Because understanding the role robotics and automation play in our economy isn’t just about building bigger robotics industries.

It’s about shifting public perceptions of what robots and automated technologies are, and who they’re for.

They’re not just big human-looking robots, or jumping robot dogs.

They’re drones being flown by lifeguards to help monitor shark activity near beaches. 

They’re smart devices helping people stay in their own homes for longer as they get older.

In 2017, AlphaBeta noted that over the past 15 years Australians reduced the amount of time spent on physical and routine tasks by two hours a week.

They estimated a possible productivity improvement of 8 per cent by 2030.

Advances in generative AI – AI products capable of producing novel text and images – have expanded productivity possibilities further. 

This year, research published by Microsoft and the Tech Council of Australia found that generative AI could automate or augment up to 44 per cent of the tasks being undertaken by workers across the economy. 

We’ve all seen how consumer-facing products like GPT-4 are capable of drafting emails and letters, and generating business reports.

They’re also changing computer programming. Since its broader public launch in June 2022, GitHub Copilot has generated over 3 billion accepted lines of code and now generates almost 50 per cent of a developer’s code.

That’s massive. That upends ideas we’ve had in the past about the skills everybody would need to take advantage of digital technologies.

It wasn’t so long ago some stakeholders were arguing that learning to code was essential for every person engaging with digital technologies.

Now, it’s clear that what’s more important are foundational numeracy and literacy skills, and problem-solving skills. 

Having the flexibility and creativity to get the most out of automation, depending on the type of setting you’re in.

It’s something that will be canvassed in the soon-to-be-released Employment White Paper.

This is why, together with colleagues including Treasurer Jim Chalmers, Minister O’Connor and Minister Clare, this Government is focused on ensuring Australians have the mix of skills they need to take advantage of new technologies in the future.

Because Australia will only see the benefits of AI in productivity statistics if we can boost adoption.

AI and productivity – the challenge

Australian businesses have historically been lagging adopters of automation technologies.

A 2022 Boston Consulting Group survey saw Australian companies and government agencies score their own AI maturity at an average of 3.5 out of 10 (BCG 2022).

We won’t see an uptick in AI adoption rates in Australia simply by telling people how good technology is and how great the benefits could be. 

Investing in AI means investing in people – in the skills and training they need to work alongside technologies, and in ways that deliver benefits for them.

As managers and leaders, you know that this means skills for you too.

The Productivity Commission noted in their five-year report, Advancing Prosperity, released earlier this year, that limited management tech capabilities are likely holding back adoption.

They said: “firms with stronger management are more likely to make good decisions about whether or not to adopt new technologies, and when and how intensively to adopt them.”

There is a reluctance among the nation’s SMEs to embrace tech like AI. 

Questions about cost, complexity and value. 

These are exactly the challenges that will be the focus of the Government’s Responsible AI Adopt program, which provides $17 million for new centres giving SMEs support and training to make more informed decisions about using AI to improve their businesses.

Making sure the right organisational processes are in place to support adoption. 

Making smart decisions about where to focus automation – where there is low-hanging fruit.

It’s why we’ve expanded the remit of the National AI Centre, which is doing important research and providing leadership for the AI industry around Australia.

And it’s not just industry. This Government is committed to being an exemplar user of AI.

That’s why earlier this week, together with my colleague Minister Gallagher, I announced a joint AI taskforce focused on lifting AI capability in the public service and ensuring its safe and responsible adoption.

We also need to address trust – another serious barrier to adoption.

A recent University of Queensland and KPMG Australia study found only 40 per cent of Australians trust the use of AI at work.

Even fewer Australians – 35 per cent – think there are enough safeguards for AI.

Across five countries surveyed, the overwhelming majority of citizens – more than 80 per cent – believed AI regulation was required.

Tellingly, the same survey noted Australians were most unwilling to rely on AI out of all the countries surveyed.

If you don’t trust it, you’re not going to use it.

This is why I released a discussion paper on safe and responsible AI in Australia earlier this year. 

Because if we don’t get the guardrails right to support people using AI, Australia won’t see the benefits.

And today we’re publishing submissions to that consultation, as the government continues to consider options to strengthen those guardrails.

We received over 500 submissions, with the largest proportion being from people writing in a personal capacity.

This reflects what the surveys told us: citizens care a lot about AI guardrails. 

So what did the submissions say?

Nearly every submission agreed that getting the guardrails right is about more than just creating new laws.

It also means investing in standards development, in capability building and education, and in coordinating and upskilling existing regulators and policy makers.

These are all areas of work the government is pursuing, from increasing funding for Australian tech experts to participate in international standards-setting forums, to creating a public sector AI taskforce with Minister Gallagher.

Whether new laws were needed, on top of these things, drew a more divided response.

Most tech industry submissions concluded that updating existing laws would be more effective than introducing new laws specifically for AI developers and users. 

They pointed out – as our discussion paper noted – that there are many laws that already influence AI development. 

Other submissions noted that many of these existing laws have gaps. 

The Centre for Automated Decision Making and Society, a network of nine Australian universities with more than 80 researchers, identified at least 10 existing legal frameworks that in their view are out of step with AI challenges.

Administrative law, copyright law, privacy, political advertising and campaign laws, consumer protection, rules for financial advisers and lawyers, and rules in the medical sector…

Consumer and human rights groups, on the other hand, and members of the public, tended to support new laws for AI developers and users.  

It’s not hard to imagine that governments will increasingly question the thinking of firms at the design phase of software and hardware development. 

The speed and scale of AI diffusion, particularly with the rise of generative AI, was raised by many responding as posing a distinct new set of challenges.

Gartner expects that by 2025, 30 per cent of outbound marketing messages from large organisations will be generated by artificial intelligence.

European law enforcement group Europol estimated this year that as much as 90 per cent of online content may be synthetically generated by 2026.

That presents an enormous issue for people figuring out what’s AI-generated and what’s not – and whether content has been checked by humans – in nearly every setting:

On the news, in online reviews, social media platforms, in conversations with banks and doctors and financial advisers. 

And, while generative AI can deliver a productivity boost for people by cutting down on tasks, we have to be careful that an explosion in cheap, artificially generated content doesn’t cancel it out – with people spending more time battling information overload.

Watermarking or labelling of AI-generated material was identified by many as a possible response to this issue.  

From Australia Post using AI to improve the accuracy of delivery predictions and extract addresses from parcels, to Visa using AI to combat financial fraud, many submissions also talked about how AI is being used positively.

And I had a lot of stakeholders say to me throughout consultations that any new laws need to make sure they don’t stifle positive and practical uses.  

That’s why the discussion paper focused on what might be needed for higher risk settings. 

Settings where AI could endanger people’s health and wellbeing, or future opportunities. 

And overwhelmingly, submissions expressed concern that companies and users of AI in these types of settings didn’t have enough guardrails.   

That there wasn’t enough transparency – for example, about how and when AI was being used and how a model had been developed.

That there weren’t enough requirements for testing – testing AI systems before they’re sold or published, and by users on an ongoing basis in high-risk contexts.

And that there weren’t sufficient markers of trust – like certificates or licences – for designers or users of AI in high-risk settings.

My department is currently working through the thoughtful feedback in submissions to determine the best form of a regulatory framework for Australia.

The options range from a risk-based framework, as originally proposed – versions of which are currently being pursued in Canada and the EU – to alternatives like a principles-based regulatory framework.

We will continue to explore the forms new AI safeguards and regulation could take.

We will be taking lessons from past technology rollouts and where we as regulators could have done better. 

I have been very clear.

We want to get this right.

As Minister for Industry and Science, and a long-standing tech advocate, I want safeguards that promote smart thinking and know-how.

I also believe that well-designed safeguards can be a springboard for sharper ways of doing things – stronger firms, stronger jobs. 

One of the common themes from consultations, particularly from users of AI, was that a lack of certainty about AI standards and safety was preventing businesses from relying on these kinds of services.

I also know that we can design regulation providing a platform for innovation while protecting Australians – our communities and our national wellbeing.

This is not an either/or. 

The root of this debate isn’t, should we regulate AI?

It is, in what circumstances should we expect people and organisations developing and using AI to have appropriate safeguards in place?

For example, in high-risk settings that could impact people’s lives.

Over the coming months, I’ll be consulting with my colleagues, and talking to those outside government like you all, on the most appropriate form for AI-related safeguards and regulation.

This is a complex area. 

If we can address issues of trust, we can increase adoption of AI by Australian businesses, and harness the productivity gains.

This Government is committed to boosting adoption of AI and automation technologies in Australia.

Part of this is investing in people. 

In the skills they need to work with and integrate AI technologies.

Part of this is investing in new ideas – 

Enabling Australian businesses to take risks and develop solutions that work for Australian communities. 

And part of this is shoring up trust. 

Making sure people and businesses feel like they can rely on AI.

These are challenges we will continue to work on, together.

I know all of you here have contributions to make.

Thank you Mohit for inviting me to speak, and thank you everyone for your attention.