
Member Therese McCarthy Hockey’s remarks to AFIA Risk Summit 2024


Taking flight: navigating the new challenges posed by generative artificial intelligence

 

Good morning and thank you for inviting me to represent APRA at my first AFIA Risk Summit.

In 1930, the British economist John Maynard Keynes predicted that within 100 years technological advances would have increased productivity to the point that most people would need to work no more than 15 hours a week.[1] Our biggest challenge, Keynes envisioned, would be what to do with so much free time. Granted, there are another six years to go until the century is up, but on current trends Keynes’ prediction seems sadly wide of the mark, with the 40-hour week still standard and many of us working far longer, especially now that we’re constantly reachable.

Were he still alive, the famed economist could at least console himself that he is part of a long and inglorious human tradition of making predictions about technology that turn out to be wildly wrong. The New York Times, for example, confidently predicted in 1903 that achieving manned flight would take humans anywhere between one million and 10 million years – only for the Wright brothers to make their historic first flight nine weeks later![2]

Against this backdrop, we need to be circumspect when considering the predictions being made about the impact of generative artificial intelligence (AI). To its proponents, advances in AI will open up a new world of possibilities to support human endeavour by cutting costs, speeding up and improving decision-making and taking over mundane tasks. To critics and sceptics, AI will cause widespread job losses, aid scammers and other criminals, and possibly far worse; last year more than 350 researchers and executives working in AI signed an open letter warning the technology risked causing human extinction.[3]

The availability and development of generative AI is rapidly expanding, with free programs such as ChatGPT making it accessible to anyone with internet access. Considering the potential for cost efficiencies and service improvements, it’s no surprise the business world is heavily invested in efforts to harness the technology, including the banks, insurers and superannuation funds that APRA supervises. Within APRA and across governments and regulators there is keen support for realising tangible improvements through innovation. But in pursuing these benefits, we want to make sure there are adequate guardrails in place so they don’t come at an unacceptable cost to the community.

As the power of this new technology expands, the opportunities generative AI creates will multiply – but so will the potential downsides, especially if we allow it to operate more independently of human oversight and control. APRA’s message to the entities we regulate is that firm board oversight, robust technology platforms and strong risk management are essential for companies that want to begin experimenting with new ways of harnessing AI. Entities without such measures in place should only proceed with a high level of caution – or potentially not at all.

Preparing for take-off

While generative AI is a relatively recent development, less sophisticated forms of AI have been operating widely for years and even decades in some cases: think of email spam filters, internet chat bots and natural language processing. These tools have helped businesses cut costs by automating and speeding up manual or time-consuming processes and replacing some lower-skilled jobs. Anyone who’s sent a confusing text message thanks to an autocorrect error or grown frustrated trying to be understood by voice recognition software would recognise that these applications are far from flawless. But on the plus side, the risks of what we might call “regular strength” AI have been relatively manageable to date.

Generative AI, which has the ability to learn from existing artefacts and generate new, realistic content, amplifies both the risks and rewards. The potential benefits are enormous, with an Australian Government report noting that AI and automation could add an additional $170 billion to $600 billion a year to Australia’s GDP by 2030.[4] The financial services industry, which depends heavily on ingesting and analysing vast quantities of data to effectively make predictions – who to insure and at what price, the probability that a borrower may default on a loan, which assets to invest in – is already emerging as a major investor in the new technology: 60 per cent of financial services sector respondents in McKinsey’s Global AI Survey[5] reported that they had embedded at least one AI capability, while the 2024 EY Global Insurance Outlook found 52 per cent of insurance CEOs were planning significant investments in AI in the next year.[6]


Over recent years, APRA has observed Australian financial institutions beginning to use more advanced AI tools to boost their productivity in areas ranging from customer service and marketing to fraud detection and regulatory compliance. Some specific examples include:

  • using generative AI to rapidly review long documents against specific criteria such as policy requirements (a minimal sketch follows this list);
  • programming generative AI bots to simulate customer personas to test and improve products and services;
  • providing employees with real-time assistance to help them more efficiently support customers; and
  • using generative AI-powered code authoring tools to help developers write better code faster.
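
As an illustration of the first item above, the sketch below reviews a document against a checklist of criteria using a hosted large language model. It is a minimal example only: the provider, client library, model name and criteria are all illustrative assumptions, not a description of any regulated entity’s actual implementation.

```python
# Minimal sketch: reviewing a long document against policy criteria with an
# LLM. Assumes the OpenAI Python client (openai>=1.0) is installed and an
# OPENAI_API_KEY is set; the model name and criteria are illustrative only.
from openai import OpenAI

client = OpenAI()

CRITERIA = [
    "Does the document name an accountable owner for each obligation?",
    "Are data-retention requirements stated with explicit timeframes?",
]

def review_document(text: str) -> str:
    """Ask the model to assess the document against each criterion."""
    prompt = (
        "Review the document below against these criteria. For each "
        "criterion answer PASS, FAIL or UNCLEAR with a one-line reason.\n\n"
        + "\n".join(f"- {c}" for c in CRITERIA)
        + "\n\nDocument:\n" + text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("policy_draft.txt") as f:  # hypothetical input file
        print(review_document(f.read()))
```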


The advances promised by generative AI will ideally deliver benefits for customers and shareholders. Greater efficiency and reduced costs should – in theory at least – translate into savings for customers through lower fees or reduced insurance premiums. Higher profits boost value for investors. Technological advancements through AI could deliver faster and more personally tailored customer service or make quality financial advice available at lower cost. If a superannuation fund is able to harness AI to better predict market trends, that could mean increased returns for its members. And by freeing up people to focus on higher level tasks, as well as detecting patterns in data that might be imperceptible to humans, AI could promote better decision-making and therefore improved risk management and financial stability – something of great interest to APRA given our prudential safety mandate.

But just as the potential rewards of generative AI are bigger, so are the risks. Last month I represented APRA and Australia at the 23rd International Conference of Banking Supervisors in Basel, Switzerland, and one of the issues discussed was the potential for AI to be used to commit crimes, run scams and undermine financial stability. It became apparent through the conference that regulators globally are increasingly concerned about the potential for AI to create deepfake videos and spread convincing disinformation. Conference attendees reflected on how this could amplify banking sector vulnerabilities and risk – something that could potentially destabilise the financial system. Demonstrating the legitimacy of this threat, just days after I arrived back in Australia news broke about a social media advertisement, created with deepfake video technology, purporting to show the former head of the Australian Stock Exchange promoting an online investment advice community.[7]

While AI can improve business decision-making when used effectively, it could worsen decision-making and even spark a financial crisis if it malfunctions or isn’t applied appropriately. The Chair of the US Securities and Exchange Commission, Gary Gensler, last year warned that AI could heighten financial fragility by promoting what he termed “herding” – with individual actors “making similar decisions because they are getting the same signal from a base model or data aggregator”.[8] At a simpler level, flaws in the choice of AI model or design, or errors in the data it’s examining, can lead to faulty conclusions in areas as significant to prudential safety as capital requirements, liquidity and credit risk. Crucially, as AI algorithms become more complex and the systems more autonomous and opaque, detecting when, how and why the technology’s analysis is off-track will become increasingly difficult, further exacerbating these risks.

Other risks posed by more powerful AI technologies are ethical, such as the potential for algorithms to develop biases that unfairly discriminate against groups of people or exclude them from some financial services entirely. There are also well-founded privacy and legal concerns stemming from the exponential rise in data and the need to store it securely. And there are legitimate concerns the “black box” nature of generative AI could lead to unexpected outcomes that are hard or impossible to explain, feeding public perceptions that humans have lost control of their creations. That would undermine public trust, which is a risk for financial stability as well, even if a robot army doesn’t ultimately rise up to subjugate humanity.

New rules for a new era

Given the materiality of some of the threats I just outlined, you might think APRA would be looking at introducing new regulatory requirements to mitigate those risks, but we currently have no such plans.
There are several reasons for this.

The first is that the Federal Government, appropriately, is taking the lead on coordinating a national approach to developing guardrails on the use of AI across all aspects of society.[9] Having launched a consultation on safe and responsible AI use in the middle of last year, the Government announced in January it was looking at introducing mandatory guardrails to promote the safe design, development and deployment of AI systems. These may ultimately include requirements around product testing, transparency of how models and systems operate, as well as greater accountability for those who develop and use the technology. While APRA expects to have input to the consultation, the process will cover vastly wider terrain than the banking, insurance and superannuation industries we regulate, and will also include areas of risk that sit outside our prudential mandate.

But the primary reason is that we believe our prudential framework already has adequate regulations in place to deal with generative AI for the time being. Our prudential standards may not specifically refer to AI but nor do they need to at the moment. They have intentionally been designed to be high-level, principles-based and technology neutral. Are appropriate cyber security controls in place to deal with AI-enabled threats? CPS 234 Information Security covers that. Is data protected from misuse or theft? Entities can find guidance in CPG 235 Managing Data Risk. Has the entity considered AI risks introduced by a third party? CPS 230 Operational Risk Management, which comes into effect next year, will deal with that. So while we are watching closely, we are confident for now that we have the tools to act, including formal enforcement powers, should it be necessary to intervene to preserve financial safety and protect the community.

Delivering a speech last August, I advised that APRA’s initial guidance on AI was to tread carefully when using these advanced technologies: conduct due diligence, put appropriate monitoring in place, test the board’s risk appetite and ensure there is adequate board oversight. While all of that remains applicable, I can go further today in outlining APRA’s position on regulated entities that wish to start using advanced AI models, including what “good” looks like.

Given the potential benefits for both business and customers, APRA broadly supports our regulated entities beginning to test how they can incorporate AI into their practices. We would caution, however, that not all banks, insurers and superannuation trustees are equally capable of doing so. Having monitored developments in this area over several years, we now advise that entities with robust technology platforms and a strong track record of risk management are good candidates to experiment with AI and should feel confident proceeding. Entities that are weak in these areas should proceed with caution and care. Importantly, entities need to know which category they sit in. One example of what “good” looks like is having open and proactive discussions with APRA; if entities are unsure where they sit, we will happily provide our own assessment on request.

For those entities that are beginning to test how they can make better use of generative AI, our primary concern as a prudential regulator relates to governance, which is often challenging with fast-moving and technologically driven issues. With that in mind, we strongly advise boards to consider the following:

  • Board capability – how does the board ensure it is sufficiently capable to challenge management and make sound decisions on AI strategy and risk management? What learning and development, outside advice or skills might be needed?
  • Risk culture – how does the board ensure all employees across the three lines of defence understand their role and responsibilities in protecting the business? And how can management monitor the potential for the unauthorised use of AI by employees?
  • Data quality and reliability – the best AI in the world can’t create good output if your company hasn’t got its house in order on the inputs. Our observation across the financial services industry is that many institutions have a long way to go on data risk management generally.


The final consideration I will leave you with for now is “accountability”. In common with any type of outsourcing, companies cannot delegate full responsibility to an AI program. This becomes even more important when we consider that generative AI will involve automated decision-making. Entities must have, to use the industry jargon, a “human in the loop”: an actual person who is accountable for ensuring the technology operates as intended. This doesn’t necessarily mean human involvement in every AI decision – stopping a potentially fraudulent transaction, for example, requires action faster than a person can take. Instead, it is about someone being accountable for the algorithm, its sound operation, and the outcomes it delivers.
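
One way to picture this distinction in practice – offered as a minimal sketch under assumed names and thresholds, not as a prescribed design – is an automated decision path that acts immediately while logging every decision against a named accountable owner for review:

```python
# Minimal sketch of "human in the loop" as accountability rather than
# per-decision involvement. All names, scores and thresholds are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    transaction_id: str
    blocked: bool
    score: float
    timestamp: datetime

@dataclass
class ReviewQueue:
    """Every automated decision is recorded against a named accountable owner."""
    owner: str                       # the person accountable for the algorithm
    entries: list = field(default_factory=list)

    def log(self, decision: Decision) -> None:
        self.entries.append(decision)

def score_transaction(amount: float) -> float:
    # Placeholder for the AI component; a real model would sit here.
    return min(amount / 10_000, 1.0)

def process(queue: ReviewQueue, transaction_id: str, amount: float) -> Decision:
    """Act immediately (no human gate), but record everything for review."""
    score = score_transaction(amount)
    decision = Decision(transaction_id, blocked=score > 0.8,
                        score=score, timestamp=datetime.now(timezone.utc))
    queue.log(decision)              # the accountable human reviews outcomes
    return decision

queue = ReviewQueue(owner="head-of-fraud-operations")  # hypothetical role
print(process(queue, "txn-001", 9_500.0))
```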

So you can see that while we are not adding to our rule book at the moment, we will use our strong supervisory approach to stay close to entities as they innovate and consider how to manage AI risks.

AAI (APRA artificial intelligence)

APRA is also mindful of these principles as we proceed on our own AI journey. Some of the reasons for our interest in this area are the same as those of the banks, insurers and superannuation trustees we oversee: to save time; to divert valuable resources to higher-level areas of need; and to assist decision-making. Unlike our regulated entities, however, we aren’t ultimately seeking to boost profits or shareholder value; our goal is to improve industry regulation through better targeted, more efficient and more effective supervision and policymaking. In the longer term, we are hopeful this work will create benefits for industry through the development of lighter touch regulation, and less manual and more efficient compliance processes.

Several years ago, we trialled free-text analysis of the responses to risk culture surveys APRA regularly asks entities to complete. The machine learning approach we built automatically maps the free-text comments to APRA’s risk culture “10 Dimensions” and flags the sentiment in those comments. In this way, it helps our risk specialists direct their focus where it’s most required. This initiative is now used on an ongoing basis and is enabling the specialist teams to spend a far greater proportion of their time and judgement on the comments of most interest.
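
For those curious what this style of triage can look like, here is a minimal sketch of multi-label comment classification. The dimension labels, training examples and model choices are illustrative assumptions only – APRA’s actual model and the “10 Dimensions” taxonomy are not reproduced here, and the sentiment flag is omitted for brevity.

```python
# Minimal sketch: mapping free-text survey comments to risk culture dimensions.
# Labels, data and model choices are illustrative, not APRA's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

comments = [
    "Leadership never acts on the risk issues we escalate",
    "Incentives reward sales volume over sound decisions",
    "My manager encourages us to speak up about near misses",
]
# Hypothetical dimensions; a comment may map to more than one.
labels = [["leadership"], ["remuneration"], ["communication"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)  # one binary column per dimension

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(comments, y)

new_comment = ["Bonuses depend only on hitting revenue targets"]
predicted = binarizer.inverse_transform(model.predict(new_comment))
print(predicted)  # a specialist then focuses on the flagged dimensions
```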

Other initiatives are still in the testing phase. One that we are working on applies natural language processing to incident data from a select group of entities. The model compares the text description of each incident to the severity rating assigned by the entity. We are testing whether technology can indicate which severity ratings seem inconsistent with the text. We have shown that it’s possible to curate a list of outlier incidents worthy of further investigation, which could help prioritise human reading time and thereby reduce regulatory burden. Similarly, we are examining whether this model can use the text descriptions to group incidents into themes. Again, we are testing whether the technology can save reading time and reduce human bias when creating the groups. We intend to feed these insights back to our regulated flock with the goal of uplifting industry practice and overall resilience.
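
A minimal sketch of that consistency check might look as follows. The incident descriptions, severity labels, model and threshold are all illustrative assumptions; the point is simply that a text model’s low confidence in the assigned rating is a useful flag for human review.

```python
# Minimal sketch: flagging incidents whose assigned severity looks
# inconsistent with the text description. Data and model are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

descriptions = [
    "Scheduled maintenance overran by ten minutes, no customer impact",
    "Payments outage for six hours affecting all retail customers",
    "Minor typo in an internal report, corrected same day",
    "Customer data exposed to an unauthorised third party",
]
assigned_severity = ["low", "high", "low", "high"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(descriptions, assigned_severity)

# Flag incidents where the model gives low probability to the assigned
# rating: these outliers are the ones worth a human read.
classes = list(model.classes_)
for text, rating, row in zip(descriptions, assigned_severity,
                             model.predict_proba(descriptions)):
    confidence = row[classes.index(rating)]
    if confidence < 0.5:  # illustrative threshold
        print(f"Review: '{text}' rated {rating} "
              f"(model confidence {confidence:.2f})")
```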

We’re also undertaking some joint work with our regulatory peers. We are one of several Government agencies and departments taking part in a Government-led trial of Microsoft Copilot, with a subset of APRA employees evaluating whether they can find efficiency gains in their area of work. Additionally, we’re collaborating with ASIC and the Reserve Bank of Australia on a proof of concept after we identified that the three agencies share many common challenges – specifically, vast reams of documents to compare, analyse and summarise.

Fasten your seatbelts

Humans may have a poor track record at predicting the impact of technology on the future, but we are very good at creating and building on technological advances – and the pace is constantly accelerating. It took humankind roughly four thousand years after the invention of a basic cart to invent the automobile, but the gap between that historic first flight by the Wright brothers and Neil Armstrong stepping on the moon was only 66 years! So while the future impact of artificial intelligence on the world remains contested, we are likely to find out sooner than we might expect.

As regulators, we want to encourage innovation that can improve financial performance, strengthen operational resilience and deliver improved services and savings to customers and the broader community. But we also need to ensure there are adequate guardrails in place to minimise the risk of harm, whether it’s unfair discrimination against vulnerable people, threats to financial stability or bad actors exploiting the technology for commercial or political gain.

Banks, insurers, superannuation trustees – and indeed regulators – that want to explore the possibilities AI can deliver should do so, but only when they are confident they have the technological proficiency and risk management frameworks in place to manage the risks. And above all, remember that artificial intelligence can be a valuable co-pilot – but it should never be your autopilot.

Footnotes

[1] “Whatever happened to Keynes’ 15-hour working week?”, The Guardian.

[2] “NYT once said airplanes would take 10 million years to develop”, Big Think.

[3] “Statement on AI Risk”, Center for AI Safety (safe.ai).

[4] “Safe and responsible AI in Australia consultation: Australian Government’s interim response”, Australian Government.

[5] “Building the AI bank of the future”, McKinsey & Company.

[6] “2024 Global Insurance Outlook”, EY.

[7] “Deep-fake ASX video scam showing Dominic Stevens lingers on Facebook despite reporting”, The Australian.

[8] “‘Isaac Newton to AI’: Remarks before the National Press Club”, US Securities and Exchange Commission.

[9] “Action to help ensure AI is safe and responsible”, Ministers for the Department of Industry, Science and Resources.

 

Therese McCarthy Hockey, AFIA

The Australian Prudential Regulation Authority (APRA) is the prudential regulator of the financial services industry. It oversees banks, mutuals, general insurance and reinsurance companies, life insurance, private health insurers, friendly societies, and most members of the superannuation industry. APRA currently supervises institutions holding around $9 trillion in assets for Australian depositors, policyholders and superannuation fund members.