Australia: AI and Your Obligations as an Australian Financial Services Licensee

By: Daniel Knight, Ben Kneebush and Madison Jeffreys

As artificial intelligence (AI) is adopted and used more broadly by Australian Financial Services (AFS) licensees, it has become increasingly evident that many licensees’ deployment of AI falls short of their existing regulatory obligations and emerging best practices.

To help, the Australian Securities and Investments Commission (ASIC) recently released REP 798 Beware the gap: Governance arrangements in the face of AI innovation (REP 798), which outlines the regulator’s findings from its market review of the use and adoption of AI across financial services and credit licensees. ASIC concluded that licensees are implementing AI more quickly than they are updating their risk and compliance frameworks to manage the heightened risks and challenges that AI adoption brings, creating a real risk of consumer harm.

ASIC has also used REP 798 to remind financial services businesses that the existing regulatory framework is technology neutral, meaning it applies equally to AI and non-AI systems and processes. This includes the general conduct obligations (including the obligation to act ‘efficiently, honestly and fairly’), as well as general laws in relation to misleading or deceptive conduct and unconscionable conduct. ASIC has also highlighted the need for directors to consider their directors’ duties when taking on new AI risks.

REP 798 also sets out a number of best practices for licensees looking to implement AI tools, covering documentation, AI governance, technological and human resources, risk management systems and third-party AI providers. ASIC has also provided 11 questions for licensees to consider to ensure their AI innovation is balanced against their regulatory obligations and these best practices.

The Australian Government is separately consulting on broader legal changes in light of the rise of generative AI (and AI technologies more broadly) and has introduced a Voluntary AI Safety Standard, which includes 10 voluntary guardrails for Australian companies looking to use and innovate with AI. These voluntary standards are anticipated to significantly influence the incoming mandatory guardrails for AI in high-risk settings.

Licensees planning to utilise AI should leverage these existing resources to ensure compliance with current and potential future regulatory requirements.

To understand these issues in greater detail, please see our recent paper here.

Copyright © 2024, K&L Gates LLP. All Rights Reserved.