The BGA Australia Team, led by Managing Director Michael “Mick” McNeill, wrote an update to clients on the Australian government’s approach to artificial intelligence (AI) regulation.

Context

  • The Australian government’s regulation of AI will focus on the technology’s use in high-risk settings by establishing mandatory guardrails. In the immediate term, Canberra will work with industry to develop a voluntary AI Safety Standard and options for the voluntary labeling and watermarking of AI-generated materials.
  • The government’s approach is outlined in its interim response to the Safe and Responsible AI in Australia discussion paper. Submissions to the discussion paper highlighted risks associated with new, powerful AI models. There was consensus that voluntary guardrails alone are insufficient but that mandatory guardrails should apply only to high-risk applications of AI. Submissions also emphasized that any regulatory response should be interoperable with international approaches.

Significance

  • The National AI Centre will work with industry to develop the voluntary AI Safety Standard, while the Department of Industry, Science and Resources will work with industry on options for the voluntary labeling and watermarking of AI-generated materials in high-risk settings.
  • The government is already addressing harms that AI poses in other areas. These efforts include reforming privacy law, reviewing the Online Safety Act, drafting laws on misinformation and disinformation, and releasing a cybersecurity strategy.

Implications

  • The government acknowledges that there are diverse views on what constitutes “high-risk” AI and that further work is needed to define the criteria of risk categorization. Industry Minister Ed Husic said the starting point for assessing high risk would be “anything that affects the safety of people’s lives or someone’s future prospects in work or with the law.”
  • Australia’s response has been informed by developments in the European Union, the United States and Canada, and the government intends to deepen international cooperation on setting AI standards. The government acknowledges that while the adoption of AI and automation could substantially boost Australia’s productivity, public trust that AI systems are being developed, deployed and used safely and responsibly remains low.

We will continue to keep you updated on developments in Australia as they occur. If you have any questions or comments, please contact BGA Australia Managing Director Michael “Mick” McNeill at mmcneill@bowergroupasia.com.

Best regards,

BGA Australia Team