Authored by Andrew Truswell and Casper Xiao.
The rapid advancement of Artificial Intelligence (AI) has raised concerns about its potential negative impacts on society and the economy. As demands for AI regulation increase, it is crucial to understand and address the risks associated with this technology. In this thought leadership article, Andrew Truswell from Biztech Lawyers explores the pressing need for regulation, provides insights into AI definitions and risks, discusses existing legislation's handling of AI risks, highlights international developments in AI regulation, and emphasizes the importance of safe and responsible AI practices.
We ensure companies developing or using AI stay ahead of the curve, legally and ethically. From data use and privacy to IP, compliance, and emerging regulations, our AI law experts are here to help.
AI’s benefits are evident, but demand for regulation is growing at an unprecedented pace. The concern is the potential harm to our social and economic wellbeing when AI produces fake or misleading outputs. In response, the Australian Government has released a Discussion Paper that examines regulatory approaches within Australia and compares them to developments in other jurisdictions.
While no universally agreed-upon definition of AI currently exists, the Discussion Paper offers helpful working definitions. These include “Generative AI models”, describing models “which generate novel content, such as text, images, audio and code in response to prompts.”
The European Parliament’s Draft AI Act broadly defines AI as:
“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”.
The Discussion Paper identifies a range of opportunities and challenges associated with AI models. Notably, the production of fake outputs, including manipulative deepfakes, emerges as a grave concern, alongside the risks of misinformation, disinformation and outputs that incite self-harm.
The complexity is further amplified by Generative AI’s tendency to produce entirely incorrect outputs (known as “hallucinations”). A risk-based approach is therefore essential to navigating this territory and deploying AI models responsibly and securely.
The Discussion Paper highlights AI risks in key industries such as financial services, airline safety, motor vehicles and food. As these industries are already regulated, AI-specific rules must be tailored to address the gaps. The Paper also acknowledges potential overlaps with proposed changes to the Australian Privacy Act and existing consumer remedies under the Australian Consumer Law, and references the ethical standards set out in Australia’s AI Ethics Framework, released in 2019.
However, there are industries where existing domestic governance lacks adequate coverage, requiring the introduction of additional AI regulations for safe and ethical AI usage.
The GDPR set the global standard for data protection, applying extraterritorially to Australian businesses that process data (as controller or processor) or offer services in the EEA. Building on that precedent, the European Parliament passed a compromise text of the AI Act at the committee stage in June. If enacted, the law will place AI systems into several risk categories and ban those in the most harmful category: systems deemed to pose an unacceptable risk, such as social scoring systems that conduct real-time remote surveillance of people in public spaces. The law may not pass until 2025, but it could become a global standard for AI, much as the GDPR has for data protection.
The Discussion Paper acknowledges these developments and highlights other initiatives worldwide, such as the EU Digital Services Act (DSA) (Nov 2022), which applies to digital services that connect consumers to goods, services or content, creating obligations for online platforms to reduce harm, and counter online risks.
The United States continues to take a fragmented approach to AI regulation. There is currently no comprehensive federal AI statute. The only standalone federal AI-related law enacted to date is the TAKE IT DOWN Act (May 2025), which targets non-consensual intimate imagery and deepfake abuse.
AI regulation in the US remains largely state-driven, with jurisdictions such as Colorado and California advancing high-risk AI and automated decision-making rules. Federal agencies continue to rely on existing authorities, including consumer protection, competition and sector-specific powers, rather than a single unified AI framework.
Since this article was first published, Australia’s AI policy direction has materially shifted. In December 2025, the Government confirmed it would not introduce standalone AI legislation or mandatory AI guardrails. Instead, AI is regulated through existing legal frameworks, including the Privacy Act 1988 (Cth), the Australian Consumer Law, anti-discrimination law and sector-specific regimes, supported by the voluntary Guidance for AI Adoption (October 2025).
The Government is also establishing the Australian AI Safety Institute (AISI) in 2026, backed by AUD 29.9 million in funding, to support safe and responsible AI deployment.
Australian businesses must understand the current regulatory framework and anticipate future regulation, on both the domestic and international fronts. Given the rapid growth of AI technology and its impact on personal information, businesses must navigate their obligations carefully. The Discussion Papers on AI and Privacy are complex, and compliance with forthcoming regulation will require heightened attention to meet the expectations of Australians and regulatory authorities.
In conclusion, the Discussion Papers on AI and Privacy have emphasized an urgent need for AI regulation due to its potential risks. By understanding the nature of AI, adopting a risk-based approach, addressing gaps in existing legislation, and keeping abreast of international developments, businesses can navigate the legal landscape surrounding AI. It is crucial to prioritize safe and responsible AI practices to protect individuals and uphold ethical standards. Biztech Lawyers stands ready to assist businesses in navigating the legal challenges posed by AI.
Biztech Lawyers is an agile law firm comprising technology and data law experts who closely monitor the regulatory landscape across Australia, the UK, and the USA. Our expertise allows us to navigate the legal frontier of AI, ensuring businesses comply with evolving regulations and industry standards.
In need of legal support from a tech lawyer? Biztech Lawyers is a multi-award-winning law firm, known for fuelling and protecting tech innovation worldwide. Get in touch now to see how we can help.