30 April 2025
The Hon. R.A. SIMMS (17:48): I rise to speak in favour of this bill on behalf of the Greens, and in so doing I acknowledge the leadership of the Hon. Connie Bonaros. She is very passionate about this area and has been pushing the parliament to deal with this. We are in a situation where technology has developed at a pace that is out of step with legislation, and legislators like the Hon. Connie Bonaros have played a very important role in making sure that we pause and take note of those advances in technology and ensure that vulnerable people, in particular children, are not falling prey to this technology. I thank her for her leadership in this space.
Artificial intelligence has enormous potential benefits, but it also has the potential to harm society, the economy and our personal lives. Artificial intelligence technology has crossed a threshold with the capability to make people look and sound like other people. A deepfake is fabricated, hyper-realistic digital media, including video, image and audio content. Not only has this technology created confusion, scepticism and the spread of misinformation—and we have certainly seen this particularly in other jurisdictions in the context of election campaigns—but deepfakes also pose a threat to privacy, security and psychological wellbeing.
Manipulation of images is not new, but over recent decades, digital recording and editing techniques have made it far easier to produce fake visual and audio content not just of humans but of animals, machines and even inanimate objects. Advances in artificial intelligence (AI) and machine learning have taken the technology even further, allowing it to rapidly generate content that is extremely realistic, almost impossible to detect with the naked eye and very difficult to debunk. This is why the resulting photos, videos and sound files are called deepfakes.
To generate convincing content, deepfake technology often requires only a small amount of genuine data, images, footage or sound recordings. Indeed, the field is evolving so rapidly that deepfake content can be generated without the need for any human supervision at all. The possibilities for misuse of this technology are growing exponentially as digital distribution platforms become more publicly accessible and the tools to create deepfakes become relatively cheap, user friendly and mainstream.
Deepfakes have the potential to cause significant damage. They have been used to create fake news, false pornographic videos and malicious hoaxes, usually targeting well-known people such as politicians and celebrities. Potentially, deepfakes can be used as a tool for identity theft, extortion, sexual exploitation, reputational damage, ridicule, intimidation and harassment. Any person who is targeted by such efforts may experience financial loss, damage to their professional or social standing, fear, humiliation, shame, a loss of self-esteem or reduced confidence.
Reports of misrepresentation and deception could undermine trust in digital platforms and services and increase general levels of fear and suspicion within our society. As advances in deepfake technology gather pace and apps and tools are emerging that allow the general public to produce credible deepfakes, concerns are growing about the potential for harm to both individuals and society.
As noted in eSafety Commissioner Julie Inman Grant's opening statement to a Senate standing committee inquiring into last year's Criminal Code Amendment (Deepfake Sexual Material) Bill:
Deepfake detection tools are lagging behind the technology itself. Open-source AI apps have proliferated online and are often free and easy to use to create damaging digital content including deepfake image-based abuse material and hyper-realistic synthetic child sexual abuse material. Companies [should] be doing more to reduce the risks that their platforms can be used to generate damaging content.
However, using deepfakes to target and abuse others is not simply a technology problem. It is a result of social, cultural and behavioural issues that are being played out in the online space. As noted in the Australian Strategic Policy Institute's report, 'Weaponized deep fakes', deepfakes pose challenges to security and democracy. These include heightened potential for fraud, propaganda and disinformation, military deception and even the erosion of trust in our institutions and fair election processes.
The risks of deploying a technology without first assessing and addressing the potential for individual and societal impacts are very high. Deepfakes provide yet another example of the importance of safety by design to assist in anticipating and engineering out misuse from the outset. It is very clear that AI technology has rapidly outpaced government regulation. Digital rights are essential for a fair and just society. People deserve control over their data, transparency in automated decision-making and robust protections against misuse, including from the harmful practice of creating, distributing and threatening to distribute artificially generated images.
As I said at the outset, the Greens appreciate the work of the Hon. Connie Bonaros in this space. This is an important reform, I think, in terms of moving us towards a society that strikes a better balance between technology and the rights of all members of our society to live free from harm. The Greens support the bill.