A Queensland government department has sparked debate over its decision to keep its use of Artificial Intelligence (AI) under wraps when crafting social media content. The move raises questions about transparency as AI technology continues to evolve at a rapid pace.
Recently, Fisheries Queensland shared a peculiar image on their social media channels, depicting a floating fishing rod alongside nonsensical text. The accompanying message cautioned followers against placing trust in AI-generated information, proclaiming, "don’t trust AI for your fishing rules." This irony did not go unnoticed by Tama Leaver, a professor specializing in Internet studies at Curtin University. He pointed out the amusing contradiction in creating a post warning about the unreliability of AI while using AI to produce it.
Investigations by the ABC revealed that at least four images from Fisheries Queensland's Instagram and Facebook accounts were generated through AI image tools toward the end of the previous year. Normally, this department engages with an audience of over 143,000 followers by sharing vital information about safeguarding the state's fisheries resources.
The AI-generated posts covered various topics, including infringement notices, patrol operations, and recent court rulings. Two of the images were flagged by Google's AI detection tool, SynthID, indicating they carried the invisible watermark embedded in AI-generated content. Other visuals showed telltale signs of having been created by a different AI program. None of the posts, however, mentioned that they had been produced with AI, either in the captions or in the alt text.
Leaver, who also serves as a chief investigator at the ARC Centre of Excellence for the Digital Child, emphasized the growing necessity for governmental and public institutions to openly acknowledge their use of AI. He highlighted that as technology advances, it is becoming increasingly challenging to discern when AI has been employed. "Creating cartoonish, representational, and even photorealistic images is now trivially easy," he noted.
In an ideal world, he argued, transparency should be the norm whenever AI is used to generate content, although such disclosure is not yet mandated by law.
A representative for the Department of Primary Industries, responsible for managing the Fisheries Queensland social media presence, confirmed that AI was indeed used to create the imagery for their platforms. They stated that this approach was taken for illustrative purposes, particularly in scenarios where real images could not be used due to privacy, legal, or operational constraints. Furthermore, the spokesperson asserted, "We have not received any concerns suggesting that the images used on our social media channels were unclear or mistaken for real imagery, rather than AI-generated illustrations."
Queensland government guidelines on generative AI highlight the potential benefits of this technology in enhancing productivity. However, they also recommend that employees ensure that any AI-generated content is "clearly" identified as such.
Moreover, during the 2024 state election, the newly elected LNP government circulated a deepfake video featuring the Labor leader dancing on TikTok, which they defended as being "clearly labelled as being created with AI."
Marketing expert Paul Harrison from Deakin University remarked that an increasing number of government agencies across Australia are turning to AI for their social media strategies. This trend stems from the pressure organizations face to improve efficiency in all sectors. Harrison noted that the public has a reasonable expectation for these agencies to act transparently and responsibly. He further commented that the AI-generated images on Fisheries Queensland's social media accounts were "obviously generated by AI" and that there should be an acknowledgment of this fact.
Interestingly, Harrison pointed out that once people realize something was made by AI, their reactions tend to skew negatively. He stressed that failing to disclose the use of AI can lead to another set of issues, where people might wonder why the information was withheld.
"If you’re hesitant to disclose it, perhaps you should reflect on why that is," he suggested. From a marketing standpoint, he expressed doubts about the effectiveness of AI-generated images, questioning whether this was truly the best way to engage audiences.
"In my view, this approach seems more lazy than anything else," he concluded.
The situation raises broader questions: should public agencies be more upfront about their use of AI, and what are the implications of AI-generated content in public communications?