Digital platforms are increasingly shaping what people read, watch and believe online, and a group of University of Canberra researchers argues that Australians are paying the price for not knowing how those systems work. Writing in an article republished from The Conversation, the academics say algorithm-driven feeds, search results and AI summaries are making editorial decisions that are hidden from users and difficult to challenge, while weakening the reach and financial footing of public-interest journalism.
Their warning lands at a moment when confidence in online information is already fragile. ABC News reported in February that a recent study found more than half of Australians believe location tracking is the most common misuse of artificial intelligence in the country, while large numbers also fear deepfake videos and impersonation scams. The researchers behind the new piece say those anxieties are being worsened by the rise of low-quality AI-generated material and by the growing use of “zero-click” search results, which present answers directly on the results page rather than sending readers to news sites.
The concern is not only that misinformation spreads faster, but that people are losing the means to judge what is credible. The Conversation article says Australians have low confidence in their ability to verify online content, and that many are now opting out of news altogether because the information environment feels overwhelming. That dynamic, the authors argue, gives opaque platforms even more power to decide which stories are amplified and which are effectively buried.
Calls for better safeguards are also coming from government and fact-checkers. The Australian Government has been promoting clearer labelling for AI-generated content and has highlighted existing complaints schemes and new laws dealing with deepfake abuse. Separately, AAP’s fact-check resource on AI visual disinformation advises users to look for labels, check whether images or videos have been debunked, and remain cautious because platform warnings are not always present or reliable. Researchers have also shown how easily AI can be used to manufacture convincing health disinformation, including fake material on vaccines and vaping.
Against that backdrop, the University of Canberra group says Australia needs a more transparent and better-regulated information system. Their proposed priorities include clearer disclosure from tech platforms about how content is ranked, stronger rules on AI companies’ use of news content, broader media and AI literacy, more stable funding for journalism and better training for digital-first creators. The authors argue that without such changes, invisible algorithmic systems will continue to determine the public’s view of the world, with serious consequences for trust, democracy and the survival of independent journalism.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [1]
- Paragraph 2: [2], [1]
- Paragraph 3: [1], [2]
- Paragraph 4: [5], [3], [4]
- Paragraph 5: [1]
Source: Noah Wire Services