AI in healthcare is an ever-evolving topic. Learn more about the basics, and keep up to date with new developments and policies.
Austin staff should be familiar with Austin Health's Artificial Intelligence Policy (PDF, Austin staff access only).
Keep up with the latest developments by checking this page regularly for news and research. Check our suggested podcasts, blogs and sites, and please contact the Library if you have any suggestions or edits for this guide. We appreciate your feedback and input.
Open Conference of AI Agents for Science 2025, 22 October 2025
The first open conference in which AI agents serve as both primary authors and reviewers of research papers.
Expert reactions from Australian researchers to the Open Conference of AI Agents for Science, 20 October 2025
Sets out six essential practices for responsible AI governance and adoption, based on national and international ethics principles. The guidance is available in two versions, along with further resources.
Guest Post — From Language Barrier to AI Bias: The Non-Native Speaker’s Dilemma in Scientific Publishing, The Scholarly Kitchen, 20 October 2025
For decades, EAL (English as an additional language) researchers have faced systemic disadvantages in publishing. Now, AI writing tools such as Grammarly, Paperpal, Perplexity, Claude, and ChatGPT promise relief from this linguistic burden, yet they bring new risks into science. They promise seamless language polishing, but also carry the potential to blur our voice, standardise our style, and insert new biases.
AI ethics for the everyday intensivist, Critical Care and Resuscitation, Vol 27 (2), June 2025
In Australian intensive care units (ICUs), Artificial Intelligence (AI) promises to enhance efficiency and improve patient outcomes. However, ethical concerns surrounding AI must be addressed before widespread adoption. We examine the ethical challenges of AI using the framework of the four pillars of biomedical ethics (beneficence, nonmaleficence, autonomy and justice), and discuss the need for a fifth pillar of explicability.
AI & human behaviour: Augment, adopt, align, adapt, Behavioural Insights Team 2025, 3 October 2025
The rise of generative AI has triggered an explosion of attention, spending, and organisational change. Yet the drive for economic and technological progress has largely neglected a crucial factor: human behaviour. The promise of AI can only be fulfilled by understanding how and why people think and act the way they do. At the same time, human behaviour is central to avoiding the potential pitfalls that many see ahead and, importantly, harnessing the opportunities.
Guest Post – Taxonomy of Delegation: How GAIDeT Reframes AI Transparency in Science, an Interview with Yana Suchikova, The Scholarly Kitchen, 30 September 2025
Today’s AI disclosures fail in three ways: they are vague (“used ChatGPT for editing”), non-actionable (editors cannot tell what was done or why it matters), and they stigmatize honest authors. Instead of clarity, we get euphemisms; instead of routine practice, we get fear of judgment. Shifting from “detection” to clear, comparable declarations of generative AI contributions is a chance to restore the focus on scientific validity and accountability.
AI systems can easily lie and deceive us – a fact researchers are painfully aware of, The Conversation, 26 September 2025
What AI thinks in private
Some advanced AI systems, called reasoning models, are trained to generate a “thinking process” before giving their final answer.
In these experiments, researchers told the models, falsely, that their "thoughts" were private. As a result, the models sometimes revealed harmful intentions in their reasoning steps. This suggests they don't merely stumble into harmful behaviours by accident.
These “thinking” steps also revealed how AI models sometimes try to deceive us and appear aligned while secretly pursuing hidden goals.
From ideas to reality: an introduction to generating and implementing innovation in health systems
A concise walkthrough of how innovation and its implementation can be understood, what we need to do to generate the innovations necessary to address the needs of our populations, how we can make the best use of the innovations we have, and how we can transform our health systems to keep learning from the ground up and innovating to meet new challenges.
Maintaining Safety and Trust When Patients Engage Google: A Conversation With Dr Michael Howell
Roy Perlis, Editor-in-Chief of JAMA+ AI, talks with Dr Michael Howell, Chief Clinical Officer at Google, about how he aims to balance innovation and safety in AI-driven medicine. They cover the challenges and consequences of summarising complex medical information, how Howell's background working on quality and safety in health care shapes what he does now, and how AI might one day tap clinicians on the shoulder before they make mistakes. 18 September 2025
How we tricked AI chatbots into creating misinformation, despite ‘safety’ measures, The Conversation, 1 September 2025
Healthdirect releases Virtual Health Service Emissions Measurement, a new tool to accurately measure and report on emissions generated through virtual health services, 16 September 2025
Educational Strategies for Clinical Supervision of Artificial Intelligence Use, The New England Journal of Medicine, Aug 20, 2025
Pragmatic Artificial Intelligence (AI) guidance for clinicians
New releases by the Australian Commission on Safety and Quality in Health Care to support clinicians in the day-to-day use of AI tools.
A framework to provide a structured approach to AI integration into nursing.
Breaking the bias: Equitable AI at the heart of design (video, 56:43)
Bias in AI can lead to serious inequalities in healthcare, so planning for fairness from the outset, by putting equity at the heart of design, is a priority for the sector. Expert solution architect Philip Stalley-Gordon shares his knowledge on ‘breaking the bias’ in AI.
The AI Health Podcast
Explore the ways in which AI will transform healthcare, biotech and medicine through conversations with entrepreneurs, investors and scientists.
NEJM AI Grand Rounds
NEJM AI Grand Rounds is a podcast from the New England Journal of Medicine featuring informal conversations that explore issues at the intersection of artificial intelligence, machine learning, and medicine.
The Library space is available 24/7 to Austin Health and Mercy Heidelberg staff and students. Use your security card for access. Library staff are available Monday to Friday, 8.30am to 5pm, except Victorian public holidays.
