Ausmed's approach to responsible AI

December 2025


At Ausmed, we use artificial intelligence to improve healthcare education while protecting safety, trust, and professional integrity.

Healthcare learning is a high-stakes environment. For that reason, every AI-enabled feature we release is governed by a formal Generative AI Governance Framework that embeds safety, ethics, and quality controls into how our products are designed, built, and operated.

This page provides a concise summary of our approach. It links to our full Generative AI Governance Framework for those who want detailed technical and governance information.

Governance by design

Ausmed applies a governance-by-design approach to artificial intelligence.

Safeguards are built into every stage of the AI lifecycle, including model selection, prompt design, evaluation, deployment, monitoring, and continuous improvement. Governance is embedded from the outset and is not treated as a one-off compliance activity.

AI governance is integrated into Ausmed's enterprise-wide governance model, with defined accountabilities, escalation pathways, and oversight at executive and Board level. Responsibility for AI use sits with people, not systems.

A risk-based approach

Not all AI use carries the same level of risk.

Ausmed applies a risk-based approach that focuses on identifying and mitigating known and foreseeable risks before AI-enabled capabilities are made available to customers. Risks such as inaccurate outputs, bias, hallucinations, privacy breaches, and erosion of learner trust are assessed using structured evaluations, stress testing, and documented risk matrices.

Higher-risk use cases require stronger controls, senior leadership review, and, where appropriate, Board oversight.

Human oversight is mandatory

AI does not operate independently in Ausmed products.

Where AI is used to assist with educational content or learning experiences, qualified subject-matter experts review its outputs for accuracy, safety, bias, and appropriateness. This human-in-the-loop oversight is particularly important where AI outputs could influence clinical understanding, learner perception, or professional development.

Human oversight continues after release through monitoring, audits, and learner feedback loops. AI use is continuously reviewed rather than considered complete at launch.

Quality, evaluation, and continuous improvement

Ausmed operates a formal Quality Management System for AI that spans:

  • Quality assurance through proactive model selection, prompt design, and pre-release evaluation
  • Quality control through post-deployment monitoring, hallucination detection, and safety evaluations
  • Quality improvement through iterative refinement based on performance data, audits, and learner feedback

These controls ensure AI systems perform reliably, safely, and as intended over time.

TRUSTED governance principles

Ausmed's approach to AI is guided by our TRUSTED governance principles, which apply across the organisation and directly to our use of generative AI.

  • Transparency: Being open and accountable about how AI is used and how decisions are made
  • Reliability: Ensuring consistent, safe, and monitored AI performance
  • User-centric: Designing AI-enabled experiences around the needs and diversity of learners
  • Stewardship: Taking long-term responsibility for the impacts of AI use
  • Timely: Applying governance in a timely, proportionate way that supports effective decision-making
  • Ethics compliance: Upholding legal, regulatory, and moral obligations in AI use
  • Data-driven: Using evidence, evaluation, and feedback to guide oversight and improvement

Transparency and disclosure

Transparency is a core requirement of responsible AI use.

We openly share that generative AI is used to assist with content creation and to improve learner experiences. All AI-assisted processes keep a human in the loop, and outputs are reviewed by qualified professionals and subject-matter experts.

Ausmed supports clear disclosure and designs systems so AI-enabled decisions and outputs can be understood, reviewed, and challenged when required.

Standards and alignment

Ausmed's Generative AI Governance Framework aligns with recognised national and international standards, including Australia's AI Ethics Principles and the Voluntary AI Safety Standard.

These standards are embedded into Ausmed's systems and processes, supporting regulatory readiness, long-term resilience, and trust with learners, partners, and regulators.

Our commitment

We will continue to use AI where it delivers genuine educational value and improves learning outcomes. We will limit or avoid AI use where the risks outweigh the benefits.

Responsible AI use at Ausmed is not about speed or novelty. It is about delivering safe, high-quality learning experiences that healthcare professionals can trust.

References

Council of Deans of Nursing and Midwifery (CDNM). (2025, October). Position statement on the use of artificial intelligence in nursing and midwifery education, research and practice. CDNM.

CSIRO. (n.d.). Diversity and inclusion in AI guidelines. CSIRO.

Department of Finance, Australia. (2024). Implementing Australia's AI ethics principles in government. Australian Government.

Department of Industry, Science and Resources, Australia. (2024). Voluntary AI safety standard. Australian Government.

National Institute of Standards and Technology (NIST). (2023). Artificial intelligence risk management framework (AI RMF 1.0). U.S. Department of Commerce.

Office of the Australian Information Commissioner (OAIC). (2024). Developing and training generative AI models: Privacy guidance. Australian Government.

Organisation for Economic Co-operation and Development (OECD). (2019). Recommendation of the Council on artificial intelligence. OECD Legal Instruments. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

© Ausmed Education Pty Ltd 2025