Unit rationale, description and aim

Artificial Intelligence (AI) now influences decisions that shape people’s opportunities, organisational strategy, and societal outcomes. While AI enables powerful advances in prediction, optimisation, and innovation, it also introduces complex ethical, cultural, and governance challenges. Hidden biases, opaque decision pathways, privacy risks, and unequal access to AI technologies can undermine fairness, accountability, wellbeing, and public trust. Addressing these issues requires responsible, transparent, and culturally aware approaches to the design, development, and deployment of AI systems. 

This unit responds to the need for professionals who can design, govern, and evaluate AI responsibly. Taking a human-centred and interdisciplinary perspective, it draws on insights from computer science, psychology, ethics, social science, and design thinking to ensure that AI serves human values, dignity, and the common good. Ethics and human-centred thinking are positioned as core foundations for identifying societal risks, which in turn guide the development of fairness, accountability, and wellbeing-related measures. Students develop a fit-for-purpose design mindset grounded in recognised industry standards, ethical frameworks, and governance principles that promote trust, fairness, accountability, and cultural safety. 

The unit examines how communication and information flows shape the design, fit-for-purpose use, and governance of AI technologies, enabling students to understand how effective AI solutions are developed and applied in organisational contexts. Students also build a strong understanding of the AI lifecycle (design, development, testing, deployment, and post-deployment monitoring), with particular emphasis on the testing stage, where societal, cultural, and ethical risks must be identified and mitigated before deployment. Through real-world case studies across domains such as healthcare, education, and public safety, students explore how ethical, culturally aware, and human-centred principles can be embedded throughout the AI development process.

The aim of this unit is to develop graduates who can design, assess, and advocate for AI systems that are trustworthy, fair, accountable, and aligned with ACU’s mission of ethical innovation for the common good. 

Campus offering

No offerings are currently available for this unit.

Prerequisites

Nil

Learning outcomes

To successfully complete this unit you will be able to demonstrate you have achieved the learning outcomes (LO) detailed in the table below.

Each outcome is informed by a number of graduate capabilities (GC) to ensure your work in this, and every unit, is part of a larger goal of graduating from ACU with the attributes of insight, empathy, imagination and impact.

Learning Outcome 01

Critically analyse the ethical, social, and psychological implications of AI design and deployment across diverse contexts.
Relevant Graduate Capabilities: GC1, GC6, GC7, GC9

Learning Outcome 02

Evaluate and apply principles of fairness, accountability, transparency, and explainability to assess and mitigate algorithmic bias.
Relevant Graduate Capabilities: GC2, GC6, GC7, GC8

Learning Outcome 03

Design and propose human-centred AI solutions that integrate interdisciplinary perspectives from ethics, psychology, and design thinking.
Relevant Graduate Capabilities: GC2, GC6, GC7, GC8, GC10

Learning Outcome 04

Apply governance and industry standards, incorporating Indigenous data sovereignty and cultural safety, to ensure accountable and responsible AI development and use.
Relevant Graduate Capabilities: GC2, GC5, GC6, GC7

Learning Outcome 05

Communicate and reflect on strategies for developing trustworthy and inclusive AI systems that promote human well-being and social good.
Relevant Graduate Capabilities: GC3, GC6, GC11, GC12

Content

Topics will include:

  • Introduction to Human-Centred and Responsible AI (including Indigenous data sovereignty and cultural safety)
  • Human Values, Cultural Perspectives, and Societal Impact
  • Fairness, Bias, and Culturally Aware Evaluation in AI
  • Explainability, Transparency, and Ethical Communication
  • Human-Centred and Culturally Inclusive Design for AI
  • Governance, Policy, and Indigenous Principles of Data Stewardship
  • Trust, Safety, and AI for Social and Community Good
  • Future Directions, Reflective Practice, and Working Respectfully with Indigenous Communities

Assessment strategy and rationale

The assessments are designed to build progressively from guided exploration to independent application, enabling students to develop the analytical, ethical, and practical capabilities required for responsible AI-driven decision-making. Each task provides structured opportunities for feedback, reflection, and refinement, ensuring students can demonstrate real-world application of the concepts explored in the unit.

Assessment 1 (Interactive Decision Lab) introduces students to core ideas in decision intelligence and reinforcement learning through guided experimentation. The short reflective component helps students identify early ethical, social, and cultural implications arising from algorithmic decision processes.

Assessment 2 (Applied Project – AI-Driven Decision Prototype) deepens capability by requiring students to design and implement a fit-for-purpose AI decision system. Students apply principles of responsible and explainable AI, considering governance requirements, fairness, societal risks, and appropriate use within organisational contexts.

Assessment 3 (Reflective Portfolio) consolidates learning by prompting students to critically reflect on their design choices, ethical reasoning, information flows, and human-centred methods—including cultural safety and Indigenous data considerations—and how these shaped the development and evaluation of their prototype.

Together, these assessments create a coherent, practice-focused learning cycle that develops technical skill, ethical judgment, and professional reflective practice. 

To pass the unit, students must achieve all learning outcomes and an overall grade of 50% or higher.

Overview of assessments

Assessment Task 1: Interactive Decision Lab - Modelling and Simulation

Students complete a guided simulation exploring key concepts in decision intelligence and algorithmic decision-making, submitting a short technical notebook and reflection that examines ethical, social, and cultural risks. The purpose of this task is to build foundational understanding of automated decision processes and develop early capability in recognising societal and cultural considerations essential for responsible, fit-for-purpose AI design.

Weighting

20%

Learning Outcomes LO1, LO2, LO4, LO5
Graduate Capabilities GC1, GC2, GC3, GC7, GC8

Assessment Task 2: Applied Project – Human-Centred AI Prototype

Students design and implement an AI-driven decision system applying responsible, explainable, and inclusive AI principles. The professional report critically evaluates system performance, governance, and the social, cultural, and Indigenous data considerations relevant to its use in real-world contexts.

Weighting

50%

Learning Outcomes LO1, LO2, LO3, LO4, LO5
Graduate Capabilities GC1, GC2, GC5, GC6, GC7, GC8, GC10

Assessment Task 3: Reflective Portfolio – Ethical and Professional Reflection 

Students critically evaluate their design and implementation process, articulating how ethical frameworks, governance principles, and cultural awareness informed their work. They propose strategies for trustworthy, inclusive, and human-centred AI practice. 

Weighting

30%

Learning Outcomes LO3, LO4, LO5
Graduate Capabilities GC3, GC6, GC7, GC9, GC11

Learning and teaching strategy and rationale

This unit integrates advanced theoretical understanding with applied professional practice to equip students with the capabilities required for responsible and culturally informed AI development. Students engage with interactive learning materials, scholarly readings, guided discussions, and complex real-world case studies to investigate the ethical, human-centred, cultural, and governance implications of AI. Learning activities are designed to cultivate higher-order skills in analysis, ethical reasoning, reflective judgment, and responsible design, encouraging students to examine how values, power, and organisational systems shape technological outcomes.

Through problem-based tasks and applied design exercises, students translate conceptual frameworks into context-sensitive decision-making, demonstrating the capacity to evaluate risks, justify design choices, and apply relevant standards and governance principles. Formative feedback supports ongoing refinement of professional judgment and self-directed learning. Assessments are scaffolded to develop increasing levels of complexity across analysis, evaluation, communication, and design, ensuring clear alignment between learning outcomes, learning activities, and the expectations of postgraduate professional practice.

Representative texts and references


Akbarighatar, P. (2022). Maturity and Readiness Models for Responsible Artificial Intelligence (RAI): A systematic literature review. AIS eLibrary.

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.

Dignum, V. (2023). Responsible Artificial Intelligence: From Principles to Practice. A Keynote at TheWebConf 2022. ACM SIGIR Forum, 56(1), 1–6. ACM.

Floridi, L. (2023). The Ethics of Artificial Intelligence. Oxford University Press.

Jobin, A., Ienca, M., & Vayena, E. (2019). The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

McGregor, S., Paeth, K., & Lam, K. (2022). Indexing AI Risks with Incidents, Issues, and Variants. arXiv preprint.

Mitchell, M. (2020). Artificial Intelligence: A Guide for Thinking Humans. Penguin.

United Nations Educational, Scientific and Cultural Organization. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO. 

European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206


Frameworks & Tools

  • AI Fairness 360 Toolkit – ai-fairness-360.org
  • Evolving Human-Centred Design: Part 1 – Microsoft Design
  • HAX Workbook – Microsoft
  • Introduction to machine learning operations (MLOps) – Microsoft Learn
  • Model Card Toolkit – TensorFlow
  • Transformers documentation – Hugging Face
  • TensorFlow Responsible AI Toolkit – TensorFlow
  • OECD AI Principles – OECD
  • UNESCO Ethical AI Framework (Recommendation on the Ethics of Artificial Intelligence) – unesco.org
