Unit rationale, description and aim
Artificial Intelligence (AI) now influences decisions that shape people’s opportunities, organisational strategy, and societal outcomes. While AI enables powerful advances in prediction, optimisation, and innovation, it also introduces complex ethical, cultural, and governance challenges. Hidden biases, opaque decision pathways, privacy risks, and unequal access to AI technologies can undermine fairness, accountability, wellbeing, and public trust. Addressing these issues requires responsible, transparent, and culturally aware approaches to the design, development, and deployment of AI systems.
This unit responds to the need for professionals who can design, govern, and evaluate AI responsibly. Taking a human-centred and interdisciplinary perspective, it draws on insights from computer science, psychology, ethics, social science, and design thinking to ensure that AI serves human values, dignity, and the common good. Ethics and human-centred thinking are positioned as core foundations for identifying societal risks, which in turn guide the development of fairness, accountability, and wellbeing-related measures. Students develop a fit-for-purpose design mindset grounded in recognised industry standards, ethical frameworks, and governance principles that promote trust, fairness, accountability, and cultural safety.
The unit examines how communication and information flows shape the design, fit-for-purpose use, and governance of AI technologies, enabling students to understand how effective AI solutions are developed and applied in organisational contexts. Students also build a strong understanding of the AI lifecycle (design, development, testing, deployment, and post-deployment monitoring), with particular emphasis on the testing stage, where societal, cultural, and ethical risks must be identified and mitigated before deployment. Through real-world case studies across domains such as healthcare, education, and public safety, students explore how ethical, culturally aware, and human-centred principles can be embedded throughout the AI development process.
The aim of this unit is to develop graduates who can design, assess, and advocate for AI systems that are trustworthy, fair, accountable, and aligned with ACU’s mission of ethical innovation for the common good.
Campus offering
No unit offerings are currently available for this unit.
Learning outcomes
To successfully complete this unit you will be able to demonstrate you have achieved the learning outcomes (LO) detailed in the below table.
Each outcome is informed by a number of graduate capabilities (GC) to ensure your work in this, and every unit, is part of a larger goal of graduating from ACU with the attributes of insight, empathy, imagination and impact.
Learning Outcome 01: Critically analyse the ethical, social, and psycho...
Learning Outcome 02: Evaluate and apply principles of fairness, account...
Learning Outcome 03: Design and propose human-centred AI solutions that...
Learning Outcome 04: Apply governance and industry standards, incorpora...
Learning Outcome 05: Communicate and reflect on strategies for developi...
Content
Topics will include:
- Introduction to Human-Centred and Responsible AI (including Indigenous data sovereignty and cultural safety)
- Human Values, Cultural Perspectives, and Societal Impact
- Fairness, Bias, and Culturally Aware Evaluation in AI
- Explainability, Transparency, and Ethical Communication
- Human-Centred and Culturally Inclusive Design for AI
- Governance, Policy, and Indigenous Principles of Data Stewardship
- Trust, Safety, and AI for Social and Community Good
- Future Directions, Reflective Practice, and Working Respectfully with Indigenous Communities
Assessment strategy and rationale
The assessments are designed to build progressively from guided exploration to independent application, enabling students to develop the analytical, ethical, and practical capabilities required for responsible AI-driven decision-making. Each task provides structured opportunities for feedback, reflection, and refinement, ensuring students can demonstrate real-world application of the concepts explored in the unit.
Assessment 1 (Interactive Decision Lab) introduces students to core ideas in decision intelligence and reinforcement learning through guided experimentation. The short reflective component helps students identify early ethical, social, and cultural implications arising from algorithmic decision processes.
Assessment 2 (Applied Project – AI-Driven Decision Prototype) deepens capability by requiring students to design and implement a fit-for-purpose AI decision system. Students apply principles of responsible and explainable AI, considering governance requirements, fairness, societal risks, and appropriate use within organisational contexts.
Assessment 3 (Reflective Portfolio) consolidates learning by prompting students to critically reflect on their design choices, ethical reasoning, information flows, and human-centred methods—including cultural safety and Indigenous data considerations—and how these shaped the development and evaluation of their prototype.
Together, these assessments create a coherent, practice-focused learning cycle that develops technical skill, ethical judgment, and professional reflective practice.
To pass the unit, students must achieve all learning outcomes and an overall grade of 50% or higher.
Overview of assessments
Assessment Task 1: Interactive Decision Lab - Modelling and Simulation
Students complete a guided simulation exploring key concepts in decision intelligence and algorithmic decision-making, submitting a short technical notebook and reflection that examines ethical, social, and cultural risks. The purpose of this task is to build foundational understanding of automated decision processes and develop early capability in recognising societal and cultural considerations essential for responsible, fit-for-purpose AI design.
20%
Assessment Task 2: Applied Project – Human-Centred AI Prototype
Students design and implement an AI-driven decision system applying responsible, explainable, and inclusive AI principles. The professional report critically evaluates system performance, governance, and the social, cultural, and Indigenous data considerations relevant to its use in real-world contexts.
50%
Assessment Task 3: Reflective Portfolio – Ethical and Professional Reflection
Students critically evaluate their design and implementation process, articulating how ethical frameworks, governance principles, and cultural awareness informed their work. They propose strategies for trustworthy, inclusive, and human-centred AI practice.
30%
Learning and teaching strategy and rationale
This unit integrates advanced theoretical understanding with applied professional practice to equip students with the capabilities required for responsible and culturally informed AI development. Students engage with interactive learning materials, scholarly readings, guided discussions, and complex real-world case studies to investigate the ethical, human-centred, cultural, and governance implications of AI. Learning activities are designed to cultivate higher-order skills in analysis, ethical reasoning, reflective judgment, and responsible design, encouraging students to examine how values, power, and organisational systems shape technological outcomes. Through problem-based tasks and applied design exercises, students translate conceptual frameworks into context-sensitive decision-making, demonstrating the capacity to evaluate risks, justify design choices, and apply relevant standards and governance principles. Formative feedback supports ongoing refinement of professional judgment and self-directed learning. Assessments are scaffolded to develop increasing levels of complexity across analysis, evaluation, communication, and design, ensuring clear alignment between learning outcomes, learning activities, and the expectations of postgraduate professional practice.