Unit rationale, description and aim
Artificial Intelligence (AI) systems increasingly support high-stakes decisions, yet failures often stem not from model accuracy but from weaknesses in system architecture, data pipelines, security, and lifecycle governance. AI systems inherit traditional software vulnerabilities—such as insecure data flows, dependency risks, adversarial threats, and inadequate validation—and these risks are amplified when AI is deployed without strong Software Development Lifecycle (SDLC) practices. Effective AI development therefore requires early risk identification, secure-by-design engineering, robust data quality controls, and continuous monitoring to prevent societal, cultural, and organisational harm.
This unit positions AI as a socio-technical system that must be engineered and governed holistically across its lifecycle. Students examine how AI models and workflows interact with organisational processes and user needs, and how responsible system design depends on anticipating risks at the earliest stages. The unit highlights key international standards underpinning trustworthy AI, including ISO/IEC 42001 AIMS, ISO/IEC 5259 Data Quality for AI, and global incident-reporting frameworks. Students also explore inclusive and culturally aware co-design approaches, including principles informed by Australian Aboriginal and Torres Strait Islander perspectives, to support cultural safety and relational accountability. Through hands-on work with MLOps, secure cloud deployment, and automated testing, students learn to operationalise AI systems that are secure, transparent, reliable, and aligned with responsible innovation principles.
The aim of this unit is to develop students’ capability to design, implement, and manage intelligent systems that are scalable, secure, transparent, and responsible, grounded in recognised AI lifecycle and governance standards.
Learning outcomes
To successfully complete this unit, you will need to demonstrate that you have achieved the learning outcomes (LO) detailed in the table below.
Each outcome is informed by a number of graduate capabilities (GC) to ensure your work in this, and every unit, is part of a larger goal of graduating from ACU with the attributes of insight, empathy, imagination and impact.
Explore the graduate capabilities.
Learning Outcome 01: Analyse AI systems engineering principles includin...
Learning Outcome 02: Design and implement automated AI pipelines using ...
Learning Outcome 03: Apply development operations and machine learning ...
Learning Outcome 04: Evaluate and improve AI system performance, reliab...
Content
Topics will include:
- Foundations of AI Systems Engineering; lifecycle, architecture, secure-by-design, and co-design approaches informed by Australian Aboriginal and Torres Strait Islander perspectives
- Version Control and Experiment Tracking (Git, DVC, MLflow)
- Data Pipelines and Feature Engineering Automation
- Model Development, Benchmarking and Evaluation, and Cultural Safety
- Model Packaging, APIs, and Containerisation (Docker, FastAPI); Testing and Validation within the AI Development Lifecycle
- Continuous Integration / Continuous Deployment (CI/CD) for ML Systems
- Monitoring, Model Drift, and Lifecycle Management; AI Governance Standards and Frameworks (AIMS, Data Quality, Incident Reporting)
- Responsible AI Engineering and Sustainability: Integrative Case Study.
Assessment strategy and rationale
Assessments in this unit are designed to progressively develop students’ capability to design, operationalise, and evaluate AI systems in line with MLOps best practices and industry governance standards. The three assessment tasks move from conceptual understanding to practical implementation and, finally, to critical evaluation of responsible AI operations.
Assessment 1 develops foundational skills in analysing AI system architecture, lifecycle components, secure-by-design principles, and MLOps workflow design.
Assessment 2 extends this learning by requiring students to design and deploy an end-to-end automated machine-learning pipeline on a cloud platform, applying lifecycle, automation, and governance principles in practice.
Assessment 3 consolidates learning through reflective evaluation of the system’s ethical, operational, and governance considerations, including alignment with recognised standards for AI management, data quality, cultural safety, and incident reporting.
To pass the unit, students must demonstrate achievement of all learning outcomes and obtain an overall mark of 50% or higher.
Overview of assessments
Assessment Task 1: AI System Architecture and MLOps Design Workbook
Students analyse and justify an AI system architecture, addressing lifecycle components, secure-by-design practices, data management, and culturally aware co-design, including Indigenous data considerations. The purpose of this task is for students to apply key AI governance standards and propose appropriate MLOps workflows that support responsible, compliant system development.
25%
Assessment Task 2: Applied MLOps Pipeline Project
Students design, implement, and deploy an automated machine-learning pipeline incorporating data ingestion, training, validation, versioning, and CI/CD processes. They then evaluate system performance and operational reliability using lifecycle-based criteria to demonstrate responsible and robust pipeline development.
45%
Assessment Task 3: Reflective Report – Responsible Automation and Governance
Students critically evaluate the ethical, cultural (including considerations informed by Australian Aboriginal and Torres Strait Islander perspectives), operational, and governance implications of their deployed system. The report addresses fairness, cultural safety, transparency, accountability, sustainability, and alignment with recognised governance standards, including AI incident-reporting frameworks and responsible MLOps practice.
30%
Learning and teaching strategy and rationale
This unit combines advanced conceptual learning with practical, hands-on activities that reflect real AI engineering workflows. Students engage with interactive materials to build capability across system design, automation, deployment, and lifecycle management. Iterative feedback and reflective tasks help students link technical decisions with responsible innovation, including attention to societal, ethical, and Indigenous data considerations.
This strategy promotes independent learning, critical analysis, and professional readiness, strengthening students’ ability to evaluate risks, justify design choices, and apply industry standards for trustworthy and sustainable AI.