Unit rationale, description and aim

Artificial Intelligence systems increasingly support high-stakes decisions, yet failures often stem not from model accuracy but from weaknesses in system architecture, data pipelines, security, and lifecycle governance. AI systems inherit traditional software vulnerabilities—such as insecure data flows, dependency risks, adversarial threats, and inadequate validation—and these risks are amplified when AI is deployed without strong Software Development Lifecycle (SDLC) practices. Effective AI development therefore requires early risk identification, secure-by-design engineering, robust data quality controls, and continuous monitoring to prevent societal, cultural, and organisational harm. 

This unit positions AI as a socio-technical system that must be engineered and governed holistically across its lifecycle. Students examine how AI models and workflows interact with organisational processes and user needs, and how responsible system design depends on anticipating risks at the earliest stages. The unit highlights key international standards underpinning trustworthy AI, including ISO/IEC 42001 AIMS, ISO/IEC 5259 Data Quality for AI, and global incident-reporting frameworks. Students also explore inclusive and culturally aware co-design approaches, including principles informed by Australian Aboriginal and Torres Strait Islander perspectives, to support cultural safety and relational accountability. Through hands-on work with MLOps, secure cloud deployment, and automated testing, students learn to operationalise AI systems that are secure, transparent, reliable, and aligned with responsible innovation principles. 

The aim of this unit is to develop students’ capability to design, implement, and manage intelligent systems that are scalable, secure, transparent, and responsible, grounded in recognised AI lifecycle and governance standards. 


Campus offering


Unit offerings may be subject to minimum enrolment numbers.



Prerequisites

Nil

Learning outcomes

To successfully complete this unit, you will need to demonstrate that you have achieved the learning outcomes (LO) detailed in the table below.

Each outcome is informed by a number of graduate capabilities (GC) to ensure your work in this, and every unit, is part of a larger goal of graduating from ACU with the attributes of insight, empathy, imagination and impact.



Learning Outcome 01

Analyse AI systems engineering principles including architecture, lifecycle management, secure-by-design practice, and co-design informed by Australian Aboriginal and Torres Strait Islander perspectives in line with ethical, cultural, and industry standards.
Relevant Graduate Capabilities: GC1, GC5, GC7, GC10


Learning Outcome 02

Design and implement automated AI pipelines using responsible lifecycle practices.
Relevant Graduate Capabilities: GC3, GC7, GC8, GC10


Learning Outcome 03

Apply development operations and machine learning operations methods such as version control, continuous integration and delivery, and infrastructure-as-code to operationalise AI systems.
Relevant Graduate Capabilities: GC1, GC2, GC10


Learning Outcome 04

Evaluate and improve AI system performance, reliability, fairness, and cultural safety, and communicate decisions clearly.
Relevant Graduate Capabilities: GC7, GC8, GC10

Content

Topics will include:

  • Foundations of AI Systems Engineering; lifecycle, architecture, secure-by-design, and co-design approaches informed by Australian Aboriginal and Torres Strait Islander perspectives
  • Version Control and Experiment Tracking (Git, DVC, MLflow) 
  • Data Pipelines and Feature Engineering Automation 
  • Model Development, Benchmarking and Evaluation, and Cultural Safety
  • Model Packaging, APIs, and Containerisation (Docker, FastAPI); Testing and Validation within the AI Development Lifecycle
  • Continuous Integration / Continuous Deployment (CI/CD) for ML Systems
  • Monitoring, Model Drift, and Lifecycle Management; AI Governance Standards and Frameworks (AIMS, Data Quality, Incident Reporting)
  • Responsible AI Engineering and Sustainability: Integrative Case Study.
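Several of the topics above (data versioning, pipeline automation, and validation gates within CI/CD) can be illustrated in miniature. The following sketch is illustrative only and uses no real MLOps tooling; all function names and the toy dataset are hypothetical. It shows the pattern these tools implement at scale: hash the data so each run is traceable to an exact dataset version, train, then gate deployment on a validation check.

```python
import hashlib
import json

def dataset_version(records):
    """Content-hash the dataset so every run records exactly which
    data it saw; tools such as DVC apply the same idea at scale."""
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]

def ingest():
    # Stand-in for a real data source; each record is (feature, label).
    return [(0.0, 0), (1.0, 0), (2.0, 1), (3.0, 1)]

def train(records):
    # Toy "model": a threshold midway between the two class means.
    zeros = [x for x, y in records if y == 0]
    ones = [x for x, y in records if y == 1]
    threshold = (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2
    return {"threshold": threshold}

def validate(model, records):
    # Accuracy gate: a CI/CD pipeline would block deployment on failure.
    correct = sum(1 for x, y in records
                  if (x > model["threshold"]) == bool(y))
    return correct / len(records)

def run_pipeline(min_accuracy=0.9):
    # Ingest -> train -> validate, logging a traceable run record.
    records = ingest()
    model = train(records)
    accuracy = validate(model, records)
    return {
        "data_version": dataset_version(records),
        "model": model,
        "accuracy": accuracy,
        "deployed": accuracy >= min_accuracy,
    }

if __name__ == "__main__":
    print(run_pipeline())
```

In a production setting each stage would be a separately versioned, monitored component (e.g. tracked with MLflow and deployed via CI/CD), but the control flow, and the idea that deployment is conditional on validation, is the same.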

Assessment strategy and rationale

Assessments in this unit are designed to progressively develop students’ capability to design, operationalise, and evaluate AI systems in line with MLOps best practices and industry governance standards. The three assessment tasks move from conceptual understanding to practical implementation and, finally, to critical evaluation of responsible AI operations. 

Assessment 1 develops foundational skills in analysing AI system architecture, lifecycle components, secure-by-design principles, and MLOps workflow design. 

Assessment 2 extends this learning by requiring students to design and deploy an end-to-end automated machine-learning pipeline on a cloud platform, applying lifecycle, automation, and governance principles in practice. 

Assessment 3 consolidates learning through reflective evaluation of the system’s ethical, operational, and governance considerations, including alignment with recognised standards for AI management, data quality, cultural safety, and incident reporting. 

To pass the unit, students must demonstrate achievement of all learning outcomes and obtain an overall mark of 50% or higher. 

Overview of assessments


Assessment Task 1: AI System Architecture and MLOps Design Workbook

Students analyse and justify an AI system architecture, addressing lifecycle components, secure-by-design practices, data management, and culturally aware co-design, including Indigenous data considerations. This task requires students to apply key AI governance standards and propose appropriate MLOps workflows that support responsible, compliant system development.

Weighting

25%

Learning Outcomes LO1, LO2, LO4
Graduate Capabilities GC1, GC7, GC8


Assessment Task 2: Applied MLOps Pipeline Project

Students design, implement, and deploy an automated machine-learning pipeline incorporating data ingestion, training, validation, versioning, and CI/CD processes. They then evaluate system performance and operational reliability using lifecycle-based criteria to demonstrate responsible and robust pipeline development.

Weighting

45%

Learning Outcomes LO1, LO2, LO3
Graduate Capabilities GC1, GC2, GC8, GC10


Assessment Task 3: Reflective Report – Responsible Automation and Governance 

 Students critically evaluate the ethical, cultural (including considerations informed by Australian Aboriginal and Torres Strait Islander perspectives), operational, and governance implications of their deployed system. The report addresses fairness, cultural safety, transparency, accountability, sustainability, and alignment with recognised governance standards, including AI incident-reporting frameworks and responsible MLOps practice. 

Weighting

30%

Learning Outcomes LO1, LO3, LO4
Graduate Capabilities GC5, GC7, GC8, GC11

Learning and teaching strategy and rationale

This unit combines advanced conceptual learning with practical, hands-on activities that reflect real AI engineering workflows. Students engage with interactive materials to build capability across system design, automation, deployment, and lifecycle management.  Iterative feedback and reflective tasks help students link technical decisions with responsible innovation, including attention to societal, ethical, and Indigenous data considerations.  

This strategy promotes independent learning, critical analysis, and professional readiness, strengthening students’ ability to evaluate risks, justify design choices, and apply industry standards for trustworthy and sustainable AI.

Representative texts and references


Amershi, S., et al. (2019). Software Engineering for Machine Learning: A Case Study. 41st International Conference on Software Engineering: Software Engineering in Practice. DOI:10.1109/ICSE-SEIP.2019.00042

Chen, C., et al. (2022). Reliable Machine Learning: Applying SRE Principles to ML in Production. O’Reilly Media.

Crowe, R., Hapke, H., Caveness, E., & Zhu, D. (2023). Machine Learning Production Systems: Engineering Real-World ML Pipelines. O'Reilly Media.

Gift, N., & Deza, A. (2021). Practical MLOps: Operationalizing Machine Learning Models. O'Reilly Media.

Microsoft Learn. Introduction to Machine Learning Operations (MLOps). Training module.

Kelleher, J.D. (2019). Deep Learning. MIT Press. 

Microsoft Azure Machine Learning Documentation. Microsoft Learn.

MLOps: Continuous Delivery and Automation Pipelines in Machine Learning. Cloud Architecture Center, Google Cloud.

Practical MLOps: How to Get Ready for Production Models. O'Reilly Media.

Raj, E. (2021). Engineering MLOps: Rapidly Build, Test, and Manage Production‑ready ML Models. Packt Publishing.

Sculley, D., et al. (2015). Hidden Technical Debt in Machine Learning Systems. In C. Cortes, D. D. Lee, M. Sugiyama, & R. Garnett (Eds.), NIPS'15: Proceedings of the 29th International Conference on Neural Information Processing Systems, Volume 2.

Treveil, M., Omont, N., Stenac, C., Lefevre, K., et al. (2020). Introducing MLOps: How to Scale Machine Learning in the Enterprise. O’Reilly Media.

