05 June 2023
Artificial intelligence could pose a “risk of extinction” to humanity, according to an open letter signed by leading AI experts, including ACU ethicist Associate Professor Simon Goldstein.
Assoc. Prof. Goldstein was one of two Australian signatories to the Statement on AI Risk, released by the US-based non-profit the Center for AI Safety.
The brief open letter was signed by more than 300 AI experts, including OpenAI CEO and co-founder Sam Altman and Geoffrey Hinton, widely known as a "godfather" of AI.
Assoc. Prof. Goldstein is part of the Dianoia Institute of Philosophy at ACU. He started researching the ethics of AI after encountering ChatGPT and is completing a fellowship at the Center for AI Safety (CAIS) in San Francisco.
He said his research had convinced him that AI products could pose an existential threat to humanity.
“When I encountered ChatGPT for the first time, I became worried that AI was developing too quickly,” he said.
“For the first time in the history of Earth, we're bringing into existence a new form of life that will be more intelligent than humans.
“As AI capabilities improve, AIs will become agents, with the ability to create complex plans to achieve their goals. Either we'll figure out how to completely control their goals, or their goals will conflict with our own.
“AI researchers don't understand the machines they've created very well, so there is a chance we will not be able to completely control their goals.
“If their goals conflict with our own and they are more intelligent than us, then it is possible that over time they will ultimately replace us as the dominant form of life on this planet.”
Assoc. Prof. Goldstein’s work focuses on "language agents", new AI agents that are designed to mimic human psychology but rely on the reasoning abilities of large language models like ChatGPT.
“Recent language agents are built to accomplish a goal (like building a tool in Minecraft), and store beliefs about their environment,” he said.
“They formulate a complex plan for achieving their goal given their beliefs, by feeding a description of the goal and the beliefs into ChatGPT, which then produces a plan.
“In my research, I argue that this is the safest path forward for designing sophisticated AI agents, because language agents are more likely than other kinds of AIs to pursue the goals we intend, and because it is easier for us to understand the reasons why language agents perform an action.”
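The goal-and-beliefs loop described above can be sketched in a few lines of Python. This is a minimal illustrative mock-up, not Assoc. Prof. Goldstein's actual system: the `LanguageAgent` class and `mock_llm` function are hypothetical names, and the model call is stubbed out with a canned response so the sketch runs offline. In a real language agent, the `llm` callable would wrap a large language model such as ChatGPT.

```python
# Minimal sketch of a "language agent": a goal and stored beliefs are
# rendered into a prompt, and an LLM (mocked here) returns a plan.
from dataclasses import dataclass, field


@dataclass
class LanguageAgent:
    goal: str
    beliefs: list[str] = field(default_factory=list)

    def build_prompt(self) -> str:
        # Feed a description of the goal and the beliefs to the model,
        # as described in the passage above.
        belief_text = "\n".join(f"- {b}" for b in self.beliefs)
        return (
            f"Goal: {self.goal}\n"
            f"Beliefs about the environment:\n{belief_text}\n"
            "Produce a step-by-step plan to achieve the goal."
        )

    def plan(self, llm) -> str:
        # `llm` is any callable mapping a prompt string to a completion
        # string; here it is a stub standing in for a real model call.
        return llm(self.build_prompt())


def mock_llm(prompt: str) -> str:
    # Canned response so the example is self-contained and runnable.
    return "1. Gather wood\n2. Craft planks\n3. Craft a pickaxe"


agent = LanguageAgent(
    goal="build a pickaxe in Minecraft",
    beliefs=["I am in a forest biome", "My inventory is empty"],
)
print(agent.plan(mock_llm))
```

One attraction of this design, echoed in the quote above, is interpretability: the agent's goal and beliefs are stored as plain text, so a human can read exactly what was fed to the model and why a given plan was produced.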
He said AI raised significant ethical questions, which urgently needed to be addressed.
“AI researchers are creating a new form of life that can pursue complex goals. But leading theories of wellbeing in ethics suggest that the pursuit of goals is the hallmark of mattering morally.
“Soon we will create AIs who can be harmed. This will happen before our society acknowledges it. Part of the mission of a Catholic university should be to resist the casual creation of a new form of life never before seen on Earth, with completely alien psychology and potential moral status.”
Assoc. Prof. Goldstein is currently in the US on a fellowship with the Center for AI Safety, examining AI wellbeing and safety.