Predicting the future of technology has never been trickier.
Generative artificial intelligence tools experienced "explosive growth" during 2023, according to McKinsey research, ballooning from an interesting concept to technology used by one in three companies.
Amazon's head of responsible AI, Diya Wynn, says she watched with amusement as analysts scrambled to revise predictions for the technology two months after making them.
"It's hard to look that far in advance and figure out what will happen because the technology is advancing at a pace that is much more rapid," she said.
But experts are confidently predicting AI and tools such as ChatGPT, Microsoft Copilot, Google Bard and Amazon Q will continue to soar in 2024, as more companies experiment, make discoveries and deploy AI as part of everyday operations.
They also warn that AI does not suit every application, and that companies must take care to meet growing government regulation and avoid becoming examples of how not to use the technology.
Generative AI emerged in the 1960s but evolved with the introduction of generative adversarial networks in 2014, benefited from the use of large language models in 2018, and began to create photorealistic images in 2022.
But 2023 became the year the technology went mainstream for many businesses in a way few could predict, says University of the Sunshine Coast computer science lecturer Erica Mealy.
"If you'd asked us two years ago if we'd be using GPT like we are now, no one would have thought that," she said.
"From 2022 to 2023, it's moved quite rapidly and we're seeing jobs where it's used as part of daily work."
Australian companies are increasingly embracing the use of generative AI, Amazon Web Services chief technologist Rada Stanic says, and using it in unexpected ways.
"We're starting to see Australian brands and companies leverage this technology for very useful business outcomes," she told AAP.
"Adore Beauty are using generative AI to summarise all of the reviews they have and get insights into specific aspects of the products they're selling, which is a quick way of creating value and doing things more productively."
Ms Stanic predicts 2024 will see many more generative AI experiments and more examples of ways it can improve productivity and customer experiences.
"We are going to start seeing more experimentation, transitioning into production-grade outcomes," she said.
"There is a view among companies of all sizes and across all industries that they can't afford not to experiment with AI and see what the technology can do for them."
But the use of artificial intelligence can also be controversial, with a growing number of examples where the technology has had harmful consequences.
China-based education firm iTutorGroup was successfully sued for using AI software to reject older applicants, and a legal brief prepared by a New York lawyer was found to be full of cases that did not exist after ChatGPT "hallucinated" examples.
Ms Wynn, who created Amazon Web Services' approach to the responsible use of AI, says the technology holds significant potential pitfalls for companies that do not adequately consider, test and research its use.
She cites the example of a US company working with parolees, which sought to use AI to extract more information from recorded conversations.
"They wanted to use voice analysis to gain intel about other crimes and other inmates but there was a question about whether they would respond if they found out an inmate might harm themselves," she said.
"A fundamental question was, is that in alignment with the company's policy when one of the core tenets was to value inmate wellbeing?"
Ms Wynn says all companies seeking to deploy generative AI should carefully consider their values, all customers who would use the technology, their processes and potential risks.
"It's not a one-size-fits-all tool that we should stick everywhere just because it's really cool and we're excited about it," she said.
"We actually have to evaluate each use case because sometimes we don't have adequate data and that might be a good reason not to use AI."
This year, Australian firms will also have to consider what regulations apply to the use of generative AI.
The Australian Signals Directorate on Thursday issued guidance for businesses using the technology, while the federal government outlined its approach in the Safe and Responsible AI interim report on January 17.
In the report, the government pledged to develop a voluntary AI Safety Standard and expert advisory group, and work on mandatory rules for testing and labelling AI.
Simon Lucey, Australian Institute for Machine Learning director at the University of Adelaide, says Australia's regulatory approach needs to address potential risks while also helping companies use the technology in positive ways.
"It has to be a balanced approach (that we take) because we don't want to puff out the AI flame - it offers some real opportunity in terms of productivity and making lives better for Australians," he said.
"The European Union is taking a much harder line, whereas the US and the UK are taking a softer line."
Professor Lucey says 2024 will see more companies develop ways to "detect when generative AI has been used" and to fact-check text and images to increase trust in what we see.
But he says Australian government and businesses should do more than simply use generative AI: they should also develop an Australian version of the technology, built on local, high-quality data with strong privacy standards.
"We potentially want to use AI for different parts of government and we won't be able to because we can't guarantee where those AI models have come from or what data sources they've been trained on," he said.
"The idea of sovereign AI is really important and something the government needs to think about a lot more."