Al Jazeera
Technology
Erin Hale

Laws, school bans and Sam Altman drama: the big developments in AI in 2023

2023 was a milestone year for AI [File: Aly Song/Reuters]

The artificial intelligence (AI) industry began 2023 with a bang as schools and universities struggled with students using OpenAI’s ChatGPT to help them with homework and essay writing.

Less than a week into the year, New York City Public Schools banned ChatGPT – released weeks earlier to enormous fanfare – a move that would set the stage for much of the discussion around generative AI in 2023.

As the buzz grew around Microsoft-backed ChatGPT and rivals like Google’s Bard AI, Baidu’s Ernie Chatbot and Meta’s LLaMA, so did questions about how to handle a powerful new technology that had become accessible to the public overnight.

While AI-generated images, music, videos and computer code created by platforms such as Stability AI’s Stable Diffusion or OpenAI’s DALL-E opened up exciting new possibilities, they also fuelled concerns about misinformation, targeted harassment and copyright infringement.

In March, more than 1,000 signatories, including Apple co-founder Steve Wozniak and billionaire tech entrepreneur Elon Musk, signed an open letter calling for a pause in the development of more advanced AI in light of its “profound risks to society and humanity”.

While a pause did not happen, governments and regulatory authorities began rolling out new laws and regulations to set guardrails on the development and use of AI.

While many issues around AI remain unresolved heading into the new year, 2023 is likely to be remembered as a major milestone in the history of the field.

Drama at OpenAI

After ChatGPT amassed more than 100 million users in 2023, developer OpenAI returned to the headlines in November when its board of directors abruptly fired CEO Sam Altman – alleging that he was not “consistently candid in his communications with the board”.

Although the Silicon Valley startup did not elaborate on the reasons for Altman’s firing, his removal was widely attributed to an ideological struggle within the company between safety and commercial concerns.

Altman’s removal set off five days of very public drama, during which OpenAI staff threatened to quit en masse and Altman was briefly hired by Microsoft, before he was reinstated and the board replaced.

While OpenAI has tried to move on from the drama, the questions raised during the upheaval remain pertinent for the industry at large – including how to weigh the drive for profit and new product launches against fears that AI could grow too powerful too quickly, or fall into the wrong hands.

Sam Altman was briefly fired from OpenAI [File: Lucy Nicholson/Reuters]

In a survey of 305 developers, policymakers and academics carried out by the Pew Research Center in July, 79 percent of respondents said they were either more concerned than excited about the future of AI, or equally concerned and excited.

Despite AI’s potential to transform fields from medicine to education and mass communications, respondents expressed concern about risks such as mass surveillance, government and police harassment, job displacement and social isolation.

Sean McGregor, the founder of the Responsible AI Collaborative, said that 2023 showcased the hopes and fears that exist around generative AI, as well as deep philosophical divisions within the sector.

“Most hopeful is the light now shining on societal decisions undertaken by technologists, though it is concerning that many of my peers in the tech sector seem to regard such attention negatively,” McGregor told Al Jazeera, adding that AI should be shaped by the “needs of the people most impacted”.

“I still feel largely positive, but it will be a challenging few decades as we come to realise the discourse about AI safety is a fancy technological version of age-old societal challenges,” he said.

Legislating the future

In December, European Union policymakers agreed on sweeping legislation to regulate the future of AI, capping a year of efforts by national governments and international bodies like the United Nations and the G7.

Key concerns include the sources of information used to train AI algorithms, much of which is scraped from the internet without consideration of privacy, bias, accuracy or copyright.

The EU’s draft legislation requires developers to disclose their training data and demonstrate compliance with the bloc’s laws, imposes restrictions on certain types of use, and establishes a pathway for user complaints.

Similar legislative efforts are under way in the US, where President Joe Biden in October issued a sweeping executive order on AI standards, and the UK, which in November hosted the AI Safety Summit involving 27 countries and industry stakeholders.

China has also taken steps to regulate the future of AI, releasing interim rules for developers that require them to submit to a “security assessment” before releasing products to the public.

Guidelines also restrict AI training data and ban content seen to be “advocating for terrorism”, “undermining social stability”, “overthrowing the socialist system”, or “damaging the country’s image”.

Globally, 2023 also saw the first interim international agreement on AI safety, signed by 20 countries, including the United States, the United Kingdom, Germany, Italy, Poland, Estonia, the Czech Republic, Singapore, Nigeria, Israel and Chile.

AI and the future of work

Questions about the future of AI are also rampant in the private sector, where its use has already led to class-action lawsuits in the US from writers, artists and news outlets alleging copyright infringement.

Fears about AI replacing jobs were a driving factor behind months-long strikes in Hollywood by the Screen Actors Guild and Writers Guild of America.

In March, Goldman Sachs predicted that generative AI could replace 300 million jobs through automation and impact two-thirds of current jobs in Europe and the US in at least some way – making work more productive but also more automated.

Others have sought to temper the more catastrophic predictions.

In August, the International Labour Organization, the UN’s labour agency, said that generative AI is more likely to augment most jobs than replace them, with clerical work listed as the occupation most at risk.

Year of the ‘deepfake’?

The year 2024 will be a major test for generative AI, as new apps come to market and new legislation takes effect against a backdrop of global political upheaval.

Over the next 12 months, more than two billion people are due to vote in elections across a record 40 countries, including geopolitical hotspots like the US, India, Indonesia, Pakistan, Venezuela, South Sudan and Taiwan.

While online misinformation campaigns are already a regular part of many election cycles, AI-generated content is expected to make matters worse as false information becomes increasingly difficult to distinguish from the real thing and easier to replicate at scale.

AI-generated content, including “deepfake” images, has already been used to stir up anger and confusion in conflict zones such as Ukraine and Gaza, and has been featured in hotly contested electoral races like the US presidential election.

Meta in November told advertisers that it will bar political ads on Facebook and Instagram that are made with generative AI, while YouTube announced that it will require creators to label realistic-looking AI-generated content.
