Italy's data protection watchdog, the Garante, has fined OpenAI, the U.S. artificial intelligence company, 15 million euros following an investigation into its popular chatbot ChatGPT. The authority found that OpenAI processed users' personal data to train ChatGPT without a proper legal basis, violating transparency principles and failing to provide users with the required information.
OpenAI called the decision 'disproportionate' and announced that it intends to appeal. The company pointed to its cooperation with authorities, including reinstating ChatGPT in Italy after the Garante ordered it suspended in 2023. Despite the regulator's acknowledgment of its privacy protection measures, OpenAI criticized the fine as nearly 20 times the revenue it generated in Italy during the relevant period.
Even so, OpenAI reiterated its commitment to working with privacy regulators globally to develop AI technologies that uphold privacy rights. The investigation also found that OpenAI lacked an adequate age-verification system to prevent users under 13 from being exposed to inappropriate AI-generated content.
In response to the findings, the Italian authority has ordered OpenAI to conduct a six-month awareness campaign across Italian media to educate the public about ChatGPT, with a particular focus on its data collection practices. The growing popularity of generative artificial intelligence systems like ChatGPT has attracted regulatory scrutiny on both sides of the Atlantic.
Regulators in the U.S. and Europe have been closely monitoring companies like OpenAI that are at the forefront of the AI industry. Governments worldwide are also formulating regulations to mitigate risks associated with AI technologies, with the European Union's AI Act leading the way as a comprehensive framework for governing artificial intelligence.