
Italy bans OpenAI’s ChatGPT over privacy fears


Italy’s privacy regulator has temporarily banned OpenAI’s ChatGPT, the smash-hit conversational A.I. that has over recent months impressed and concerned people in equal measure.

The Italian Data Protection Authority said Friday that ChatGPT was violating the European Union’s strict General Data Protection Regulation (GDPR) in multiple ways, ranging from the fact that it sometimes spews out incorrect information about people, to OpenAI’s failure to tell people what it’s doing with their personal data.

Until it can satisfy the privacy regulator that it has brought its practices into compliance with the GDPR, OpenAI has to stop processing the personal data of people in Italy, which means the authority wants it to stop serving users there. Under European law, personal data means any data that can be connected with an identifiable individual.

OpenAI responded quickly, saying in a Friday statement that it had disabled ChatGPT for users in Italy, to comply with the regulator’s wishes. It had 20 days to comply with the ban, or face fines that could theoretically go up to €20 million ($22 million) or 4% of global revenue, whichever is higher. OpenAI’s revenues are not publicly disclosed. According to OpenAI documents seen by Fortune, the company was projected to have less than $30 million in revenues in 2022 but was forecasting revenues would grow rapidly to exceed $1 billion by 2024.

ChatGPT is a conversational interface that sits on top of an A.I. system known as a large language model. These models are trained on vast amounts of text culled from the internet and from private data sources. It is not entirely clear whether the Italian privacy watchdog also wants ChatGPT to stop returning information relating to Italian individuals—this may also technically qualify as the processing of those people’s personal data.

“We are committed to protecting people’s privacy and we believe we comply with GDPR and other privacy laws,” an OpenAI spokesperson said. “We actively work to reduce personal data in training our A.I. systems like ChatGPT because we want our A.I. to learn about the world, not about private individuals.”

Growing sense of panic

It is unusual for a European privacy regulator to institute a temporary ban at the same time as launching an investigation into the target of the ban. The urgency of the move reflects a sense of panic, particularly apparent over the past couple of days, regarding the potential dangers of today’s unprecedentedly powerful A.I. systems.

On Wednesday, a host of technologists and other experts—including Elon Musk and Apple cofounder Steve Wozniak—published an open letter calling on OpenAI and its peers to pause the development of next-generation A.I. models for at least half a year, so that industry and governments can draw up governance structures for systems like OpenAI’s GPT-4 and future, more powerful ones.

Then on Thursday, civil society groups in the U.S. and Europe called on regulators to force OpenAI to address some of the problems with ChatGPT. In the U.S., the Center for AI and Digital Policy (CAIDP) filed a complaint with the Federal Trade Commission (FTC), while in Brussels the European Consumer Organisation (BEUC) called on EU-level and national regulators to quickly launch investigations into ChatGPT.

Legal experts say EU-level action is unlikely while the bloc’s grand institutions continue to negotiate the wording of an A.I. Act that the European Commission proposed two years ago—lawmakers are currently scrambling to bring that proposal up to date so it can adequately address recently unveiled services like ChatGPT. However, the BEUC was also directing its call at national data protection watchdogs, among others, and it seems Rome has been quick to deliver.

“With the Italian data protection authority springing into action, we now need to see an investigation on these issues at EU level, but product safety and consumer protection authorities should also become active,” said BEUC deputy director general Ursula Pachl in an emailed statement.

Incorrect information

In a Friday statement, the Italian authority said OpenAI was breaking the GDPR by failing to give information to ChatGPT’s users—or to people whose personal data has been used to train the large language model—about the processing of their data. OpenAI’s failure to identify a legal basis for its processing of Italians’ personal data also allegedly falls foul of the GDPR; this is a serious issue that is currently plaguing many American tech companies.

Citing a relatively obscure provision of the GDPR, the Italian watchdog also said it is concerned that “the information provided by ChatGPT does not always correspond to the real data, thus determining an inaccurate processing of personal data.” This would be a novel legal hurdle for generative A.I. models, which regularly “hallucinate” or make up information.

The regulator also pointed out that OpenAI doesn’t have any system in place to verify that its users are over the age of 13, even though its terms of use set that age limit. This, it said, “exposes minors to absolutely unsuitable answers compared to their degree of development and self-awareness.”

OpenAI’s spokesperson said the company believes A.I. regulation is necessary, and it looks forward to working with the Italian regulator and “educating them on how our systems are built and used.”

“Our users in Italy have told us they find ChatGPT helpful for everyday tasks and we look forward to making it available again soon,” the spokesperson said.

Fortune has also sought comment from Microsoft, which recently integrated ChatGPT into its Azure OpenAI service.

This article was updated on April 1 to reflect OpenAI and BEUC’s statements.

