The Italian Data Protection Authority has concluded its investigation into OpenAI, imposing a €15 million fine and mandating a six-month public information campaign to address privacy concerns related to ChatGPT, the generative AI chatbot.
The investigation, which began in March 2023, revealed that OpenAI failed to notify Italian authorities about a data breach that occurred during the same month. Additionally, the authority found that OpenAI processed users’ data for training ChatGPT without establishing a valid legal basis, breaching transparency obligations under the General Data Protection Regulation (GDPR).
Further violations included the absence of adequate age verification measures, exposing children under 13 to potentially inappropriate chatbot responses. The lack of transparency and insufficient mechanisms for protecting personal data prompted corrective and sanctioning actions by the Italian regulator.
As part of the measures, OpenAI must conduct an institutional communication campaign across radio, television, newspapers, and online platforms. This campaign, designed in collaboration with the Italian Data Protection Authority, aims to educate the public on how ChatGPT collects and processes personal data for AI training. It will also inform users about their rights to object to data processing and rectify or delete their information.
The size of the fine takes into account OpenAI’s cooperative stance during the investigation. Because the company established its European headquarters in Ireland while the proceedings were under way, the Italian authority has also referred the case to Ireland’s Data Protection Commission under the GDPR’s one-stop-shop mechanism, which will continue the inquiry into any ongoing violations.