A European Union data protection taskforce, which has spent over a year evaluating the application of the EU’s data protection rules to OpenAI’s ChatGPT, released its preliminary conclusions on Friday. The key takeaway: privacy enforcers are still undecided on critical legal issues, such as the lawfulness and fairness of OpenAI’s data processing activities.
The stakes are high: confirmed violations of the EU’s privacy regulations can lead to penalties of up to 4% of global annual turnover (or €20 million, whichever is higher). Regulatory authorities also possess the power to halt non-compliant data processing. Consequently, OpenAI faces significant regulatory risk in the region while dedicated AI laws remain in development and years away from full implementation.
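As a rough illustration of how that penalty ceiling works in practice, the sketch below computes the GDPR Article 83(5) maximum fine for a given turnover; the turnover figure used in the example is purely hypothetical.

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR Article 83(5) administrative fine:
    the greater of EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in worldwide annual turnover:
print(gdpr_max_fine(2_000_000_000))  # 80000000.0 (EUR 80 million cap)

# For smaller firms, the EUR 20 million floor dominates:
print(gdpr_max_fine(100_000_000))  # 20000000.0
```

Actual fines are set case by case below this ceiling, weighing factors such as the gravity and duration of the infringement.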
However, without clear guidance from EU data protection authorities on how existing laws apply to ChatGPT, OpenAI is likely to continue its operations as usual. This is despite an increasing number of complaints alleging that its technology violates various aspects of the General Data Protection Regulation (GDPR).
For instance, Poland’s data protection authority initiated an investigation following a complaint that ChatGPT fabricated information about an individual and refused to correct the errors. A similar complaint has recently been filed in Austria.
GDPR Concerns and Enforcement
Under GDPR, any collection and processing of personal data must comply with strict regulations. Large language models (LLMs) like OpenAI’s GPT, which powers ChatGPT, process vast amounts of personal data scraped from the internet, including social media posts. The GDPR empowers data protection authorities (DPAs) to order non-compliant data processing to cease, a powerful tool for regulating AI operations in the region.
Last year, Italy’s privacy watchdog temporarily banned ChatGPT from processing local user data, forcing OpenAI to halt its service in the country. The service resumed only after OpenAI made adjustments in response to demands from the Italian DPA. However, the investigation, including questions about the legal basis for OpenAI’s data processing activities, continues, leaving ChatGPT under ongoing legal scrutiny in the EU.
Legal Basis for Data Processing
Under the GDPR, entities processing personal data must have a legal basis for their activities. The regulation outlines six possible legal bases, though most are not applicable to OpenAI’s operations. The Italian DPA has already directed OpenAI that it cannot claim contractual necessity for processing personal data for AI training. This left OpenAI with two potential legal bases: obtaining user consent or claiming legitimate interests (LI), which requires a balancing test and allows users to object to data processing.
Since the intervention by Italian authorities, OpenAI appears to have shifted to claiming LI for processing personal data used for model training. However, a draft decision in January from the Italian DPA found OpenAI had violated GDPR, though details of these findings are not yet public, and a final decision is pending.
Addressing Lawfulness Issues
The taskforce’s report addresses the complex issue of legal compliance, emphasizing that ChatGPT needs a valid legal basis for all stages of personal data processing. These stages include the collection of training data, pre-processing activities like filtering, the training itself, and the handling of prompts and outputs from ChatGPT.
The initial stages of data processing, such as web scraping, pose unique risks to individuals’ fundamental rights. The report highlights the extensive volume of personal data ingested, which can cover sensitive areas of people’s lives. This includes special category data, such as health information, sexuality, and political views, which are subject to stricter processing conditions under the GDPR.
The taskforce underscores that the mere fact that data is publicly accessible does not mean it has been “manifestly made public” by the data subject, the condition that would permit processing of special category data under the GDPR. This distinction is crucial for ensuring that even publicly available data is handled in compliance with privacy regulations.
The preliminary conclusions from the EU’s ChatGPT taskforce highlight ongoing uncertainties about the legal framework governing AI chatbots like ChatGPT. As regulatory authorities continue their evaluations, OpenAI must navigate a complex legal landscape to ensure compliance and mitigate regulatory risk. The report points to both the importance of robust data protection measures and the need for clear legal guidelines in the rapidly evolving field of AI.