A European regulator is investigating whether Google’s artificial intelligence model complies with privacy laws.
Ireland’s Data Protection Commission (DPC) is examining whether the tech giant performed a legally required data protection impact assessment before processing European Union residents’ personal data used in its Pathways Language Model 2 (PaLM 2), according to a Thursday (Sept. 12) press release.
The DPC is the regulatory body in Ireland charged with enforcing the EU’s General Data Protection Regulation (GDPR).
The impact assessment “is of crucial importance in ensuring that the fundamental rights and freedoms of individuals are adequately considered and protected when processing of personal data is likely to result in a high risk,” the release said.
Reached for comment by PYMNTS, a Google spokesman provided this statement: “We take seriously our obligations under the GDPR and will work constructively with the DPC to answer their questions.”
The news came the same day that Coimisiún na Meán, Ireland’s media regulator, sent notices to several tech companies — including Meta, TikTok and Google-owned YouTube — to see if they were complying with the EU’s Digital Services Act (DSA).
“Of the complaints we have from people in Ireland and across Europe about online platforms, 1 in 3 are about problems when reporting illegal content online,” said John Evans, the regulator’s digital services commissioner, in a press release. “We are intervening now to ensure that platforms follow the rules so that people can effectively exercise their rights under the DSA.”
Meanwhile, PYMNTS examined the possible security threats posed by AI models, noting that the increasing sophistication of cybercriminals and the vast amounts of data businesses generate and store to train their in-house AI models have created a “perfect storm” for data breaches.
“AI is vulnerable to hackers due to its complexity and the vast amounts of data it can process,” Jon Clay, vice president of threat intelligence at cybersecurity company Trend Micro, told PYMNTS in April. “AI is software, and as such, vulnerabilities are likely to exist which can be exploited by adversaries.”