Britain is cracking down on AI chatbots that scrape people’s data

Britain is cracking down on AI companies that collect data without authorization, amid concerns that chatbots are harvesting information on their users without permission.


Britain’s national information watchdog has warned AI firms that they might face fines if they fail to obtain consent before collecting people’s personal data.

According to the information commissioner, organizations that use generative AI technology are subject to data protection rules, which means they must obtain consent or demonstrate a legitimate interest in collecting personal information.

Regulators are said to be increasingly concerned about the privacy implications of the generative AI boom, pioneered by startups such as OpenAI, the maker of the popular ChatGPT model.

The concern covers not just the personal data that individuals hand over when using large language models such as ChatGPT, but also the companies’ scraping of massive amounts of data from across the internet, some of which is personal.

Companies such as Amazon, JPMorgan, and Accenture have barred their employees from using the application over concerns about how the information they enter might be used.

Under data protection rules, the information commissioner can issue letters compelling companies to explain their operations, issue enforcement notices requiring firms to cease certain conduct, or impose fines of up to £17 million.

“We will act where organizations are not following the law and considering the impact on individuals,” said a spokeswoman for Britain’s information commissioner, John Edwards.

Ofcom also intends to impose stricter controls on AI companies in order to prevent the technology from being abused.

The organization, which serves as the new online safety regulator for social media and technology companies, intends to require risk assessments for any new AI.

The crackdown comes after Rishi Sunak met with executives from three of the world’s largest AI firms, OpenAI, Google-backed Anthropic, and DeepMind, last week, amid growing concerns about the technology’s impact on society.

The Prime Minister said the technology needed “guardrails” and that he had discussed the potential for deception as well as broader “existential” threats.

The competition watchdog has already initiated an investigation into the AI sector, including an examination of the technology’s safety consequences.

In March, Italy’s data protection authority temporarily banned ChatGPT, saying there was “no legal basis that justifies the massive collection and storage of personal data.”

OpenAI responded by rolling out measures across Europe that let anyone opt out of data processing via an online form. It also updated its privacy policies and gave users a right to have erroneous information about them erased, comparable to the right to be forgotten in data protection legislation.

“There is a challenge with consent as a basis for processing data at the scale of ChatGPT,” said Andrew Strait, associate director of the Ada Lovelace Institute. “It is quite difficult to explain to the typical person what is happening with their data.”

“Does it vanish? Is it collecting your data? Is it being reused? Consent works best when you understand exactly what you are consenting to.”

“Organisations developing or using generative AI should consider their data protection obligations from the start,” said a representative for the Information Commissioner.

“Data protection law continues to apply even when personal information is processed from publicly accessible sources. If you are developing or deploying generative AI that handles personal data, you must do so lawfully. Consent or legitimate interests are examples of a lawful basis.”

According to Lorna Woods, an internet law lecturer at Essex University, “Data protection rules apply whether or not you’ve made something public.”

