
Does ChatGPT compromise privacy?


In spring 2023, OpenAI CEO Sam Altman told the US Congress that one of his worst fears was that artificial intelligence (AI) technology would “cause significant harm to the world”.

It was a stark message about the potential power, and possible dangers, of AI. It was also a wake-up call for governments to sit up, take notice and learn lessons from the failure to regulate social media when it became widespread over a decade ago.

The call to regulate AI is not coming only from those who develop the software. Governments, too, are concerned about the technology’s rapid growth, its role in spreading misinformation, the threat it poses to consumer privacy and the mass collection of personal data.

In a recent article, the cybersecurity company and virtual private network provider ExpressVPN explored how deepfakes, created using an AI process known as deep learning, are becoming increasingly hard to detect.

While the recent AI-generated image of the pope in a puffer jacket was an amusing viral moment for many of us, such false imagery clearly poses a serious threat, providing the perfect breeding ground for disinformation.

Data glutton

Like many AI platforms, OpenAI’s text-generation tool ChatGPT works by hoovering up vast amounts of data to train its algorithms for optimum performance. As it scoops up information, personal data comes along with it, whether transparently via social media pages, blogs and websites, or more covertly through an internet protocol (IP) address.

In America, privacy laws are complex and vary from state to state. In Europe, citizens are protected by the General Data Protection Regulation (GDPR). Introduced in 2018, these protections shield users from third parties collecting and using their data simply because it exists online.

GDPR states (among other requirements) that personal data must be kept to a minimum, that individuals have the right to access and request the erasure of their data, and that data must be stored securely. Accountability is also baked into GDPR: organisations must keep records of how the data they hold is stored, processed and shared.

This level of data protection clearly poses a challenge for ChatGPT, which relies on access to as much information as possible. In light of this, Italy temporarily banned ChatGPT in March 2023, stating that the text-generation tool contravened the country’s privacy laws.

According to the BBC, the Italian data protection watchdog, Garante, said there was no legal basis for “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.

Furthermore, it stated that the app “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness”. Italy has since revoked the ban; according to a press release from Garante on 24 April 2023, OpenAI agreed to “enhanced transparency and rights for European users and non-users”.

A global threat

While copyright infringement involving ChatGPT-generated content is being openly discussed, the platform’s privacy issues don’t appear to receive the same attention, despite the risks they pose. There have already been some shocking stories of misinformation relayed as truth by ChatGPT.

Possibly the most famous example so far is that of Jonathan Turley, an American law professor who was wrongly accused of sexual harassment after ChatGPT cited a non-existent Washington Post report.

Threat actors (malicious individuals or groups operating in cyberspace) could cause significant harm by leaking data collected by ChatGPT; the trove of personal data the platform holds is a cyber criminal’s fantasy.

Another privacy risk posed by the development of AI is the generation of deepfakes and the way these could tap into what is known as the Mandela Effect.

The phenomenon is named after the widespread false belief that Nelson Mandela died in the 1980s when, in fact, he lived until 2013. In essence, the Mandela Effect occurs when large groups of people come to believe the same erroneous ‘facts’.

While AI is certainly not responsible for the Mandela Effect, which is thought to be caused by a glitch in the human psyche, this frailty is open to exploitation through AI and the malicious spread of disinformation about individuals, political parties or even entire nations.

Brave new world

OpenAI co-founders Greg Brockman and Ilya Sutskever, along with the company’s CEO Sam Altman, continue to call for international regulation of AI, as well as a coordinated effort to create ‘industry standards’.

In an open letter posted on the OpenAI website, they wrote: “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”



from... sciencefocus.com