ChatGPT, Google Bard and Microsoft Bing Chat have ignited public awareness of artificial intelligence (AI) and machine learning (ML) and fuelled privacy concerns about these technologies.

However, these chatbots represent just one type of AI; many others form the basis of the technology solutions we interact with every day, from facial recognition systems to virtual assistants and computer-based decision-making tools.

It’s a critical time for policy makers: governments around the world are looking at the implications of AI for individuals’ privacy rights and even some fundamental human rights.

In this post, we’ll explore some of the privacy considerations for businesses using and developing AI and ML solutions today.

Privacy Risk in Generative AI

Generative AI models such as ChatGPT and Google Bard are trained on vast amounts of data, which enables them to generate new content that shares similar characteristics.

Since its release as a beta product last November, ChatGPT has become popular because it can produce human-like responses to queries. Organizations are using it to write emails, create content, assist with code development and more.

However, these tools present a risk to organizations, which, if they permit their use, have limited control over the information entered into them. From a privacy perspective, this can leave businesses exposed.

Many of these tools use input information to further develop and improve their AI language models. This means that company information — and potentially customer information — is being used for this purpose. There have already been reported data breaches linked to the use of ChatGPT. 

Entering the personal information of employees and customers is likely to fall foul of privacy legislation including the EU’s GDPR, Australia’s Privacy Act and California’s CPRA. These laws require that individuals are made aware of the purposes for which their data is collected and processed; it’s unlikely that this will extend to use in generative AI models.

Privacy Pitfalls in ML and Expert Systems 

Machine learning platforms are typically used to analyze and process large data sets to solve problems. Expert systems are AI-based platforms used to perform decision-making tasks. These types of systems are increasingly common within organizations. 

While popular forms of generative AI are external to most businesses, ML and expert systems are more often procured or developed for internal use.

These systems need to be trained on large datasets so they can ‘learn.’ Modern organizations hold enormous volumes of data, including personal information, that is valuable to this training process. However, as with generative AI, the use of this data is governed by privacy legislation, and in most cases it wasn’t collected for this purpose.

In January 2021, the FTC filed a complaint against Everalbum for using facial images from its users’ photos, without their consent, to train its facial recognition system, which it then sold as a service to enterprise customers.

Many ML algorithms don’t need access to personally identifiable information for training purposes. Organizations have a responsibility to ensure that this information is removed from training data sets if it isn’t necessary. 
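To make this concrete, here’s a minimal sketch of that kind of pre-processing step, assuming a pandas DataFrame and hypothetical column names; real pipelines would also need to review quasi-identifiers and free-text fields that this simple approach won’t catch.

```python
import pandas as pd

# Hypothetical direct identifiers; real datasets will use different names.
PII_COLUMNS = ["name", "email", "phone", "street_address", "date_of_birth"]

def strip_pii(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the dataset with direct identifiers removed.

    Note: this only drops direct identifiers. Quasi-identifiers (e.g. postcode
    combined with age) can still allow re-identification and need separate review.
    """
    present = [col for col in PII_COLUMNS if col in df.columns]
    return df.drop(columns=present)

# Toy example: only the non-identifying features reach the training step.
raw = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "email": ["a@example.com", "b@example.com"],
    "tenure_months": [14, 32],
    "monthly_spend": [52.0, 18.5],
    "churned": [0, 1],
})

training_data = strip_pii(raw)
print(training_data.columns.tolist())  # ['tenure_months', 'monthly_spend', 'churned']
```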

Expert systems, AI-based platforms used for decision-making in medical and financial settings (among others), have come under fire from legislators seeking to ensure that their decisions are fair and ethical. This means ensuring that the algorithms, and the data used to train them, don’t introduce bias that may result in discriminatory decision-making.
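As a simple illustration of the kind of check this implies, the sketch below computes a basic disparate impact ratio across groups in a set of decisions. The column names and the informal “four-fifths” threshold are illustrative assumptions; this is only one of many fairness metrics, not a complete bias assessment.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group approval rate to the highest.

    A value well below 1.0 (e.g. under 0.8, the informal 'four-fifths rule')
    suggests the decisions warrant closer review.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Toy example: loan decisions (1 = approved) broken down by a protected attribute.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
})

# Group A approves 2/3 of applicants, group B only 1/3, giving a ratio of 0.5.
print(round(disparate_impact_ratio(decisions, "group", "approved"), 2))  # 0.5
```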

AI and Evolving Privacy Laws

The use of personal information in AI and ML systems is, to an extent, addressed by existing privacy laws. However, policy makers and regulators recognize that there are gaps in current legislation that need to be addressed. 

In Europe, the proposed AI Act, which the European Parliament advanced earlier this year, aims to regulate the development and use of AI. As well as prohibiting certain discriminatory uses of AI, the act seeks to protect consumers from high-risk AI systems and to ensure transparency about how businesses use them.

In the US, the White House published its Blueprint for an AI Bill of Rights in October 2022, paving the way for state and federal policies governing these technologies. In June this year, the Australian Government also published a discussion paper on Safe and Responsible AI in Australia.

There’s little doubt that further legislation is on the way, and businesses need to be prepared. They should start by ensuring their use of AI and ML complies with current privacy legislation. Understanding how AI and ML are used within their business processes, and what data is used to train both in-house developed and procured systems, will be fundamental to complying with future regulation.

Organizations must be able to demonstrate that AI-based decision-making platforms are not biased and, where current legislation provides for it, uphold individuals’ right to have such decisions made by a person.

Want to keep up with all our blog posts? Subscribe to our newsletter!