GDPR and the development of virtual agents - a good thing or a necessary evil?

Rok Naraks, Head of AI Development, 2Mobile • Mar. 18, 2024

Since the GDPR came into force in 2018, many of us who work in marketing and deal with data processing in one way or another have grown impatient with the legal formalities surrounding the capture and processing of personal data. The fact remains, however, that this is a crucial aspect of digital marketing and the foundation of a good user experience and, of course, legal compliance.


In a world where chatbots have become an important part of business strategy, it is essential that everyone who processes EU citizens' personal data ensures a high level of data protection and transparency. The GDPR explicitly requires both, which is a particularly important consideration in the development of chatbots, as they frequently process sensitive information.


Integrating privacy mechanisms directly into a bot's operation ensures that privacy protection is built into the technology from the start. In this context, stand-alone chatbots, which use their own databases and conversational algorithms, add an extra layer of security by not sharing personal data with third parties.


Chatbots process a wide range of data


Conversation bots are classified as tools for collecting and processing data, which makes them subject to the GDPR. Consequently, they must comply with key GDPR principles such as obtaining user consent, ensuring the transparency of all processing operations, providing tools to respect user rights, limiting data retention and implementing data security. Since conversation bots are often used to generate leads and collect information (e.g., names, email addresses, phone numbers), the GDPR requires companies to clarify how this data will be used.
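To make this concrete, here is a minimal sketch of what consent-gated lead collection with a retention limit can look like. The names (Lead, LeadStore) and the 365-day retention period are illustrative assumptions, not part of any specific platform:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative retention period; the real value is a policy decision.
RETENTION = timedelta(days=365)

@dataclass
class Lead:
    email: str
    phone: str
    consented: bool  # explicit, informed consent captured in the chat flow
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class LeadStore:
    """In-memory stand-in for wherever a bot persists its leads."""

    def __init__(self) -> None:
        self._leads: list[Lead] = []

    def save(self, lead: Lead) -> bool:
        # Lawfulness: nothing is stored without a valid legal basis (here, consent).
        if not lead.consented:
            return False
        self._leads.append(lead)
        return True

    def purge_expired(self) -> None:
        # Storage limitation: drop records older than the retention period.
        cutoff = datetime.now(timezone.utc) - RETENTION
        self._leads = [lead for lead in self._leads if lead.collected_at >= cutoff]
```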

Informed consent and recent regulations such as the DMA (Digital Markets Act) play a key role in the UX of various digital platforms and digital media, as we have recently seen with Facebook and other online tools, where users now have the option to unbundle their Facebook, Instagram, Threads and Facebook Messenger accounts. In practice, this means that users need to be clearly informed, before they begin using a platform, about how their data is collected, used and stored, and they need to be able to revoke their consent at any time. This requires transparency from the owner of the tool or technological solution, not only in terms of the data itself, but also at the level of the algorithmic processes, which often remain hidden from the user's eyes.
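The revocation requirement can be sketched in the same spirit: a simple consent registry in which withdrawing consent immediately stops processing and triggers erasure of the data that was stored on that basis. Again, ConsentRegistry and its callback are hypothetical names used purely for illustration:

```python
from typing import Callable

class ConsentRegistry:
    """Tracks per-user consent state; all names here are illustrative."""

    def __init__(self, on_revoke: Callable[[str], None]) -> None:
        self._consent: dict[str, bool] = {}
        # Callback that erases the user's stored data, e.g. a LeadStore cleanup.
        self._on_revoke = on_revoke

    def grant(self, user_id: str) -> None:
        self._consent[user_id] = True

    def revoke(self, user_id: str) -> None:
        # Withdrawal must be as easy as giving consent (GDPR Art. 7(3)),
        # and processing that relied on that consent must stop.
        self._consent[user_id] = False
        self._on_revoke(user_id)

    def has_consent(self, user_id: str) -> bool:
        return self._consent.get(user_id, False)
```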


Ensuring GDPR compliance when developing chatbots in the European Union


In the development of any data-driven tool, strict compliance with privacy regulations such as the GDPR (General Data Protection Regulation), the DMA (Digital Markets Act) and the forthcoming AI Act is paramount. With these acts, the European Union mandates that all entities processing the personal data of EU citizens maintain a high level of data protection and transparency in their operations. This is particularly important for the development of conversational bots, which often rely on the processing of sensitive information.


At 2Mobile, we recognize that, as developers of conversational bots, complying with the GDPR and other relevant acts is not merely a legal obligation but also a key element for building trust with our customers. Therefore, when designing virtual agents, we are careful to ensure that our rights and those of the client are clearly and contractually defined, that all collected data is handled with the utmost care, and that users' rights concerning their data are clearly explained.


Virtual agents powered by Boost.ai technologies provide autonomy and security


Virtual agents powered by the technology of Boost.ai, our Norwegian partner and a major challenger in the conversational AI segment, offer strong guarantees from a data security perspective. They operate on autonomous databases that can be deployed either in a private cloud (AWS) located within the EU or on a company's local server. This setup provides an extra layer of security, as personal data is not shared with third parties and is owned and managed solely by the company. Crucially, these systems use advanced encryption and anonymization techniques to further protect users' data.
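As a generic illustration of one such technique (and not a description of Boost.ai's specific implementation), direct identifiers can be pseudonymized with a keyed hash before they are ever stored or logged:

```python
import hashlib
import hmac
import os

# The pseudonymization key would live in a key-management system in production;
# reading it from the environment here is purely illustrative.
PSEUDO_KEY = os.environ.get("PSEUDO_KEY", "change-me").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a stable token.

    A keyed hash (HMAC) keeps the mapping one-way for anyone without the key,
    while still letting the system correlate records belonging to the same user.
    """
    return hmac.new(PSEUDO_KEY, identifier.strip().lower().encode(), hashlib.sha256).hexdigest()

# Store pseudonymize("jane.doe@example.com") rather than the raw address.
```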

 

Integration with ChatGPT and use of large language models



ChatGPT, developed by OpenAI, is an example of an advanced conversational AI model based on a large language model (LLM), one that has recently been increasingly tuned (trained) to respect the privacy and security of personal data. Conversational bot developers also use this cutting-edge language model to build and train autonomous conversational bots, and it is important that we follow the guidelines set out by the GDPR here as well.


In practice, this means that enterprise conversational bots built on proprietary databases and algorithms (e.g. the Boost.ai platform) may not share personal data with an LLM without the user's explicit consent. The LLM may only be used to generate responses that do not contain information of a private nature. Once trained, an autonomous conversational bot can answer around 90% of requests without calling on the LLM, and this share improves over time as the bot continuously learns and its ability to answer questions autonomously increases. However, if the user opts in to the LLM functionality for additional, more comprehensive answers, they assume control of what data is shared and take responsibility for their personal data.
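Such a consent gate can be sketched very simply: a message only reaches the external model after the user has opted in, and obvious personal data is redacted first. The patterns and function names below are illustrative assumptions, not part of the Boost.ai or OpenAI APIs:

```python
import re
from typing import Callable, Optional

# Rough patterns for obvious identifiers; a production system would use a
# dedicated PII-detection step. Everything here is illustrative.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s/().-]{7,}\d")

def redact(text: str) -> str:
    """Strip obvious personal data before a message may leave the platform."""
    text = EMAIL.sub("[email]", text)
    return PHONE.sub("[phone]", text)

def forward_to_llm(message: str, user_opted_in: bool,
                   llm_call: Callable[[str], str]) -> Optional[str]:
    # Without explicit opt-in, the message never reaches the external LLM.
    if not user_opted_in:
        return None
    return llm_call(redact(message))
```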

 


After GDPR comes the Artificial Intelligence Act


Amidst the ongoing digital transformation, where the GDPR already requires a high level of privacy and transparency when processing personal data, the EU is reinforcing its regulatory framework with a new legislative proposal: the Artificial Intelligence (AI) Act. The act introduces a classification of AI systems according to the risk they pose to the health, safety or fundamental rights of individuals and sets out different requirements for their development and use.


The integration of the requirements of the AI Act into the development and management of conversational bots in the EU not only strengthens the GDPR's privacy and data protection requirements, but also introduces strict frameworks to ensure the safe, responsible and transparent use of AI technologies. Moving forward, developers and businesses will need to navigate these two regulatory areas even more adeptly to ensure full compliance of their products and services, while remaining competitive and innovative in a rapidly evolving technological landscape.


Most importantly, in this context, conversational bots that are fully open and rely directly on large language models are considerably less secure and trustworthy than conversational bots based on autonomous technology solutions. Developing GDPR-compliant conversational bots is therefore not only a legal obligation, but also an opportunity to build trust and deliver a better user experience.

 
