The AI Privacy Risks & Mitigations Large Language Models (LLMs) report puts forward a comprehensive risk management methodology for LLM systems, along with practical mitigation measures for common privacy risks.
In addition, the report provides example use cases showing how the risk management framework is applied in real-world scenarios:
- first use case: a virtual assistant (chatbot) for customer queries,
- second use case: an LLM system for monitoring and supporting student progress, and
- third use case: an AI assistant for travel and schedule management.
Large Language Models (LLMs) represent a transformative advancement in artificial intelligence. They are deep learning models, trained on extensive datasets, designed to process and generate human-like language. Their applications are diverse, ranging from text generation and summarisation to coding assistance, sentiment analysis, and more.
The EDPB launched this project in the context of the Support Pool of Experts programme at the request of the Croatian Data Protection Authority (DPA).
The project was completed by the external expert Isabel Barbera in February 2023.
Objective
The AI Privacy Risks & Mitigations Large Language Models (LLMs) report puts forward a comprehensive risk management methodology to systematically identify, assess, and mitigate privacy and data protection risks.
The report provides Data Protection Authorities (DPAs) with a comprehensive understanding of how LLM systems function and state-of-the-art information on the risks associated with them.