Бакалавриат
2024/2025




Безопасность систем на базе LLM
Статус:
Курс по выбору (Прикладная математика и информатика)
Когда читается:
4-й курс, 3 модуль
Онлайн-часы:
20
Охват аудитории:
для своего кампуса
Язык:
английский
Course Syllabus
Abstract
LLMs are becoming more and more powerful, reliable and cheap, and therefore are used to solve problems in more and more applications. At the same time, LLMs, by virtue of their peculiarities, introduce new classes of vulnerabilities that require appropriate protection. In this short hands-on course, we will look at how (and why) jailbreaks and seed injections work, how to detect and prevent them, and how to use standard frameworks to assess the security of LLM systems.
Learning Objectives
- To understand standard methods of protecting LLM systems
- To apply software packages to protect LLM-applications
- To understand information security frameworks aimed at securing LLM-based systems
Expected Learning Outcomes
- To know the main vulnerabilities and security issues of LLM-based systems
- To understand attack techniques and methods, such as jailbreaks and inoculum injections
- To understand the conceptual causes of LLM-specific security issues
- To apply software tools to find vulnerabilities in deployed LLMs
Course Contents
- Security of large language models: attack techniques
- Security of large language models: defense techniques
- Integrated security of LLM systems
Assessment Elements
- Homework 1Given after Lecture 1. This involves applying the techniques from the lecture to attack a local LLM in a Jupyter notebook, searching for information on attacks in open sources, using automated vulnerability identification tools, and writing a report on the results.
- Homework 2Issued after Lecture 2. It implies implementation and application in Jupyter notebook of techniques from the lecture to protect local LLM service and evaluation of their effectiveness.
- TestGiven after Lecture 3. A quiz designed to review and consolidate the knowledge from Lectures 1-3 and the frameworks presented in Lecture 3.
Bibliography
Recommended Core Bibliography
- Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., & Catanzaro, B. (2019). Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism.
Recommended Additional Bibliography
- Прагматичный ИИ : машинное обучение и облачные технологии, Гифт, Н., 2019