German Federal Office for Information Security (BSI)
AI Security Concerns in a Nutshell is a guideline by the German Federal Office for Information Security (BSI) that serves as an introductory resource for developers, offering insights into significant security concerns related to AI systems and potential defensive measures. While not exhaustive, it provides a foundational understanding of the subject. The guideline explores three main categories of attacks. The first category is evasion attacks, which occur during the inference phase of a machine learning model with the aim of causing misclassification. The second category consists of information extraction attacks, also known as privacy or reconstruction attacks, which seek to reconstruct the model itself or information about its training data. The last category consists of poisoning and backdoor attacks. Poisoning attacks aim to cause malfunction or degradation of machine learning models by manipulating their training datasets. Backdoor attacks, a subset of poisoning attacks, aim to elicit predetermined responses to specific triggers while maintaining the system's performance under normal conditions. For each attack category, the guideline provides an overview of initial defence strategies. The guideline was published in May 2023.
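To make the evasion category concrete, the sketch below shows a minimal fast-gradient-sign-method (FGSM) style attack against a toy logistic-regression classifier. The classifier weights, inputs, and epsilon budget are invented for illustration; FGSM is a standard evasion technique from the research literature, not a method prescribed by the BSI guideline.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier with assumed (hypothetical) weights and bias.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    # Probability that x belongs to class 1.
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps=0.3):
    """Perturb x within an L-infinity budget eps so as to increase
    the classifier's loss for the true label (an evasion attack)."""
    p = predict(x)
    # Gradient of the binary cross-entropy loss w.r.t. the input
    # for a linear model: (p - y) * w.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.2, -0.1, 0.1])
y = 1  # true label
x_adv = fgsm(x, y)
# The clean input is classified correctly; the perturbed input,
# which differs in each coordinate by at most eps, is misclassified.
print(predict(x), predict(x_adv))
```

With these toy numbers the clean input scores above 0.5 (class 1) while the adversarial input scores below 0.5, illustrating how a small, bounded input perturbation at inference time causes misclassification.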