Basics of AI risk management
The European Union’s AI Regulation has come into force and requires risk-based AI management: the higher the risk of an AI system, the stricter and more comprehensive the requirements for its use. For companies, this means identifying their AI systems, assessing the risk of each system and taking measures based on that assessment. This article serves as a guide through this AI risk management process.
1. Basics of AI risk management: the AI inventory
The basis of every risk classification is a complete record of all potential risk sources. For AI systems, this means an inventory of all AI systems that are relevant to the company. The AI Regulation defines an AI system in Article 3(1) as: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
To clarify this definition, the European Commission has published guidelines that set out the following seven criteria (a simple code sketch of an inventory record follows the list):
- Machine-based system: it must be a system that is based on hardware and software and operates under computer control.
- Degree of autonomy: the system must exhibit a certain degree of independence from human control and intervention.
- Adaptability (optional): the system may be capable of learning and changing its behaviour once it is operational; however, this is not a mandatory requirement.
- Objectives of the AI system: the system operates on the basis of explicit or implicit objectives.
- Ability to draw conclusions (inference): a key feature is the ability to independently draw conclusions from input data in order to generate outputs.
- Generation of specific outputs: the typical outputs are predictions, content (e.g. text, images, video), recommendations or decisions.
- Interaction with the environment: the generated outputs must be able to influence physical or virtual environments.
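To make the inventory step more tangible, here is a minimal sketch of how an inventory record covering the seven criteria could be modelled in code. The class and field names are illustrative assumptions, not terms from the AI Regulation, and the screening method is no substitute for a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a company-wide AI inventory (illustrative only)."""
    name: str                      # internal name of the system
    owner: str                     # responsible team or vendor
    # The seven criteria from the Commission guidelines, recorded as booleans:
    machine_based: bool            # runs on hardware and software under computer control
    autonomous: bool               # operates with some independence from human control
    adaptive: bool                 # may learn and change once operational (optional criterion)
    has_objectives: bool           # pursues explicit or implicit objectives
    infers_outputs: bool           # draws conclusions from input data (inference)
    generates_outputs: bool        # produces predictions, content, recommendations or decisions
    influences_environment: bool   # outputs can affect physical or virtual environments

    def is_likely_ai_system(self) -> bool:
        """Rough screening: all mandatory criteria must hold; adaptivity is optional."""
        return all([
            self.machine_based, self.autonomous, self.has_objectives,
            self.infers_outputs, self.generates_outputs, self.influences_environment,
        ])
```

A record like this can then serve as the input for the risk classification in the next step.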
2. Risk classification in AI risk management: the decisive step
In the second step of AI risk management, AI systems are categorised into the four risk classes of the AI Regulation. This step is crucial because it determines the scope of the regulatory obligations. Without a structured risk assessment, compliant AI risk management is not possible. The AI Regulation distinguishes between the following risk categories (a simplified classification sketch follows the list):
- Prohibited AI systems are those used for practices banned under Art. 5 of the AI Regulation because they fundamentally violate fundamental rights and values, such as manipulative techniques, social scoring by authorities or certain forms of real-time remote biometric identification. The details can be found in Art. 5 of the AI Regulation.
- High-risk AI systems pursuant to Art. 6 of the AI Regulation are AI systems that pose a high risk to health, safety or fundamental rights. On the one hand, this covers AI used as a safety component of a product that falls under one of the EU product safety acts listed in Annex I of the AI Regulation (e.g. the Machinery Directive or the Medical Devices Regulation). On the other hand, it covers AI systems intended to be used in one of the areas listed in Annex III of the AI Regulation, such as medicine, critical infrastructure or human resources. Extensive obligations apply to these systems, particularly with regard to risk management.
- AI systems with limited risk in accordance with Art. 50 of the AI Regulation are systems that pose specific transparency risks, such as the risk of manipulation or deception. This includes chatbots, emotion recognition systems and deepfakes. Here, transparency obligations must primarily be fulfilled, e.g. informing users about the interaction with an AI or labelling AI-generated content.
- AI systems with low or no risk are all systems that do not fall into the first three categories; this is the residual category. The AI Regulation does not impose any specific obligations on them, but other legislation (e.g. the GDPR) must still be observed.
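As a rough illustration of this classification logic, the following sketch checks the strictest category first. The boolean inputs are hypothetical simplifications; whether a practice is prohibited or a use case falls under Annex I or Annex III always requires a legal analysis of Art. 5, Art. 6 and the Annexes.

```python
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"        # Art. 5: banned practices
    HIGH_RISK = "high-risk"          # Art. 6 in conjunction with Annex I / Annex III
    LIMITED_RISK = "limited risk"    # Art. 50: transparency obligations
    MINIMAL_RISK = "low or no risk"  # residual category

def classify(uses_prohibited_practice: bool,
             is_annex_i_safety_component: bool,
             is_annex_iii_use_case: bool,
             interacts_with_users_or_generates_content: bool) -> RiskClass:
    """Check the strictest category first; the first match determines the class."""
    if uses_prohibited_practice:
        return RiskClass.PROHIBITED
    if is_annex_i_safety_component or is_annex_iii_use_case:
        return RiskClass.HIGH_RISK
    if interacts_with_users_or_generates_content:
        return RiskClass.LIMITED_RISK
    return RiskClass.MINIMAL_RISK
```

The order of the checks matters: a system that would fall into several categories is always governed by the strictest applicable one.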
3. Measures in AI risk management to reduce risk
To avoid AI risks and achieve AI compliance, ‘only’ the measures of the AI Regulation and, if applicable, other laws need to be implemented. With a good risk classification in place, the ball is on the penalty spot, so to speak. Specifically, the following measures need to be taken for AI systems in the corresponding risk classes:
Avoiding prohibited practices requires companies to develop and implement internal AI guidelines with requirements and prohibitions for employees, so that employees recognise prohibited practices and refrain from them. Compliance with requirements can only be expected if those requirements are communicated and known. However, that alone is not enough: employees must also genuinely understand what the rules mean for their daily work. This requires employee training and the promotion of AI expertise.
For high-risk AI systems, there is an extensive catalogue of requirements that must be met. The most important are:
- AI risk management in the narrower sense in accordance with Art. 9 of the AI Regulation, i.e. the establishment, application, documentation and maintenance of a continuous, iterative risk management process over the entire life cycle of the AI system. This includes identifying, analysing, assessing and mitigating risks as well as continuously reviewing and updating them.
- Data and data governance (Art. 10 AI Regulation): It must be ensured that the data sets used for the training, validation and testing of the AI system are of high quality (i.e. relevant, representative, error-free and complete) in order to minimise risks and discriminatory results.
- Technical documentation (Art. 11 AI Regulation in conjunction with Annex IV): Providers must prepare comprehensive technical documentation before placing a high-risk AI system on the market or putting it into operation and keep it up to date. Annex IV of the AI Regulation lists the minimum content in detail. Simplifications are provided for small and medium-sized enterprises (SMEs) when preparing the technical documentation.
- Recording obligations (Art. 12 AI Regulation): High-risk AI systems must have functions for the automatic logging of events (‘logs’) during their operation. These logs are intended to ensure the traceability of results and serve to monitor and investigate incidents. As a rule, operators must keep these logs for at least six months (see the logging sketch after this list).
- Transparency and provision of information for operators (Art. 13 AI Regulation): High-risk AI systems must be designed and provided with such information (in particular comprehensive instructions for use) that operators can understand how the system works, interpret its results appropriately and use it as intended.
- Human oversight (Art. 14 AI Regulation): High-risk AI systems must be designed and developed in such a way that effective human supervision is possible during operation. Operators are obliged to deploy appropriately qualified persons for this supervision.
- Accuracy, robustness and cybersecurity (Art. 15 AI Regulation): High-risk AI systems must achieve and maintain an adequate level of accuracy, robustness against errors or inconsistencies and cybersecurity throughout their lifecycle.
- Quality management system (Art. 17 AI Regulation): Providers must establish, document, apply and maintain a quality management system that ensures compliance with the AI Regulation.
- Conformity assessment procedure (Art. 43 AI Regulation): A conformity assessment procedure must be carried out before placing on the market or putting into service. The AI Regulation provides for different procedures depending on the type of high-risk AI system and the application of harmonised standards.
- Registration (Art. 49 of the AI Regulation): Providers must register themselves and their high-risk AI systems in a database to be set up by the EU Commission before they are placed on the market or put into operation.
- Obligations for operators (Art. 26 AI Regulation): Operators of high-risk AI systems must use them in accordance with the instructions for use, ensure the quality of the input data, monitor operation, ensure appropriate human supervision, keep logs and fulfil information obligations.
- Fundamental rights impact assessment (Art. 27 AI Regulation): Certain operators, in particular bodies governed by public law or private organisations providing public services, must carry out an impact assessment of the effects on fundamental rights before certain high-risk AI systems are put into operation for the first time.
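To illustrate the recording obligation from Art. 12 mentioned above, here is a minimal sketch of automatic event logging with a retention check. The JSON-lines format, the function names and the storage approach are assumptions; only the six-month minimum retention follows from the description above, and a longer period may be required in practice.

```python
import json
import time

# Minimum retention per the six-month duty described above; keep longer if required.
RETENTION_SECONDS = 183 * 24 * 3600

def log_event(log_file: str, event_type: str, details: dict) -> None:
    """Append one timestamped event record (JSON lines) for traceability."""
    record = {"ts": time.time(), "event": event_type, "details": details}
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def purge_expired(log_file: str) -> None:
    """Remove records older than the retention window; keep everything newer."""
    cutoff = time.time() - RETENTION_SECONDS
    with open(log_file, encoding="utf-8") as f:
        kept = [line for line in f if json.loads(line)["ts"] >= cutoff]
    with open(log_file, "w", encoding="utf-8") as f:
        f.writelines(kept)

# Example: log_event("ai_system.log", "prediction", {"model": "credit-scoring-v2", "output": "approved"})
```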
For AI systems with limited risk, the specific transparency obligations under Art. 50 of the AI Regulation must be fulfilled. In particular, this requires informing users that they are interacting with an AI and labelling AI-generated content as such.
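As a simple illustration of these transparency duties, the sketch below prepends an AI disclosure to a chatbot conversation and attaches a machine-readable label to generated content. The wording of the notice and the label fields are illustrative assumptions, not prescribed by the AI Regulation.

```python
AI_DISCLOSURE = "Note: you are interacting with an AI system."

def disclose_interaction(first_message: str) -> str:
    """Prepend the AI disclosure to the first message of a conversation."""
    return f"{AI_DISCLOSURE}\n\n{first_message}"

def label_generated_content(content: str) -> dict:
    """Attach a machine-readable marker identifying content as AI-generated."""
    return {"content": content, "ai_generated": True, "label": "AI-generated content"}
```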
There are no specific obligations under the AI Regulation for AI systems with low or no risk. Nevertheless, other legal provisions (e.g. the GDPR when processing personal data) must be observed.
4. Practical recommendations for companies
The AI Regulation undoubtedly presents companies with new and complex challenges: they need to classify AI systems carefully and to fulfil extensive technical, organisational and documentation obligations, especially in the high-risk area. However, these challenges can be mastered with the toolbox of classic compliance and risk management. As always, even the longest journey begins with the first step. AI risk management should therefore be addressed at an early stage, AI expertise should be built up within the organisation, and responsibilities and processes for AI risk management should be defined.