Table of Contents
- Preface
- I. On the Scope
- II. Design
- III. Development
- 1. AI System Requirements
- a. Quality Management System
- b. Risk Management
- c. Human Oversight
- d. Accuracy, Robustness, and Cybersecurity of AI Systems (Article 15 AI Act)
- (1) Overview
- (2) Detailed Explanations
- 2. Overview of Data Governance & Data Management (Article 10)
- a. Machine Learning and Training Data in a Nutshell
- b. Mandatory Quality Criteria for Training, Validation, and Testing of AI Systems, Article 10(1)
- c. Data Governance & Data Management, Article 10(2)
- d. Standardization of Data (Quality) Management
- e. Definitions and Metrics
- f. The Individual Elements
- (1) Relevant Design Choices, Article 10(2)(a)
- (2) Data Collection Processes and the Origin of Data, and in the Case of Personal Data, the Original Purpose of Data Collection, Article 10(2)(b)
- (3) Relevant Data-Preparation Processing Operations, such as Annotation, Labeling, Cleaning, Updating, Enrichment, and Aggregation, Article 10(2)(c)
- (4) Formulation of Assumptions, Particularly Regarding the Information that the Data are Intended to Measure and Represent, Article 10(2)(d)
- (5) Assessment of the Availability, Quantity, and Suitability of the Necessary Data Sets, Article 10(2)(e)
  - (6) Examination of Possible Biases Affecting Health and Safety, Fundamental Rights, or Leading to Discrimination Prohibited Under Union Law, Article 10(2)(f), and Appropriate Measures to Detect, Prevent, and Mitigate Possible Biases Identified, Article 10(2)(g)
- (7) Identification of Relevant Data Gaps or Shortcomings Preventing Compliance and Appropriate Mitigation Measures, Article 10(2)(h)
- g. Combating Bias and Discrimination, Articles 10(3), (4), and (5)
- (1) Recitals and Ethics Guidelines for Trustworthy AI by the High-Level Expert Group on Artificial Intelligence (HLEG AI, 2019)
- (2) Fundamental Rights Agency (FRA), LIBE, HLEG AI and the Toronto Declaration
- (3) Research and Science: It’s Not Just the Data, Stupid!
- (4) International Standardization
  - (5) Relevant, Sufficiently Representative, Free of Errors, and Complete in View of the Intended Purpose
- (6) Balanced Statistical Characteristics in Datasets
- (7) Geographically, Contextually, Behaviorally, or Functionally Typical Datasets
  - (8) Processing of Sensitive Data for the Analysis and Mitigation of Biases
  - (9) AI, Bias, and European Anti-Discrimination Law: An Overview
- 3. Testing & Compliance
- 4. Technical Documentation
- IV. Deployment
- 1. Providers
- a. The Obvious
- b. Documentation Keeping (Article 18) and Automatically Generated Logs (Article 19)
- c. Risk Management
- d. Human Oversight
- e. Transparency and Provision of Information to Deployers and/or End Users
- f. Post-market Monitoring (Article 72), Corrective Action & Duty of Information (Article 20), Reporting Serious Incidents (Article 73) and Cooperation with the Authorities
- 2. Deployers
- a. The Obvious: Due Diligence, Use According to Instructions for Use (Logs, Human Oversight), Transparency, Monitoring and Duty of Information, Reporting
- b. Data Governance
- c. Fundamental Rights Impact Assessment (FRIA)
- d. Consulting the Works Council
- e. Data Protection Impact Assessment
  - f. Article 25 Obligations Along the AI Value Chain
- V. Special Considerations