Authored by Attorney-at-law Mathias Bartholdy
The regulation is intended to address the risks associated with the use of AI. These risks include the use of AI in tracing and analysing individuals’ patterns of behaviour and interests (profiling), an area that is already subject to restrictions under the data protection rules.
The regulation supplements the data protection rules by ensuring that personal data is adequately protected in connection with the use of AI-based products and services.
Learn more about the four key points from the regulation below.
Broad definition of AI systems
The special requirements of the regulation only apply where an AI system is used. The European Commission proposes a relatively broad definition of AI systems, although this breadth was highlighted as a specific risk during the consultation. The definition includes, for example, “expert systems”, which characterised some of the first types of “artificial intelligence” software dating back to the late 1950s(!). In popular terms, an expert system is a piece of software that imitates human decision-making by way of if-then rules (if x occurs, then y follows). Such rules are the cornerstones of virtually all modern programming languages.
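To illustrate why this part of the definition seems so broad, a minimal sketch of such a rule-based “expert system” follows. The rules and facts below are invented for illustration and have nothing to do with the regulation itself; the point is simply that the if-then pattern is ordinary programming.

```python
# A minimal sketch of an "expert system": if-then rules applied to facts.
# All rules and facts here are hypothetical illustrations.

def evaluate(facts, rules):
    """Apply each if-then rule whose condition holds and collect conclusions."""
    conclusions = []
    for condition, conclusion in rules:
        if condition(facts):                # "if x occurs ..."
            conclusions.append(conclusion)  # "... then y follows"
    return conclusions

# Hypothetical rules for a toy credit check:
rules = [
    (lambda f: f["income"] < 20000, "flag: low income"),
    (lambda f: f["missed_payments"] > 2, "flag: payment history"),
]

print(evaluate({"income": 15000, "missed_payments": 0}, rules))
# prints ['flag: low income']
```

Any program written this way arguably “imitates human decision-making”, which is exactly why companies may struggle to tell where ordinary software ends and regulated AI software begins.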
Statistical approaches as well as search and optimisation methods are also included; again, these are relatively broad concepts.
This makes it difficult for companies to decide whether software is “merely” software or AI software covered by the rules. For now, we can only hope that, during the coming negotiations, the European Parliament and the Council will make concerted efforts to define the concept more precisely, providing clarity as to which companies will and will not be affected.
High-risk AI systems to be subject to strict obligations
The majority of the obligations in the draft regulation concern the so-called high-risk AI systems. This category comprises AI systems used for purposes such as recruitment procedures, creditworthiness assessments, supervision of employees and allocation or deprivation of public benefits.
Systems falling within this category must be registered and bear the CE marking. Providers of such systems should therefore expect to be required to meet a number of obligations before they can even enter the market. These obligations include the adoption of risk management measures, the preparation of detailed documentation on the system and its purpose, and concise and clear user information.
See also the European Commission’s overview of the proposal’s implications for providers of high-risk AI systems.
New highs for penalty levels
The Commission proposes that non-compliance with the data governance requirements for high-risk AI systems should be subject to a penalty. The requirements are intended to ensure the quality of the data used in high-risk AI systems. They include appropriate data governance and management practices with high standards for training, validation and testing of the data sets used. Non-compliance with the requirements may be subject to a fine of up to 6 per cent of the company’s worldwide annual turnover. For comparison, the highest penalty level under the GDPR is 4 per cent of a company’s worldwide annual turnover.
Additional measures for small-scale providers and start-ups
The European Commission has focused particularly on measures to reduce the regulatory burden on small-scale providers and start-ups in the regulation. As an example, they can look forward to so-called “AI regulatory sandboxes” intended to support responsible innovation within AI.
These are just some of the topics contained in the coming regulation. Learn more on the European Commission’s website.
An effective date for the regulation remains to be set, and the proposal will have to make its way through the European Parliament and the Council before a final regulation text is ready. However, there is little doubt that it will be adopted in one form or another. At Patrade, we will keep a close eye on the progress of the regulation in Brussels.