Discussion - Ethical and trustworthy Artificial Intelligence

The AIA is expected to have a global effect: it applies horizontally across industries, it covers all providers and users wherever the output of an AI system is used in the EU, and the accompanying standardization drives harmonization, increasing business opportunities in large markets such as the EU.

Although most industries do not fall directly under the requirements of the proposed AIA, a domino effect is expected with the introduction of requirements for General Purpose AI Systems. This will further increase the regulatory burden for companies that do not operate under a strict regulatory framework and whose current operations do not include oversight by auditing authorities.

The requirements set by the AIA are high-level and are to be refined by Harmonized Standards and Common Specifications. Some of them, such as human oversight, may be difficult to implement in high-volume or high-speed applications. The AIA mitigates this by including wording that suggests human oversight will be an obligation assigned to professional rather than lay users of AI.

The interplay between AI standards and standards applicable under other sectorial Union law has been clarified only for a small number of standards. For example, how will the medical-device risk management standard ISO 14971 work alongside ISO/IEC 23894 on AI risk management? Both the AIA (Council, Nov 2022) and the draft request for standardization (EC, Draft standardisation request to the European Standardisation Organisations in support of safe and trustworthy artificial intelligence, 2022) address this question by stating that AI aspects will be integrated into the procedures and systems applicable under sectorial Union law.

Incident reporting (Art. 62) requires providers of high-risk AI systems to report to the market surveillance authorities. Reporting obligations under other regulations, such as MDR 2017/745 and IVDR 2017/746, continue to apply in parallel. Notification under the AIA will be limited to serious incidents involving a breach of obligations to protect fundamental rights (Art. 3, Sec 44c).

The definitions of an incident differ between these frameworks, which further complicates the reporting process:

  • MDR Art. 2(58): “Serious adverse event” means any adverse event that led to any of the following:

a. Death

b. Serious deterioration in the health of the subject, that resulted in any of the following:

  • Life-threatening illness or injury
  • Permanent impairment of a body structure or a body function
  • Hospitalisation or prolongation of patient hospitalisation
  • Medical or surgical intervention to prevent life-threatening illness or injury or permanent impairment to a body structure or a body function
  • Chronic disease

c. Foetal distress, foetal death or a congenital physical or mental impairment or birth defect

  • AIA Art. 3(44): “Serious incident” means any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

a. The death of a person or serious damage to a person’s health
b. A serious and irreversible disruption of the management and operation of critical infrastructure
c. Breach of obligations under Union law intended to protect fundamental rights
d. Serious damage to property or the environment

Sandboxes are another concept introduced by the AIA. Although they are meant to foster innovation, with specific actions foreseen for SMEs and start-ups, further clarification is required on their role in the assessment process.

Predetermined changes apply to evolving AI. However, there is no definition of what predetermined changes are, nor of what should be included in the initial assessment and monitored during the post-market surveillance phase for this purpose.

Although there are other areas of the AIA that require further clarification, the intent of this paper is not to list them all, but rather to provide the reader with a brief introduction to the AIA. The AIA is clearly a necessary development, as remaining idle in an era where AI is already in use is not an option. Over time, explanatory and supplementary documentation will need to be developed to provide additional clarity and support interpretation of the legal text.

The horizontal approach of the AIA is under global debate, as other legislators believe a vertical approach, amending existing industry-specific legislation, is more appropriate because it takes context-specific AI risks and requirements into consideration. Once the EU adopts the AIA, its conformity assessment process will diverge from those of other countries, such as the UK. A global process for assessing AI would benefit AI providers, as a single application would cover multiple jurisdictions. Initial steps have been taken in this direction; however, such divergence in approaches already exists under most other product legislation.

This blog post is an excerpt from our updated whitepaper: Ethical and trustworthy Artificial Intelligence. Please download the full whitepaper to find out more information.

The Compliance Navigator blog is issued for information only. It does not constitute an official or agreed position of BSI Standards Ltd or of the BSI Notified Body. The views expressed are entirely those of the authors.