
The EU Artificial Intelligence Act 2024: Supported by ISO Standards

06 August 2024

The Artificial Intelligence Act (Regulation (EU) 2024/1689), published on 12 July 2024 and in force from 1 August 2024, establishes harmonised EU rules for AI development and usage, emphasising transparency, accountability, and data protection, with strict obligations for high-risk AI applications to ensure safety and compliance with fundamental rights.

What is The Artificial Intelligence Act?

On 12 July 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, also known as the EU Artificial Intelligence Act, was published in the Official Journal of the European Union. It entered into force on 1 August 2024; note that entry into force is not a compliance deadline, as the Act's obligations apply in stages over the following years.

This regulation introduces harmonised rules for AI development, market placement, and usage within the EU, adopting a proportionate risk-based approach. The Artificial Intelligence Act focuses on transparency, accountability, and data protection, ensuring that AI systems are auditable and trustworthy. Special attention is given to AI applications considered high-risk, where more stringent control measures are required. The Act imposes stringent obligations on AI providers to ensure safety and adherence to existing legislation protecting fundamental rights throughout the AI lifecycle. This regulation represents an essential advancement in the framework for AI governance.

Why was the regulation introduced?

The rapid advancement of Artificial Intelligence has brought about significant benefits but also considerable risks. The need for a comprehensive regulatory framework became apparent as AI technologies increasingly influence various aspects of life, from healthcare and finance to security and surveillance. The AI Act addresses these concerns by establishing a clear set of guidelines to ensure the ethical and safe deployment of AI systems.

What is the impact on businesses?

Not all companies are affected by this law in the same way. It primarily affects those working with AI systems in sectors such as health, transport, finance, and surveillance. These businesses must adhere to the highest standards of the law, due to the significant potential impact their technologies could have on security and people's rights.

Additionally, the regulation supports innovation by allowing businesses, including SMEs and startups, to develop, train, validate, and test AI systems within AI regulatory sandboxes. By 2nd August 2026, each EU Member State must establish at least one sandbox, which can be set up jointly with other Member States. These sandboxes provide a controlled environment for testing AI systems under regulatory supervision, facilitating innovation while ensuring adherence to regulatory standards.

Requirements for compliance

To comply with the AI Act, businesses should focus on several important steps:

Risk Assessment: Companies should analyse how their AI systems could potentially cause harm and find ways to prevent these risks.

Transparency: They should document how their AI systems work and how they make decisions, so that it is clear to both regulators and the public.

Data Protection: Implement strong security measures to protect the personal data they handle.

Human Oversight: Ensure that there is always an adequate level of human oversight, to prevent systems from acting without proper control.

Rights of Recourse: Provide people with a way to complain and obtain a remedy if they suffer harm due to an AI system.
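For teams tracking these obligations internally, the five steps above can be sketched as a simple self-assessment checklist. This is an illustrative sketch only: the class and function names are hypothetical, and nothing here is official AI Act tooling or legal advice.

```python
from dataclasses import dataclass

# Illustrative sketch: the five compliance focus areas modelled as a
# self-assessment checklist. Names and structure are hypothetical,
# not part of the AI Act or any official compliance tool.

@dataclass
class ComplianceCheck:
    area: str
    question: str
    satisfied: bool = False

def build_checklist() -> list:
    """Return one check per compliance focus area listed in the article."""
    return [
        ComplianceCheck("Risk Assessment",
                        "Have potential harms been analysed and mitigations defined?"),
        ComplianceCheck("Transparency",
                        "Is system behaviour and decision-making documented?"),
        ComplianceCheck("Data Protection",
                        "Are strong security measures in place for personal data?"),
        ComplianceCheck("Human Oversight",
                        "Is an adequate level of human oversight ensured?"),
        ComplianceCheck("Rights of Recourse",
                        "Can affected people complain and obtain a remedy?"),
    ]

def open_items(checklist: list) -> list:
    """List the areas still needing attention."""
    return [check.area for check in checklist if not check.satisfied]
```

A team might run `open_items(build_checklist())` periodically and mark each check `satisfied` as the corresponding evidence (risk analyses, documentation, security measures) is produced.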

Responding to Legal Requirements with ISO 42001:2023

ISO/IEC 42001:2023 is a standard that offers a structured approach to managing AI systems. How does it support compliance with the AI Act?

Governance and Structure: The standard helps to establish a clear structure within the company for managing AI, which facilitates compliance with the oversight and transparency requirements of the law.

Risk Management: Provides a systematic approach to identifying and mitigating risks, essential for the risk assessments the law requires for high-risk AI applications.

Data Protection: Emphasises data security and privacy, helping companies comply with data protection laws such as GDPR.

Continuous Improvement: Promotes constant review and improvement of AI practices, crucial for adapting to changes in law and technology.

Having a management system based on ISO 42001:2023 provides a robust and structured framework that not only helps companies comply with current AI laws but also prepares them to respond to future regulations and technological challenges.

Want to find out more about ISO 42001? Talk to one of our friendly team.