The Role of AI Audits and Compliance Checks Under the EU AI Act

By Mark Kelly



In March 2024, the European Parliament formally approved the EU AI Act, a comprehensive set of regulations that seeks to govern the development and deployment of artificial intelligence (AI) within the European Union. This groundbreaking legislation spans 458 pages and touches on everything from banned uses of AI to stringent requirements for high-risk applications. In this blog post, we will delve into the specific aspects of AI audits and compliance checks under this new regulatory framework, exploring how they aim to ensure safety, transparency, and adherence to ethical standards.


The EU AI Act represents a significant step forward in the legal governance of AI technologies. It establishes a legal framework intended to protect fundamental human rights and safety while fostering innovation and the responsible deployment of AI systems. One of the key components of this Act is its detailed approach to AI audits and compliance checks, especially for high-risk AI applications. Let’s explore what this entails and how it impacts organisations across the EU.

Understanding the Tiered System for Regulatory Requirements

Categorisation of AI Applications

The EU AI Act introduces a tiered system of regulation that categorises AI systems based on their potential risk to society:

– **Low or minimal risk AI systems** are largely unregulated, allowing for flexibility and innovation in less critical applications.
– **High-risk AI systems**, such as AI used in healthcare, financial services, and public surveillance, must adhere to stringent requirements, including robustness, accuracy, and human oversight.
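The tiered categorisation above can be illustrated with a short sketch. Everything here is hypothetical and for illustration only (the tier names, the example use cases, and the `classify_risk` helper are assumptions, not definitions from the Act, which describes the categories in legal rather than programmatic terms):

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the Act's categories."""
    MINIMAL = "minimal"        # largely unregulated
    HIGH = "high"              # stringent requirements apply
    PROHIBITED = "prohibited"  # banned uses of AI

# Hypothetical mapping of example use cases to tiers.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "video_game_ai": RiskTier.MINIMAL,
    "credit_scoring": RiskTier.HIGH,
    "medical_diagnosis": RiskTier.HIGH,
    "public_surveillance": RiskTier.HIGH,
}

def classify_risk(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case, defaulting to HIGH
    so that unknown applications trigger review rather than being ignored."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown use cases to the high-risk tier reflects a conservative compliance posture: it is safer to flag a system for review than to let it slip through unclassified.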

Focus on High-Risk Applications

Certain AI applications have been singled out as high risk due to their significant implications for individual rights and societal norms:

– **AI credit scoring and insurance pricing:** These systems are considered high risk because they can unfairly discriminate against individuals or groups.
– **Compliance requirements:** High-risk systems must be not only robust and accurate but also transparent in their operations, so that audits and compliance checks can be carried out effectively.

Supervision and Compliance: Ensuring Rigorous Oversight

Institutional Architecture for AI Oversight

Under the AI Act, a comprehensive oversight mechanism is established:

– **National AI authorities:** These bodies are responsible for overseeing the implementation of the Act, focusing on high-risk applications outside the financial sector.
– **Financial sector oversight:** In the financial services industry, existing national financial supervisors may be tasked with overseeing AI applications, leveraging established frameworks for model risk management and governance.

Coordination Among Authorities

Effective coordination is crucial to prevent overlapping regulations and to ensure a harmonised approach across Europe. This includes:

– Aligning the supervisory activities of national and European bodies.
– Ensuring that AI systems are compliant with both sector-specific and general AI regulations.

The Challenge of Model Risk in the AI Landscape

Evolving Risk Management

The use of AI in critical sectors like banking necessitates a reevaluation of traditional risk management practices. AI introduces complexities that traditional models did not account for, requiring an updated approach that includes:

– Enhanced scrutiny of AI models to ensure they do not perpetuate biases or lead to unintended consequences.
– Coordinated regulatory efforts to maintain consistency and comprehensiveness in AI audits and compliance checks.

Harmonisation of Supervisory Practices

Achieving a balanced and consistent regulatory environment requires:

– Close collaboration between various regulatory bodies across Europe.
– Development of shared standards and practices for AI risk assessment and mitigation.

Transparency and Rulemaking: Navigating the New Regulatory Landscape

Transparency Requirements

All AI systems, especially those categorised as high risk, must adhere to stringent transparency requirements. These include:

– Detailed documentation of AI operations and data usage.
– Clear labelling of AI-generated outputs, such as deepfakes.
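To make the labelling requirement concrete, here is a minimal sketch of attaching disclosure metadata to an AI-generated output. The field names and the `disclosure` helper are assumptions chosen for illustration; the Act does not prescribe a specific data format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabelledOutput:
    """An AI-generated artefact bundled with disclosure metadata."""
    content: str
    model_id: str              # which system produced the output
    ai_generated: bool = True  # explicit disclosure flag
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def disclosure(self) -> str:
        """Human-readable label suitable for display alongside the content."""
        return f"AI-generated content (model: {self.model_id}, created: {self.created_at})"

# Example: wrap a generated text with its provenance label.
out = LabelledOutput(content="Synthetic summary text...", model_id="demo-model-v1")
print(out.disclosure())
```

Keeping the label machine-readable as well as human-readable means the same record can serve both end-user disclosure and later audit trails.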

Preparing for Future Regulations

While the AI Act sets a robust framework, many details of the regulation are still under development and will be rolled out gradually. Organisations must:

– Stay informed about upcoming regulatory changes and deadlines.
– Begin preparing now by auditing their AI systems and aligning their practices with expected requirements.

Challenges for Organisations

Complying with the new AI Act may pose significant challenges for some organisations, particularly those that:

– Have not previously maintained detailed records of their AI systems.
– Use AI models that are inherently less transparent, such as deep neural networks.

Organisations will need to invest in better data management and possibly redesign some AI systems to meet new standards of interpretability and compliance.
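As a starting point for the record-keeping described above, an organisation might maintain a machine-readable inventory of its AI systems. The schema below is a hedged sketch, not a mandated format; all field names and example values are assumptions:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AISystemRecord:
    """Minimal documentation record for one AI system in an inventory."""
    name: str
    purpose: str
    risk_tier: str          # e.g. "minimal" or "high"
    data_sources: list      # where training/input data comes from
    human_oversight: str    # how a person can intervene or override

record = AISystemRecord(
    name="loan-approval-model",
    purpose="Credit scoring for consumer loan applications",
    risk_tier="high",
    data_sources=["application forms", "credit bureau data"],
    human_oversight="Analyst reviews all declined applications",
)

# Serialise to JSON so records can be versioned, shared, and audited.
inventory_json = json.dumps(asdict(record), indent=2)
print(inventory_json)
```

Even a simple record like this forces the questions an auditor would ask: what does the system do, what data feeds it, what tier does it fall into, and where does a human sit in the loop.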


The EU AI Act is a landmark regulation that sets the stage for the future of AI governance in Europe. By instituting rigorous audits and compliance checks, the Act aims to ensure that AI technologies are used in a manner that respects human rights and safety while fostering innovation and technological advancement. As we move forward, the effective implementation of this Act will require concerted efforts from all stakeholders involved, from regulators to the organisations deploying AI systems. The path ahead is complex but necessary for creating a balanced ecosystem where AI can thrive responsibly and ethically.

If you are interested in finding out more about the EU AI Act, check out the comprehensive EU AI Act Online course, which can be found here.

Are you ready to dive deep into the transformative world of AI regulation with an expert who can demystify complex topics and bring them to life?

Booking Mark Kelly AI for your next event is your chance to explore the intricate details of the EU AI Act alongside the GDPR, guided by a seasoned expert in digital regulation.

Mark’s engaging talks not only clarify these critical frameworks but also illustrate their profound implications for businesses across sectors.
Enhance your organisation’s understanding and preparedness for the changing digital landscape.
Invite Mark Kelly AI to speak at your next event and empower your team to lead in compliance, innovation, and ethical practices in the AI-dominated future.