Regulating AI in the EU: The Impact of Intended Purpose Under the AI Act
Explore how intended purpose drives AI regulation, risk, and liability under the EU AI Act.
ETHICS AND LAW
Obele Tom-George Akinniranye
3/12/2025 · 5 min read


The emergence of the European Union (EU) Artificial Intelligence Act (EU AI Act, Regulation (EU) 2024/1689) signifies a notable milestone in AI regulation by introducing the 'intended purpose' as a foundational concept for defining the scope, accountability, and regulatory compliance obligations of AI systems. This concept, defined in Article 3(12) of the Act, refers to the use for which an AI system is intended by its provider, including the specific objectives, functionalities, and contexts for which it is designed, developed, and deployed. By anchoring liability and regulatory compliance obligations to the intended purpose, the Act provides a comprehensive and robust legal framework for regulating AI systems according to the risks associated with misuse, unauthorized deployment, or adaptive behaviour. The Act is the first major regulatory attempt globally to establish clear legal standards for AI, and its extraterritorial reach is expected to produce a Brussels Effect, shaping AI governance well beyond the EU's borders.

The EU AI Act is a significant regulation aimed at addressing both the benefits and the risks associated with AI systems. It protects health, safety, the environment, fundamental human rights, democratic systems, and the rule of law, and it applies to public and private entities within and outside the EU, as long as the AI system affects people in the EU. This blog critically examines the impact of the intended-purpose principle, with particular emphasis on AI deployment, its intersection with other legal frameworks, and the challenges it presents. These challenges are particularly relevant for general-purpose AI (GPAI) systems that may evolve beyond their initially intended design.
What is the Role of Intended Purpose in the EU AI Act?
Conceptually, 'intended purpose' serves as a regulatory anchor for the implementation of several key areas of the EU AI Act. Its role is discussed below under three headings: risk classification and compliance requirements; conformity assessments and ongoing compliance; and liability and accountability mechanisms.
Under the EU AI Act, AI systems are categorized into four risk levels: unacceptable, high-risk, limited-risk, and minimal-risk. Classification into these categories relies on the principle of intended purpose as a key criterion. High-risk AI systems, such as AI used in healthcare, law enforcement, and critical infrastructure, are subject to stringent regulatory requirements such as conformity assessments and post-market surveillance. Article 6 of the Act links an AI system's risk classification to its intended purpose, ensuring that regulatory scrutiny aligns with the system's anticipated function and impact and that regulation is proportionate to the risks of the specific use case. Consider, for example, an AI tool deployed in healthcare whose intended purpose includes, but is not limited to, assisting clinicians in diagnosing cancer from medical scans. Such tools are classified as high-risk because they affect human beings directly: an incorrect diagnosis could cause harm to a human, a result that would breach the First Law of Robotics in Isaac Asimov's Three Laws of Robotics. High-risk AI systems are therefore subject to strict compliance requirements (e.g., transparency, human oversight, accuracy testing) and regulatory oversight to ensure rigorous testing, explainability, and accountability before deployment.
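To make this concrete, here is a minimal, purely illustrative sketch in Python of how a compliance team might encode a first-pass mapping from a declared intended purpose to a risk tier. The category lists, tier names, and the classify_risk function are simplified assumptions made for demonstration; they do not reproduce the Act's actual legal tests under Articles 5 and 6 or Annex III.

```python
# Illustrative only: a simplified, first-pass mapping from a declared intended
# purpose to an EU AI Act-style risk tier. The category names below are
# assumptions for demonstration and do not reproduce the Act's legal tests.

PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}  # Article 5-style bans (simplified)
HIGH_RISK_DOMAINS = {                                                 # Annex III-style domains (simplified)
    "medical_diagnosis",
    "law_enforcement",
    "critical_infrastructure",
    "recruitment",
}
TRANSPARENCY_ONLY = {"chatbot", "deepfake_generation"}                # limited-risk, transparency duties only


def classify_risk(intended_purpose: str) -> str:
    """Return a coarse risk tier for a declared intended purpose."""
    if intended_purpose in PROHIBITED_PRACTICES:
        return "unacceptable"   # prohibited practice
    if intended_purpose in HIGH_RISK_DOMAINS:
        return "high-risk"      # conformity assessment, oversight, post-market monitoring
    if intended_purpose in TRANSPARENCY_ONLY:
        return "limited-risk"   # transparency obligations
    return "minimal-risk"       # no specific obligations


print(classify_risk("medical_diagnosis"))   # -> high-risk
```

In reality, classification under the Act depends on a legal assessment of the system's documented intended purpose rather than a lookup table, but the sketch captures the core idea that the declared purpose drives the applicable obligations.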
AI providers and deployers must justify their systems' intended purposes through documentation, testing, and the adoption of compliance measures. Providers must ensure that AI systems undergo a conformity assessment to verify alignment with the intended purpose before deployment, and must maintain compliance mechanisms in line with EU safety and fundamental rights standards on an ongoing basis. Article 9 requires AI providers to implement a risk management system that continuously evaluates potential deviations from the system's intended purpose, and post-market monitoring is mandatory to help detect adaptive behaviour or modifications that could create new risks outside the system's original design.
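By way of illustration, the short sketch below shows one way a provider's post-market monitoring process could flag usage contexts that fall outside the declared intended purpose. The data structures (DeclaredPurpose, UsageLog) and the simple set-membership deviation rule are assumptions made for this example; the Act does not prescribe any particular technical implementation.

```python
# Illustrative only: a hypothetical post-market check that flags usage contexts
# falling outside a system's declared intended purpose. The field names and the
# deviation rule are assumptions; the Act does not prescribe this implementation.
from dataclasses import dataclass, field


@dataclass
class DeclaredPurpose:
    description: str
    permitted_contexts: set[str]    # contexts covered by the conformity assessment


@dataclass
class UsageLog:
    observed_contexts: list[str] = field(default_factory=list)


def flag_deviations(purpose: DeclaredPurpose, log: UsageLog) -> list[str]:
    """Return observed contexts that are not covered by the declared purpose."""
    return [c for c in log.observed_contexts if c not in purpose.permitted_contexts]


purpose = DeclaredPurpose(
    description="Assist clinicians in diagnosing cancer from medical scans",
    permitted_contexts={"hospital_radiology"},
)
log = UsageLog(observed_contexts=["hospital_radiology", "insurance_claim_triage"])

deviations = flag_deviations(purpose, log)
if deviations:
    print("Deviation detected, reassessment required:", deviations)  # -> ['insurance_claim_triage']
```

In practice, detecting deviations would require richer telemetry and human review, but the underlying principle is the same: observed use is checked against the documented intended purpose, and mismatches trigger reassessment.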
The liability and accountability mechanisms are designed to be proactive. Liability in AI systems is not merely retrospective (i.e., determined after harm occurs) but is embedded in AI design to prevent risk throughout the product lifecycle, from conception through deployment and operation. An example would be an AI-powered autonomous vehicle pre-programmed with real-time hazard detection to reduce the risk of accidents before deployment. This approach is reinforced by related legal frameworks, such as the Product Liability Directive (PLD) and sectoral AI safety regulations, whose provisions underline the need for clear accountability based on intended purpose. Providers and deployers may be held liable if an AI system causes harm because it did not adhere to its intended function. Conversely, users who repurpose AI systems beyond their defined scope may be classified as de facto providers and assume legal responsibility for the resulting harm.
Navigating Intended Use in AI: The Regulatory Challenge of General-Purpose Models
Several challenges arise in defining 'intended purpose' for general-purpose AI (GPAI) systems, which are designed for broad applications and are inherently adaptable. A major challenge is the lack of clear boundaries between providers and users. Where GPAI systems are repurposed, modified, or fine-tuned by third parties, who bears liability for unintended outcomes? If a system is retrained, modified, or adapted for new applications beyond its original purpose, should liability shift to the deployer? And where AI systems evolve autonomously, who ensures regulatory compliance? A second major challenge is the risk of unintended or adaptive behaviour. AI systems based on machine learning can self-improve or adapt in ways unforeseen by their programmers or developers, and where a system's behaviour deviates from its original intended purpose, the result can be safety breaches or regulatory non-compliance. Consider, for instance, an AI recruitment system trained specifically for corporate recruitment that is inadvertently used for government security clearances. Such a shift raises ethical and legal concerns and supports the recommendation of mandatory post-market monitoring that highlights deviations and triggers regulatory reassessment.
Regulatory Gaps and Recommendations
The EU AI Act leaves regulatory gaps in AI governance, including the lack of clear guidelines for adapting intended purpose, insufficient post-market oversight, undeveloped enforcement mechanisms, and failure to address the complexity of GPAI and open-source AI governance, where modifications occur beyond the original provider’s control.
Proposed solutions include, first, mandatory reporting: any significant change in an AI system's intended purpose would trigger regulatory notification and reassessment. Second, risk-based periodic post-market audits of all AI systems would facilitate ongoing compliance. Finally, establishing EU legal frameworks for general-purpose AI with specific, fit-for-purpose liability rules would clarify responsibilities among developers, deployers, and end-users. This approach seeks to enhance AI oversight by refining existing compliance guidelines and anticipating challenges of adaptability and liability. Strengthening it will help ensure the integrity, transparency, accountability, and ethical deployment of AI in a rapidly evolving digital landscape.
Words by: Obele Tom-George Akinniranye
PhD Researcher, Robotics Law
National University of Ireland, Maynooth
Law Tutor, Legal Technology 1 and 2, Technological University of the Shannon, Athlone

