
Pioneering Artificial Intelligence (AI): An Expansive Framework for Europe's Technological Renaissance

Introduction



In line with its digital strategy, the European Union (EU) is embarking on a journey to regulate artificial intelligence (AI) for the betterment of society. Acknowledging AI's transformative potential across sectors such as transportation, finance, health, and energy, the EU aims to strike a balance between fostering innovation and safeguarding societal well-being.

The European Parliament's top priority is to ensure that AI systems deployed in the EU adhere to stringent safety, transparency, and environmental standards. Human oversight is deemed essential to prevent detrimental outcomes, and the European Parliament seeks a technology-neutral, uniform definition of AI to guide future regulations.


In April 2021, the European Commission proposed the EU's first regulatory framework for AI, setting the stage for a comprehensive approach to AI governance.


On March 13, 2024, the European Parliament adopted (at first reading) the Artificial Intelligence Act (AI Act). The AI Act is widely regarded as the world's first comprehensive horizontal legal framework for AI and is expected to provide EU-wide rules on data quality, transparency, human oversight, and accountability.


Extraterritorial Scope of AI Act

The AI Act represents a significant step in EU regulation, extending its reach beyond EU borders to ensure the effective governance of artificial intelligence (AI) within the Union.

Unlike typical EU regulations, which primarily affect entities within the Union, the AI Act applies to all providers and users of AI systems, irrespective of their location, as long as their outputs are utilized within the EU.

Its extraterritorial scope underscores the EU's commitment to upholding its policies, objectives, and internal market integrity in the realm of AI. To enforce compliance from non-EU entities, the AI Act mandates that third-country providers of AI systems appoint an authorized representative within the Union, allowing European authorities to exercise supervisory powers over them.


How the AI Act Will Impact the Financial Sector

The AI Act, a broad-reaching legislation, currently offers a limited focus on AI tools within the financial sector. Explicit references within the AI Act relate primarily to credit scoring models and risk assessment tools in insurance.

AI systems used for credit evaluation or risk assessment in insurance, critical for individuals' financial access and well-being, are likely to be classified as high-risk due to potential life-altering consequences if improperly designed. However, the European Parliament suggests exempting AI systems detecting fraud in financial services from high-risk classification.

To avoid duplicating existing financial regulation, the AI Act allows financial institutions to satisfy certain of its requirements by complying with their existing sectoral standards. Nevertheless, institutions must still adhere to the AI Act when deploying high-risk AI systems such as credit scoring, and they should monitor developments closely as the list of high-risk AI systems evolves. With the rise of general-purpose AI in finance, institutions must also navigate adjacent regimes such as the Digital Operational Resilience Act (DORA) and consider how those obligations interact with the AI Act.

Supervisory authorities will integrate AI Act compliance checks into existing financial oversight practices, with the European Central Bank overseeing risk management for credit institutions. Additionally, the AI Act mandates the establishment of the European Artificial Intelligence Office, which is tasked with harmonizing AI Act implementation and advocating for the AI ecosystem's interests.


Risk-Based Approach & Bans

The EU's AI regulatory framework classifies AI systems according to the risk they pose to users. Systems presenting an unacceptable risk, such as cognitive behavioural manipulation and certain biometric identification practices, are banned outright. High-risk AI systems, which could compromise safety or fundamental rights, are subject to thorough assessment and oversight. Applications that are neither banned nor listed as high-risk are largely left unregulated.



The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.


Transparency and Accountability Measures 

Transparency lies at the core of Europe's AI regulation, ensuring users are aware of AI-generated content's nature and origin. Generative AI models, like ChatGPT, must meet transparency requirements and comply with copyright laws. High-impact AI models, such as GPT-4, undergo comprehensive evaluations, with incidents reported to the European Commission to ensure accountability and mitigate systemic risks.


Supporting Innovation and SMEs


Europe's regulatory framework aims to foster innovation, particularly among startups and small to medium-sized enterprises (SMEs). National authorities are tasked with providing conducive testing environments, enabling these entities to develop and train AI models effectively before market release.


Timeline for Compliance

The AI Act will, in general, apply 24 months after its entry into force. An exemption is expected for certain high-risk AI systems that are already placed on the market or put into service before that date.


The AI Act will apply to such systems only if, from that date, those systems are subject to substantial modifications in their design or intended purpose.

Penalties


The new law imposes substantial fines on those who breach its requirements: for the most serious infringements, fines can reach up to EUR 35 million or 7% of a company's total worldwide annual turnover for the preceding financial year, whichever is higher.


Conclusion 

The AI regulatory framework aims for a balanced approach that fosters innovation while safeguarding societal values and fundamental rights. By prioritising safety, transparency, and accountability, European legislators aim to cultivate an AI ecosystem that promotes responsible development and utilization of AI technologies. As regulations take effect and evolve over time, European legislators are poised to lead the global conversation on ethical AI governance, paving the way for a digitally progressive and socially responsible future.




The content provided in this article, as well as in all associated publications of the Cyprus Compliance Association and any of its authors, is for informational purposes only and is not intended as legal, financial, or professional advice. We make every effort to ensure the accuracy and reliability of the information provided but do not guarantee its correctness, completeness, or applicability to specific circumstances. We encourage readers to consult with qualified professionals before making any decision based on the information provided here. The Cyprus Compliance Association accepts no liability for any loss or damage, howsoever arising as a result of use or reliance on this information.
