
EU compliance: An opportunity to strengthen AI strategies

Ana Paula Assis of IBM casts an eye over the EU AI Act, outlining the critical steps for compliance and the advantages it will bring

The EU AI Act has ushered in a new age for artificial intelligence in Europe. As the world’s first comprehensive legal framework for AI, the Act has been developed to ensure the safety and transparency of AI systems across the EU. This new framework will be hugely beneficial for businesses, bringing certainty and clear guardrails to AI strategies for the first time. Creating a roadmap towards compliance should be a top priority for businesses in 2024.

A risk-based approach

Understanding the Act’s risk-based approach is the first stage in the journey to compliance. This approach, intended to promote safe AI usage while encouraging innovation, requires AI systems to be classified and regulated according to the risk they pose. Risk levels span from “unacceptable” practices, such as social scoring, to “high-risk” areas such as credit scoring, “limited-risk” uses such as chatbots, and “minimal-risk” applications such as spam filters. Generative AI will not be classified as high-risk by default, but its usage must comply with transparency requirements and EU copyright law, including disclosure of AI-generated content and summaries of copyrighted training data. The EU has adopted a phased implementation approach, with prohibitions taking effect six months after the Act enters into force and most provisions becoming applicable after two years.
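
To make the tiering concrete, the sketch below shows how a compliance team might label its own AI use cases against the Act’s four risk tiers. It is a minimal, hypothetical illustration: the tier names follow the Act, but the catalogue entries and the classify_use_case helper are assumptions for the example, not an official mapping or legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from highest to lowest."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # e.g. credit scoring
    LIMITED = "limited"             # transparency duties, e.g. chatbots
    MINIMAL = "minimal"             # e.g. spam filters

# Illustrative internal catalogue: the tier a compliance team believes
# applies to each use case (an assumption, not a legal determination).
USE_CASE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "credit scoring model": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def classify_use_case(name: str) -> RiskTier:
    """Look up a use case; default to HIGH so unknown systems get reviewed."""
    return USE_CASE_TIERS.get(name, RiskTier.HIGH)

if __name__ == "__main__":
    for use_case in USE_CASE_TIERS:
        print(f"{use_case}: {classify_use_case(use_case).value}")
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative choice for the sketch, so that anything not yet catalogued is flagged for review rather than silently ignored.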

With set timelines in place, organisations should start preparing to meet the standards as soon as possible. The first step is completing a comprehensive model inventory. A thorough inventory of all your business’s AI and ML applications will give you a clear view of which data and applications are subject to the Act.
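
As a rough illustration of what such an inventory might capture, the sketch below defines one record per AI or ML application. The field names (owner, training_data_sources, and so on) are assumptions about what a compliance team could usefully track, not fields mandated by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    """One entry in an internal AI/ML inventory (illustrative fields only)."""
    name: str                       # e.g. "loan-default-scoring-v3"
    business_purpose: str           # what the system is used for
    owner: str                      # accountable team or individual
    deployed_in_eu: bool            # whether the Act's scope is likely to apply
    training_data_sources: list[str] = field(default_factory=list)
    personal_data_used: bool = False

inventory = [
    AIModelRecord(
        name="loan-default-scoring-v3",
        business_purpose="credit scoring for retail lending",
        owner="Risk Analytics",
        deployed_in_eu=True,
        training_data_sources=["internal loan history"],
        personal_data_used=True,
    ),
]

# Flag the records most likely to need a detailed risk assessment.
for record in inventory:
    if record.deployed_in_eu and record.personal_data_used:
        print(f"Review required: {record.name} ({record.business_purpose})")
```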

The second step is undertaking a comprehensive risk assessment to ensure all relevant obligations are fulfilled. High-risk AI use cases, for example, come with seven essential requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. A complete risk assessment will also consider reputational and operational risks.
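
One way to keep such an assessment auditable is a simple per-system checklist against the seven requirements, sketched below. The requirement names follow the framework referenced above; the evidence dictionary and the gap-reporting logic are purely hypothetical illustrations of how a team might record its findings.

```python
# The seven essential requirements applying to high-risk AI systems.
REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

def assess(system_name: str, evidence: dict[str, bool]) -> list[str]:
    """Return the requirements for which no supporting evidence was recorded."""
    return [req for req in REQUIREMENTS if not evidence.get(req, False)]

# Hypothetical assessment of a credit-scoring model.
gaps = assess(
    "loan-default-scoring-v3",
    {
        "human agency and oversight": True,
        "technical robustness and safety": True,
        "privacy and data governance": False,
        "transparency": True,
    },
)
print("Outstanding requirements:", gaps)
```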

The third step is implementing technical standards. European standardisation organisations are developing new technical standards to facilitate this, as has the International Organisation for Standardisation (ISO), which recently released a new standard, ISO/IEC 42001, to be adopted by the EU as a framework for risk management systems.

Building a responsible, ethical ecosystem 

Organisations should view the compliance process as an opportunity to strengthen AI governance strategies. By building a framework for responsible, governed AI, businesses can confidently operate, manage risk and reputation, and build trust among employees and stakeholders. 

Establishing an AI ethics board is paramount in building this ecosystem. Compliance with the Act demands a certain level of ethical consideration, but companies must also define their ethical approach to AI and establish guidelines to direct implementation and future innovation. 

While there will still be challenges ahead – such as a steep learning curve for less regulated sectors or the emergence of diverging global frameworks – the EU AI Act signifies an important step for the future of responsible AI and one that promises to bring increased competitiveness and innovation to Europe and beyond. 

About The Author

Ana Paula Assis is Chair and General Manager, EMEA, IBM.   
