Ana Paula Assis of IBM casts an eye over the EU AI Act, outlining the critical steps for compliance and the advantages it will bring
The EU AI Act has ushered in a new age for artificial intelligence in Europe. As the world’s first comprehensive legal framework for AI, the Act is designed to ensure the safety and transparency of AI systems across the EU. The framework should prove hugely beneficial for businesses, bringing certainty and clear guardrails to AI strategies for the first time. Creating a roadmap towards compliance should be a top priority for businesses in 2024.
A risk-based approach
Understanding the Act’s risk-based approach is the first stage in the journey to compliance. This approach, intended to promote safe AI usage while encouraging innovation, requires AI systems to be classified and regulated according to the risk they pose. Risk levels span from “unacceptable” practices, such as social scoring, to “high-risk” areas such as credit scoring, “limited-risk” uses such as chatbots, and “minimal-risk” applications such as spam filters. Generative AI will not be classified as high-risk, but its use must comply with transparency requirements and EU copyright law, including disclosure of AI-generated content and summaries of copyrighted training data. The EU has adopted a phased implementation approach, with prohibitions taking effect six months after the Act enters into force and most provisions becoming applicable after two years.
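As a rough illustration of the tiering (a hypothetical sketch, not legal guidance; the enum and the example mapping below are assumptions for illustration), each AI system in a portfolio ultimately needs to be assigned to one of these four tiers:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. credit scoring; strictest obligations
    LIMITED = "limited"            # e.g. chatbots; transparency duties
    MINIMAL = "minimal"            # e.g. spam filters; largely unregulated

# Hypothetical mapping for illustration only -- real classification
# requires legal analysis of the Act and its annexes, not a lookup table.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}
```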
With set timelines in place, organisations should start preparing to meet the standards as soon as possible. The first step is completing a comprehensive model inventory: cataloguing all of the business’s AI and ML applications to gain a clear view of which systems and data fall within the scope of the Act.
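A minimal sketch of what one inventory entry might capture follows; the field names and the example system are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI/ML model inventory (illustrative fields only)."""
    name: str                        # e.g. "loan-default-scorer-v3"
    owner: str                       # accountable team or individual
    purpose: str                     # business use case in plain language
    data_sources: list[str] = field(default_factory=list)
    risk_tier: str = "unclassified"  # assigned later, during risk assessment

# Building the inventory is then a matter of enumerating every deployed
# or in-development system into a single register.
inventory = [
    AISystemRecord(
        name="loan-default-scorer-v3",
        owner="credit-risk-team",
        purpose="Scores retail loan applications for default risk",
        data_sources=["core-banking", "credit-bureau"],
        risk_tier="high",  # credit scoring is a high-risk area under the Act
    ),
]
```

Keeping the risk tier as an explicit field on each record means the inventory from step one flows directly into the risk assessment in step two.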
The second step is undertaking a comprehensive risk assessment to ensure all relevant obligations are fulfilled. For example, high-risk AI use cases come with seven essential requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. A complete risk assessment will also consider reputational and operational risks.
The third step is implementing technical standards. European standardisation organisations are developing new technical standards to facilitate this, as has the International Organization for Standardization (ISO), which recently published ISO/IEC 42001, a standard for AI management systems expected to be adopted by the EU as a framework for risk management.
Building a responsible, ethical ecosystem
Organisations should view the compliance process as an opportunity to strengthen their AI governance strategies. By building a framework for responsible, governed AI, businesses can operate with confidence, manage risk and reputation, and build trust among employees and stakeholders.
Establishing an AI ethics board is paramount in building this ecosystem. Compliance with the Act demands a certain level of ethical consideration, but companies must also define their ethical approach to AI and establish guidelines to direct implementation and future innovation.
While there will still be challenges ahead – such as a steep learning curve for less regulated sectors or the emergence of diverging global frameworks – the EU AI Act signifies an important step for the future of responsible AI and one that promises to bring increased competitiveness and innovation to Europe and beyond.
About The Author
Ana Paula Assis is Chair and General Manager, EMEA, IBM.