For data pros, rules, governance, and compliance aren’t new, but now that the dust is settling around GDPR, there’s something new and equally big on the horizon. In April 2021, the European Union announced its draft Artificial Intelligence Act (EU AIA) as part of its Europe Fit for the Digital Age strategy. The European Commission’s goal is to “turn Europe into the global hub for trustworthy artificial intelligence (AI)” and, in partnership with member states, develop the first-ever legal framework for AI.
On paper, it looks and sounds fantastic; think of it as a “Bill of Rights for AI”, a world where AI must be used and nurtured responsibly and safely, delivering only a trusted user experience. In practice, it’s going to require a lot of smart attention and dedicated effort from data experts. There’s a lot to wrap our heads around, and plenty of preparation for teams and systems if they are to avoid the hefty fines non-compliance could bring.
What does it mean?
Before we can delve into the impact of the proposed legislation, it’s helpful to understand what the Act defines as AI. At this early stage, the draft’s definition is pretty broad: it covers machine learning approaches; logic- and knowledge-based approaches, including expert systems; and statistical approaches such as Bayesian estimation and search and optimisation methods. It therefore stands to reason that all modelling performed by augmented analytics and data science will also fall under the Act.
In addition, not unlike GDPR, it won’t matter if your company sits outside the EU’s borders. If you have a link to the EU, whether that’s customers, suppliers, or staff in the EU, or even products made for the EU market, you will have to comply. Non-compliance will carry financial penalties: look closely at the draft and you’ll find fines of up to €30 million or 6% of your global annual turnover. For those companies looking to take a shortcut, don’t. The Act clearly states that providing the regulatory bodies with incorrect, incomplete, or misleading information attracts fines of up to €10 million or 2% of turnover.
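To make the scale of those penalties concrete, they can be sketched as a quick calculation. This is an illustrative sketch only, not legal advice: it assumes the applicable ceiling is the higher of the fixed amount and the turnover percentage, mirroring the fining structure familiar from GDPR.

```python
def max_fine_eur(annual_turnover_eur: float,
                 fixed_cap_eur: float = 30_000_000,
                 turnover_pct: float = 0.06) -> float:
    """Illustrative upper bound of a draft-AIA fine: the higher of the
    fixed cap and the given percentage of global annual turnover
    (assumption: 'whichever is higher', as under GDPR)."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# A company with €2 billion global turnover: 6% is €120 million,
# which exceeds the €30 million fixed cap.
print(max_fine_eur(2_000_000_000))  # 120000000.0

# The lower tier for misleading information: €10 million or 2%.
print(max_fine_eur(2_000_000_000, fixed_cap_eur=10_000_000,
                   turnover_pct=0.02))  # 40000000.0
```

Even at the lower tier, the exposure for a large enterprise dwarfs the fixed cap, which is why the turnover percentage, not the headline euro figure, is the number to watch.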
Why now is the time to implement regulation of AI
Oded Karev, general manager of NICE Advanced Process Automation at NICE, discusses why now is the right time to implement AI regulation.
Where and when will it start?
So, it would appear you have some time, but if GDPR readiness taught us anything, it’s that there is not enough time to get comfortable. Officially, the Act has been assigned to an EU Parliament committee, to be followed by a review by the Council of Ministers. Only once it has been passed will the two-year implementation period start.
Importantly, the proposed Act is not being put in place to vilify AI. Instead, it aims to position Europe as an AI centre of excellence. In fact, it comes at a time when the world is starting to identify and take note of the potential of AI in markets as diverse as health, transport, energy, agriculture, tourism, and even cyber security. The Act will, therefore, focus on what is deemed high-risk AI systems.
The systems likely to feel an immediate effect are those that form part of a safety system or operate in critical infrastructure, education and training, employment programmes, financial services, law enforcement, border control, and justice, as well as those covered by the EU’s single market harmonisation legislation.
What can I do now?
Critically, you should start changing how you think about AI now. If your business is undergoing any form of digital transformation or automation, the likelihood that AI is already in play is exceptionally high. An excellent place to start is establishing an AI risk mitigation plan and getting your executive leadership on board. AI is pervasive, and while your executive team may think they aren’t exactly Skynet and aren’t building robots to take over the world, they need to know how your organisation engages AI in its data and automation processes. “I was not aware” will not be looked upon with any compassion by the regulators.
Changing how you think about AI will also lead you to uncover where your business uses it. Go through all your new, old, and planned systems, and don’t stop at what you can see. Ask your suppliers what they use, look at your cloud services, see what AI they deploy on your behalf, and then document it all. Only when you know what those systems are will you be able to make quantifiable decisions about how they align with the EU’s regulations and classifications. Systems warranting closer scrutiny will include those using machine learning for pattern recognition, edge anomaly detection and root cause analysis, dynamic pricing, customer engagement, digital twins to improve yield, production surveillance and condition-based maintenance, and fraud and risk management.
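One way to keep that documentation exercise honest is to capture each discovered system in a structured inventory that can be queried for gaps. The sketch below is a minimal, hypothetical example; the record fields and risk tiers are illustrative assumptions for internal triage, not the Act’s official taxonomy.

```python
from dataclasses import dataclass, field

# Hypothetical tiers loosely modelled on the draft's risk-based structure;
# the real classification must come from the Act itself once finalised.
RISK_TIERS = ("minimal", "limited", "high", "prohibited")

@dataclass
class AISystemRecord:
    name: str
    owner: str                       # accountable business owner
    supplier: str                    # vendor name, or "in-house"
    purpose: str                     # what the system decides or predicts
    techniques: list = field(default_factory=list)
    risk_tier: str = "unclassified"  # assigned after the review

inventory = [
    AISystemRecord(
        name="Dynamic pricing engine",
        owner="Commercial",
        supplier="in-house",
        purpose="Sets product prices from demand signals",
        techniques=["machine learning"],
    ),
]

# Surface everything still awaiting a risk classification.
unreviewed = [s.name for s in inventory if s.risk_tier == "unclassified"]
print(unreviewed)  # ['Dynamic pricing engine']
```

The point of a structure like this is less the code than the discipline: every system gets a named owner, a stated purpose, and an explicit “unclassified” status until someone has actually reviewed it.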
Personally, I am all for creating a Decision Observer Team made up of different stakeholders, not just data scientists, which would then form part of an internal Artificial Intelligence Act steering committee. These observers can be assigned tasks across various business areas, with their findings brought together so that AI algorithmic risks are managed as a group.
How does the Microsoft Office of Responsible AI ensure compliance?
Natasha Crampton, head of the Office of Responsible AI at Microsoft, spoke at Microsoft’s Data Science and Law Forum about what her department is doing to ensure AI compliance.
Do it now
I can’t stress enough how important it is to get on top of this process now. One significant difference between Personally Identifiable Information (the subject of GDPR) and AI is that AI isn’t always visible. It’s often woven so deeply into the fabric of a system that business users aren’t even aware of it; they simply love the visual analysis the system serves them monthly, or the fact that they know their customers prefer bottled water to sweetened fizzy drinks.
Lastly, stay informed and stay aware. The EU is curating a list for updates on the AIA, and when things start to move, they will move quickly.