Artificial Intelligence (AI) is a fascinating tool in the modern world. It can suggest products based on a person’s search history. It can recognize faces to unlock a device. It can help recruiters choose the best candidate to fill a position. It can break down datasets in a meaningful way for a case or investigation. And much more.
While artificial intelligence has revolutionized many aspects of work and personal life, many have expressed concern about inherent bias. Humans must train AI tools before deployment, which creates vulnerability to bias and discrimination. If biased outputs go unexamined and unchallenged, the technology's decisions become difficult to explain, opening the door to reputational damage and legal liability.
While there has been patchwork regulation in countries like the United States and China, there are no comprehensive AI laws on the books. The EU has taken a groundbreaking step with the Artificial Intelligence Act, which is currently in the middle of the negotiation process. Introduced in April 2021, it has been working its way through the legislative process for the past two years.
Right now, the European Parliament is expected to vote in the spring, with final approval anticipated later this year. The UK has also released a position paper on AI that takes a different approach. It is crucial to understand what these laws would change and to start preparing for compliance, as they will set the stage for other countries to follow suit.
The EU Artificial Intelligence Act
The goal of the EU’s AI Act is to promote transparent and ethical use of AI while safeguarding data. While the bill is not yet in final form, it is nearing the end of the legislative process. Here are its main features:
- The definition of AI covers all software developed with machine learning, logic-based, knowledge-based and/or statistical approaches. Organizations developing or using such software in the EU would be subject to the regulation.
- AI tools would fall into four risk categories: unacceptable, high, limited and minimal. Unacceptable systems, such as social scoring used in public spaces, would be prohibited outright. The regulation mainly focuses on systems in the high-risk category, which includes AI used for employment, law enforcement, education, biometric identification and more.
- AI vendors carry the heaviest burden. Key obligations would include a conformity assessment before placing a system on the market; a risk management system to identify biases across design, development and implementation, spanning the entire usage lifecycle; cybersecurity requirements; record keeping; human oversight at every step; quality management; robust AI governance frameworks; and public registration.
- The term AI users would cover individuals and organizations that use AI systems under their own authority in dealings with end users; recruiting agencies are a prime example. Their responsibilities would include using relevant input data, monitoring, logging, data protection impact assessments and robust AI governance frameworks.
- Penalties are steep: currently up to 30 million euros or six percent of the offending organization’s global annual revenue, whichever is higher.
As the bill continues to move through the process, it is important to watch for changes or additions. Lawmakers have expressed concern about how it will regulate biometrics and whether it builds in enough flexibility to keep pace with the dynamic nature of AI.
Proposed British approach
The UK policy paper on AI governance and regulation came out last summer. It, too, commits to promoting transparency, safety and fairness. However, it deviates from the EU regulation in many respects, focusing more on innovation through a sectoral approach.
Put simply, while there would still be common standards to follow, each regulator would govern the use of AI in its own sector. This approach is designed to avoid over-regulation and to account for the different risks across industries. UK regulation would be technology-agnostic, focusing on outcomes and on whether systems are adaptive and autonomous, as these types of AI are less predictable and carry more inherent risk.
Whilst this would give UK regulators flexibility, there would be key principles to follow when regulating an organisation’s use of AI:
- Ensure that AI is used safely
- Ensure that AI systems are technically secure and function as intended
- Make AI appropriately transparent and explainable
- Embed considerations of fairness into AI systems
- Designate a legal entity responsible for AI governance
- Create clear protocols for remediation and contestability
A white paper further detailing this topic was due out in late 2022, but it has yet to appear. When released, it will provide more insight into whether the UK will move forward with official regulation and offer a better sense of the timeline.
AI will continue to integrate into society in a multitude of ways as technology advances. Regulation in this space will help alleviate fears of bias, protect data, encourage innovation and make automated decision-making explainable. But will this type of regulation go global, as privacy regulation did after the passage of the GDPR? Will the EU set the global standard, or will there be more movement in the UK or in countries like China that have already tested the waters on a smaller scale? Only time will tell. For now, monitoring legislative developments and kicking off compliance initiatives is the best way to prepare.