Advancements in AI technology are currently accelerating at breakneck speed. Many features that used to be the stuff of science fiction movies and novels (smart homes, natural language processing tools that produce human-like results, augmented analytics, AI algorithms that can beat professional chess and poker players) have become an intrinsic part of our everyday reality.
The lightning-fast adoption of generative AI tools and their potential to shake the foundations of society has triggered a wave of urgent calls for guidelines and regulations. AI experts, industry leaders and governments alike are advocating for some degree of regulation to manage the impact of AI on education, work and society as a whole.
The European AI Act is the European Union's effort to regulate and govern the development and implementation of new AI technology, especially AI applications that pose a potential risk to data safety or privacy protection. What exactly is the European AI Act? What does it try to achieve and solve? When will it be ready? And why is this piece of legislation important to you and your organization? Read on and get the answers to these pressing questions!
What is the European AI Act?
The Artificial Intelligence (AI) Act is a proposed European law on artificial intelligence and the first of its kind. The law assigns AI-powered applications to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. The second category consists of high-risk applications, such as CV-scanning tools that rank job applicants. These applications are subject to specific legal requirements. Applications that are not explicitly banned or listed as high-risk are largely left unregulated and form the third category.
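The three-tier logic described above can be sketched as a toy lookup, purely as an illustration. The tier names and example entries below mirror the examples in this article; the sets and labels are hypothetical, not an official or legal taxonomy:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"                 # e.g. government-run social scoring
    HIGH = "subject to legal requirements"  # e.g. CV-scanning tools that rank applicants
    MINIMAL = "largely unregulated"         # everything not banned or listed as high-risk

# Hypothetical example entries, for illustration only
BANNED = {"social_scoring"}
HIGH_RISK = {"cv_screening"}

def classify(application: str) -> RiskTier:
    """Map an application label to its tier under the Act's risk-based scheme."""
    if application in BANNED:
        return RiskTier.UNACCEPTABLE
    if application in HIGH_RISK:
        return RiskTier.HIGH
    return RiskTier.MINIMAL
```

The point of the sketch is the fall-through: anything not explicitly banned or listed as high-risk lands in the largely unregulated third category by default.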
Why is it important?
Regulation such as the European AI Act is important for a number of reasons. Because the rules follow a risk-based approach, it becomes easier to identify AI systems that pose an unacceptable level of risk to people’s safety, as well as AI applications that purposefully use manipulative techniques, exploit people’s vulnerabilities or are used for social scoring.
The European AI Act also has the potential to determine obligations for providers of foundation models. This means that developers of AI systems have to guarantee robust protection of fundamental rights, health and safety, the environment, democracy, and rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database. The new law also promotes regulatory sandboxes (controlled environments) established by public authorities to test AI solutions before their actual deployment.
All in all, the risk-management and transparency rules in the European AI Act provide some promising legislative building blocks for the ethical and human-centric development of AI systems in Europe. This decreases the risk of intrusive and discriminatory use of advanced AI solutions in the near and more distant future.
Why should it matter to you?
AI already affects many parts of your business and personal life. Predicting what content your target audience and customers want to see? Capturing and analysing data for commercial purposes or customer service optimization? Establishing and documenting medical and scientific breakthroughs? Optimizing and personalizing customer journeys?
In this day and age, good AI tools are a necessity if you want to achieve one or more of these goals. Like the EU’s General Data Protection Regulation (GDPR), which came into force in 2018, the EU AI Act could become a global standard, determining to what extent AI has a positive rather than negative effect on your business or life, regardless of the country you live in or conduct business from.
When will it be ready?
Although the EU AI Act harbors tremendous potential for the responsible use of present-day and future AI technology, the document is still a draft that has yet to be approved by EU politicians and lawmakers. Before negotiations with the European Council on the final form of the law can begin, the draft negotiating mandate needs to be endorsed by the whole European Parliament first.
The aim is to reach an agreement by the end of 2023. However, the Act may not come into force until 2026. Revisions are likely, especially given how rapidly AI has advanced in recent years. The legislation has already gone through several updates since drafting began in 2021, and the law is likely to expand as AI technology develops and moves into new practical realms.
Will the EU AI Act actually do anything?
Several tech followers and AI experts, especially those with a rather pessimistic outlook on AI, seriously doubt whether the EU AI Act will make a real difference and live up to its promise. They point out that innovation often moves at a faster pace than lawmaking.
There are also still several loopholes and exceptions. For example, facial recognition by the police is banned unless the images are captured with a delay or the technology is being used to find missing children. In addition, the law is quite inflexible: if in two years’ time a dangerous AI application is used in an unforeseen sector, the law provides no mechanism to label it as “high-risk”.
Several renowned institutions and think tanks, such as the Future of Life Institute, Leverhulme Centre for the Future of Intelligence, Centre for the Study of Existential Risk, Access Now Europe, and the Future Society, have already weighed in on the EU AI Act and provided constructive and thought-provoking ideas that have the potential to improve the law: ensuring that governance remains responsive to technological trends, stronger measures to reduce all associated risks, and considering the impact of applications on society at large and not just the individual.
Conclusion: the potential is there
The EU AI Act definitely has the potential to lead the way in making AI human-centric, trustworthy and safe, and to set out a European approach to dealing with the extraordinary changes that are already happening. However, creating full democratic oversight and a mature and flexible system of AI governance and enforcement will probably require a number of extra checks and balances and a high level of adaptability.