Sweeping new regulation may unnecessarily restrict AI development
On 21 April 2021, the European Commission published a proposal for a new legal framework in the form of a Regulation on Artificial Intelligence (“AI Regulation”) to govern the development, placing on the market and use of artificial intelligence (AI) systems within the European Union.
The twin objectives are (1) to facilitate innovation and investment in AI in the EU, while (2) making the EU a place where AI is safe and trustworthy for individuals.
The extensive 108-page AI Regulation intends to strengthen Europe’s potential to compete globally, but it would also subject organisations developing or using AI technologies to restrictions of a scope never seen before.
AI systems, which use machines trained to perform jobs and make decisions on their own by studying huge volumes of data, are seen as one of the world’s most transformative technologies, bringing major gains in productivity. The proposed rules would limit the usage of AI in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrolment selections and the scoring of exams. It would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten above all people’s safety and fundamental rights.
If adopted, the regulation would have far-reaching implications for tech firms like Amazon, Google, Facebook and Microsoft, which have injected resources into developing these technologies, but it would also impact nearly all industries and service providers, e.g. pharmaceutical companies using software to develop medicine and banks assessing the creditworthiness of their clients.
Existing safeguards already offer significant protection
The European Commission is keen to protect fundamental rights and intends to forbid algorithms that can be misused to track or profile people.
But does the EU really need further regulation to safeguard these freedoms?
The European approach is that AI must be trustworthy and secure. The catch is that the EU already has strict regulation touching AI. The European Commission justifies the AI Regulation with negative scenarios from communist China (such as the tracking of individuals, social scoring or the persecution of the Uighurs), which could hardly occur in an EU protected by the GDPR, the EU Charter of Fundamental Rights and other such frameworks. Other cited cases of possible AI abuse are the racial discrimination against African-American tenants in the US (the Atlantic Plaza Towers case), which stirred intense controversy in New York two years ago, and alleged discrimination against particular groups at work, both of which would already be stopped in the EU under existing anti-discrimination law.
“Hence many experts and businesses are of the view that the new AI Regulation only duplicates what is already regulated in the EU and brings nothing new but further unnecessary bureaucracy and obstacles to businesses.”
Legal definitions and scope of application
The AI Regulation defines “AI systems” in Article 3 as “software that is developed with machine learning, logic- and knowledge-based, or statistical approaches” that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.
The main provisions of the AI Regulation are:
- Binding rules for AI systems, applicable to those subject to the new obligations;
- A list of prohibited AI systems;
- Extensive compliance obligations for high-risk AI systems.
From a territorial perspective, the AI Regulation would apply to all providers placing AI systems on the market or putting them into service in the EU, to all users of AI systems established within the EU, and to providers and users of AI systems whose output is used in the EU, irrespective of whether they are established within the EU.
From permitted to unacceptable AI
The rulebook aims at promoting “sustainable, secure, inclusive and human-centric AI through proportionate and flexible rules”.
To this end, the European Commission has designed a pyramidal scheme that splits AI systems into four categories according to their potential risk:
- Minimal risk: the new rules will not apply to these AI systems at all, as they represent only minimal or no risk to citizens’ rights or safety. Companies and users will be free to use them. Examples include spam filters and video games. The European Commission believes that most AI applications within the EU will fall into this category.
- Limited risk: these AI systems will be subject to specific transparency obligations so that users can make informed decisions, are aware that they are interacting with a machine, and can easily switch it off. Examples include chatbots (chat assistants).
- High risk: given their potentially harmful implications for people’s personal interests, these AI systems will be “carefully assessed before being put on the market and throughout their lifecycle”. The Commission expects high-risk systems to be found in a variety of fields, such as transport, education, employment, law enforcement, migration and healthcare. Examples include facial recognition, surgical robots and applications that sort CVs from job candidates.
- Unacceptable: the Commission will fully ban AI systems that represent “a clear threat to the safety, livelihoods and rights of people”. Examples include social scoring by governments (such as China’s credit system), the exploitation of the vulnerabilities of children and the use of subliminal techniques (beyond a person’s conscious awareness).
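The four-tier scheme above is essentially a lookup from an AI system's use case to a risk category. As a minimal illustrative sketch (the tier names follow the article; the mapping and function are our own construction, not terminology from the Regulation's text):

```python
# Illustrative mapping of the proposal's four risk tiers to the example
# systems named in the article. This is a sketch of the classification
# logic, not an official taxonomy.
RISK_TIERS = {
    "minimal": ["spam filter", "video game"],
    "limited": ["chatbot"],
    "high": ["facial recognition", "surgical robot", "CV-sorting software"],
    "unacceptable": ["government social scoring", "subliminal techniques"],
}

def risk_tier(system: str) -> str:
    """Return the tier an example system falls into ('unknown' if unlisted)."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "unknown"

print(risk_tier("chatbot"))  # limited
```

The practical consequence of the tier is the compliance burden: none for "minimal", transparency duties for "limited", conformity assessment for "high", and an outright ban for "unacceptable".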
New EU board created, national sandboxes recommended
The AI Regulation suggests establishing a European Artificial Intelligence Board composed of representatives from the EU Member States and the Commission in order to facilitate implementation of the regulation. The Board would advise and assist the European Commission.
At the national level, EU Member States would designate competent authorities to take all measures necessary to ensure that the rules are duly implemented.
Non-compliance with the proposed rules could be sanctioned with penalties of EUR 10 to 30 million or 2 to 6% of a company’s total worldwide annual turnover, whichever is higher, depending on the severity of the infringement.
Member States will be encouraged to launch AI regulatory sandboxes to promote the safe testing and adoption of AI systems under the direct guidance and supervision of national competent authorities, with preferential treatment for SMEs and start-ups to support innovators with fewer resources. Competent authorities should also provide tailored guidance to SMEs and start-ups to ensure the regulation does not stifle innovation.
EU legislative battles inevitable
If the AI Regulation survives the legislative procedure, anyone with an idea for a new AI application will have to undergo complex legal research, or rather hire a lawyer, to make sure that the intended activity complies with the full set of obligations and does not attract harsh sanctions.
“Artificial intelligence is to become a hugely regulated industry which could force smaller startups or spinoffs, especially from smaller countries, to try their luck outside of the EU.”
AI systems operating in high-risk areas – such as national infrastructure, education, employment, finance and law enforcement – would face a series of hurdles before they could be put into use.
For example, CV-sorting software in recruitment or credit-scoring systems for bank loans would have to: (i) prove their accuracy and fairness; (ii) keep records of all their activity; and (iii) have “appropriate human oversight”.
It is yet to be seen whether, when and in which form the AI Regulation will be enacted and whether it will set clear guidelines to make the EU a global hub for trustworthy artificial intelligence. It is expected that the EU legislative process will prove controversial, touching off a battle lasting at least until 2022.
Private sector concerned about outflow of investment and talent
While praising the risk-based approach of the regulation, DIGITAL EUROPE, the trade association of digital companies in Europe, warned that “the inclusion of AI software into the EU’s product compliance framework could lead to an excessive burden for many providers”.
“After reading this Regulation, it is still an open question whether future start-up founders in ‘high risk’ areas will decide to launch their business in Europe,” wrote the association’s Director-General Cecilia Bonefeld-Dahl. A lack of investment in AI in the EU is a major factor why the EU is losing the AI race to the U.S. and China, according to DIGITAL EUROPE. There are currently about 446 million people living in the EU and 331 million people living in the U.S. But in the EU, $2 billion was invested in AI in 2020, while in the U.S., $23.6 billion was invested.
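The investment gap the association cites is even starker per capita. A back-of-the-envelope calculation using only the figures quoted above:

```python
# 2020 AI investment (USD) and population, as quoted in the article.
eu_invest, eu_pop = 2.0e9, 446e6   # EU: $2 billion, 446 million people
us_invest, us_pop = 23.6e9, 331e6  # US: $23.6 billion, 331 million people

print(f"EU: ${eu_invest / eu_pop:.2f} per capita")  # ≈ $4.48
print(f"US: ${us_invest / us_pop:.2f} per capita")  # ≈ $71.30
```

On these numbers, US AI investment per inhabitant runs roughly fifteen times higher than the EU's.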
Mr Jan Klesla, the project leader of the European Centre of Excellence, says: “The new AI regulation can cause immense legal uncertainty, a brain drain and an outflow of young entrepreneurs from the EU, and would be the best gift for Boris Johnson after Brexit.”
“In addition to self-regulation by means of voluntary codes, the right way would be to adjust the individual regulations which we already have, for example, to resolve the question of who is responsible when a Tesla on autopilot crashes into you, and which insurance will reimburse the costs.”
“These are the real problems of today, not theoretical threats to freedoms about which we do not even think in the EU”, stated Klesla.