The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.
Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.
But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks to existing rules and sector/app-specific risk assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.
The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places, to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.
“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”
However the text raises immediate concerns about imposing even a time-limited ban, which is described as “a far-reaching measure that might hamper the development and uptake of this technology”, and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).
The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.
These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.
The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed, though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.
Per the draft white paper, the Commission says its preference for regulating AI is option three combined with options four and five: i.e. mandatory risk-based requirements on developers (of whatever sub-set of AI apps is deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.
Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses, one that likely won’t stretch to even a temporary ban on facial recognition technology.
Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.
“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”
EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year, telling the EU parliament then that he “won’t be the voice of regulating AI”.
For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.
Albeit the regulation only defines limited rights and restrictions over automated processing, in circumstances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.
If it’s the Commission’s intention to also rely on the GDPR to regulate higher-risk uses, such as, for example, police forces’ use of facial recognition tech, instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.
“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.
“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any form of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”
An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year, including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.
But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating, including by supercharging existing rights-eroding business models.
In a paper last year Veale dubbed the advisory body’s work a “missed opportunity”, writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.