Beyond Europe’s AI Strategy: Global Governance for the Fourth Industrial Revolution
Carolina Polito*
On 19 February 2020, the European community welcomed the publication of three new documents that will drive the European Digital Agenda during the five years of the new von der Leyen Commission. The documents are the European data strategy, the White Paper on Artificial Intelligence and the Report on the Safety and Liability Implications of AI, the Internet of Things and Robotics.[1] Together, these documents offer a comprehensive overview of European priorities for the Fourth Industrial Revolution.
The main objective underpinning the European data strategy, informed by the conviction that the value of data lies in its pooling and storage, is the creation of a single European data space in which information flows freely and safely. To accomplish this objective, the EU will establish mechanisms to improve how data is shared, including common contractual obligations on how data is presented, so as to make it accessible across member states. Additionally, the EU plans to allocate 2 billion euro in annual investments as an enabler for its overall data strategy.[2]
Such a strategy will be crucial for the establishment of a European artificial intelligence (AI) ecosystem, given that advances in the development of AI technologies are proportional to the ability to collect the data that feeds them. According to the White Paper on AI, this ecosystem will be grounded in two principles: excellence and trust.[3]
With respect to the former, the EU will allocate new funding that, combined with private resources, is expected to reach 20 billion euro per year. Moreover, the EU is planning to create a network of centres of excellence, to improve the EU’s digital infrastructure, and to develop mechanisms that allow small and medium-sized enterprises (SMEs) to rethink their business models so as to incorporate AI.
With respect to trust, the EU will define, based on the recommendations of the High Level Expert Group (HLEG) on AI, the fundamental requirements for AI implementations. Developers will be required to be able to trace their data back so as to prove its integrity and completeness. Finally, the creation of a common labelling framework for ex-ante assessments of the trustworthiness of AI systems would serve as a basis for establishing trust among member states.
Enhancing the AI sector is particularly relevant for Europe given the current international context characterised by a race for AI technologies. Emblematic, in this respect, is a renowned quote by Russian President Vladimir Putin: “whoever becomes the leader in this sphere will become the ruler of the world”.[4]
In the race for achieving a global innovation advantage in AI, “the United States currently leads […], with China rapidly catching up, and the European Union behind both”.[5] While the EU does have the capabilities to compete with its peers in terms of research and talent, its inability to retain skilled expertise in the sector, combined with smaller investments in venture capital and private equity funding and more limited access to data, contributes to its lagging behind the US and China.
In this context, the EU is not only less able to enjoy the benefits of AI adoption but, most notably, less able to contribute to global AI governance.[6] Since the latter is a priority for the new European Commission, establishing a strategy to boost a European AI ecosystem is crucial.
The role the EU aims to play in AI governance could be particularly noteworthy considering the infancy of the field. The first international effort to govern the implementation of this technology, other than the ethical framework published by the EU HLEG, has been the publication of the OECD Principles on Artificial Intelligence, also endorsed by the G20 Ministerial Meeting on Trade and Digital Economy.[7] While such an effort represents an important first step in AI governance, most countries seem to be struggling with how to approach the issue domestically. Hence, both their intention and their ability to further international dialogue currently boil down to declarations of intent.
The “wide variation in risk-appetite” among different countries contributes, according to Miailhe, to hampering states’ ability to collaborate in establishing a common framework for AI governance. While some regions, such as the EU, are more inclined to regulate the implementation of AI systems by taking into account privacy, fair treatment and security, others prioritise innovation with little attention to the risks.[8]
Failure to rapidly establish a common framework for AI governance, however, could have a disruptive long-term impact. Against the backdrop of an international AI race, states and companies could neglect to implement adequate safety precautions. Given that the established market model is characterised by strong network and scale effects, first-mover gains in adopting AI technologies are particularly strong. Winning the AI race is therefore expected to provide tremendous power and wealth to the country gaining this advantage over its competitors.
In this context, states could be incentivised to pursue those gains while sidestepping other societal concerns, security among them. The risk is that as the perceived benefits increase, so too does the corresponding incentive to cut corners on safety considerations.[9] Those suffering the most from this race-to-the-bottom dynamic are not only the least developed countries, but also SMEs and start-ups in the most developed ones that currently lack the financial and legal capabilities to perform ex-ante evaluations of product safety and robustness.
These risks are particularly pressing if one considers the degree of vulnerability that characterises AI systems. AI systems are subject to various types of adversarial attacks, among them data poisoning, tampering with the categorisation model, or backdoors. Moreover, AI attacks fundamentally differ from traditional cyberattacks. Cybersecurity vulnerabilities are the result of human mistakes in writing code and, as such, can be found and patched.
AI attacks, instead, do not leverage bugs in the programmes but result from inherent limitations in the AI systems themselves, and are thus much more difficult to contain.[10] As artificial intelligence systems are further integrated into critical components of society, more effort will be required to mitigate these security risks.
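To make this distinction concrete, the minimal sketch below, written for illustration and not drawn from the cited sources, simulates a simple data-poisoning attack: flipping a fraction of training labels degrades a classifier even though its code contains no bug that could be patched. The dataset, model and scikit-learn library are illustrative assumptions only.

```python
# Toy illustration of data poisoning: the classifier's code is unchanged,
# yet corrupted training labels degrade its behaviour.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical toy dataset standing in for any real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Train on data in which a given fraction of labels has been flipped."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the "attack": flip selected labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned fraction {frac:.2f} -> test accuracy {accuracy_with_poisoning(frac):.2f}")
```

Running the sketch typically shows test accuracy falling as the poisoned fraction grows, which illustrates why such attacks cannot simply be found and patched like a conventional software flaw.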
Against this background, the EU should leverage the current vacuum in global AI governance and pave the way towards establishing a minimum degree of safety and security for AI products. Specifically, the EU could condition access to its market on the implementation of such minimum standards. In this respect, the implementation of the General Data Protection Regulation (GDPR) is generally regarded as a positive example.
According to Anu Bradford, given that digital services are often indivisible, tech companies have considered it more profitable to adopt the GDPR as a global standard rather than differentiate their services for markets outside the EU.[11] In the same way, by leading by example, the EU could help shape the international debate on how to govern the implementation of AI systems.
Whether the EU will manage to acquire the desired international relevance in this niche will also depend on the degree of salience the issue gains internationally. The GDPR was implemented in an international context where public scrutiny of privacy concerns was exceptionally high because of scandals such as those revealed by whistle-blower Edward Snowden. This incentivised both countries and companies to take appropriate measures.
It remains to be seen whether countries and companies move towards prioritising the security and robustness of AI systems independently, or if a major cyber-attack on these systems will be needed to jolt them into action. Undoubtedly, the EU should lead the way in this effort, promoting multilateral AI governance and establishing new rules and regulations that prioritise security and safety over quick returns. Political and financial momentum towards these objectives needs to be established now, before the next crisis hits.
* Carolina Polito collaborates with the Tech-IR Programme of the Istituto Affari Internazionali (IAI).
[1] European Commission, A European Strategy for Data (COM/2020/66), 19 February 2020, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0066; White Paper on Artificial Intelligence - A European Approach to Excellence and Trust (COM/2020/65), 19 February 2020, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0065; Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics (COM/2020/64), 19 February 2020, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0064.
[2] European Commission, A European Strategy for Data, cit.
[3] European Commission, White Paper on Artificial Intelligence, cit.
[4] “‘Whoever Leads in AI Will Rule the World’: Putin to Russian Children on Knowledge Day”, in RT News, 1 September 2017, https://www.rt.com/news/401731-ai-rule-world-putin.
[5] Daniel Castro, Michael McLaughlin and Eline Chivot, “Who Is Winning the AI Race: China, the EU or the United States?”, in Center for Data Innovation Reports, August 2019, p. 2, https://www.datainnovation.org/?p=11345. The report examines six categories of metrics: data, adoption, talent, research, development, hardware.
[6] Ibid., p. 3.
[7] Japan Ministry of Economy, Trade and Industry, G20 Ibaraki-Tsukuba Ministerial Meeting on Trade and Digital Economy Held, 10 June 2019, https://www.meti.go.jp/english/press/2019/0610_003.html.
[8] Nicolas Miailhe, “AI & Global Governance: Why We Need an Intergovernmental Panel for Artificial Intelligence”, in AI & Global Governance insights, 20 December 2018, https://cpr.unu.edu/ai-global-governance-why-we-need-an-intergovernmental-panel-for-artificial-intelligence.html.
[9] Stephen Cave and Seán S. ÓhÉigeartaigh, “An AI Race for Strategic Advantage: Rhetoric and Risks”, in AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, December 2018, p. 36-40, https://doi.org/10.1145/3278721.3278780.
[10] Marcus Comiter, “Attacking Artificial Intelligence. AI’s Security Vulnerability and What Policymakers Can Do About It”, in Belfer Center Papers, August 2019, https://www.belfercenter.org/node/122046.
[11] Anu Bradford, The Brussels Effect. How the European Union Rules the World, New York, Oxford University Press, 2020, p. 142-143.
Published with the support of the Policy Planning Unit of the Italian Ministry of Foreign Affairs and International Cooperation pursuant to art. 23-bis of Presidential Decree 18/1967. The views expressed in this report are solely those of the author and do not necessarily reflect the views of the Italian Ministry of Foreign Affairs and International Cooperation.
Rome, IAI, March 2020, 4 p. (Issue 20|12)