The continuing technology revolution: innovation, rules and competition

From Aspen Italia's Transatlantic Conference

The balance between freedom of research and regulation is delicate. Freedom of research is vital for fostering the creativity and exploration on which technological breakthroughs depend. Unchecked freedom, however, can also lead to unintended consequences, including ethical breaches and harmful applications of technology. Responsible regulation, cross-sector cooperation and public-private partnerships are therefore essential to safeguard safety, privacy and ethical standards.


How to spur responsible innovation: regulation and freedom of research

Although regulation and bureaucracy are sometimes seen as obstacles to innovation, there are numerous examples of responsible innovation spurred by well-balanced rules.

In the field of medical technology, for instance, both the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have established rigorous processes for the evaluation and approval of new technologies, ensuring both safety and efficacy. These processes have enabled the development and adoption of innovative technologies while protecting public health.

Such positive outcomes emerged especially during the Covid-19 pandemic, when special research and experimentation regulations were applied. These regulations, together with substantial public and private investment, made it possible to bring new and innovative vaccines to citizens in less than a year.

This major achievement rests on sustained public funding of pure science and of basic research in science and biotechnology[1]. Without these drivers, and without international cooperation such as the sharing of the virus genome, it would have been far harder to achieve the rapid and remarkable success of the mRNA vaccines[2] (or subsequent advances such as, most recently, the creation of synthetic human embryos)[3].

Read also: In favor of pure science

In recent years, however, even as OECD countries increased their R&D investment from 2.1% of GDP in 2000 to 2.7% in 2022[4], problems have arisen from the prohibitive costs and restrictions surrounding scientific papers, which face significant accessibility barriers.

To address this issue, the Council of the EU recently asked the Commission and the Member States to support policies steering the scholarly publishing model towards a not-for-profit, open-access, multi-format landscape, with no or limited costs for authors and readers[5].


Developing common digital rules for a transnational domain

The US and EU, as key players in the digital economy, have a shared interest in, and responsibility for, shaping common digital rules that can foster cooperation, promote fair competition and protect users.

In general, the US and the EU take different approaches to digital regulation. The US has traditionally favored a laissez-faire approach, with an emphasis on self-regulation and market forces. The EU, on the other hand, has adopted a more interventionist approach, most recently with regulations such as the General Data Protection Regulation (GDPR), the Digital Markets Act, the Digital Services Act and now the AI Act.


Read also: Cybersecurity: between legislative challenges and the promotion of digital hygiene


While these divergent approaches reflect the distinct socio-political contexts of the two sides of the Atlantic, they may also create friction, leading to political and commercial uncertainty for both citizens and businesses.

Given these divergences, harmonization, cooperation and the development of common rules for digital markets should be at the center of transatlantic diplomacy.

After two years of activity, the EU-US Trade and Technology Council has addressed several key technology dossiers: from the TTC Joint Roadmap for Trustworthy AI and Risk Management[6] to secure global digital connectivity, and from building resilient semiconductor supply chains to the development and implementation of sustainable electro-mobility.


Read also: The potential role of the US-EU Trade and Technology Council in a rapidly changing global economic order


Although it is only just beginning, this path of high-level political collaboration can be an important instrument for avoiding international disagreements and bringing about a much-needed convergence, while preserving the sovereignty and strategic autonomy of both sides of the Atlantic.

The development of common digital rules is a complex and delicate task, requiring careful negotiation and political compromise. However, if the US and EU are able to successfully navigate these challenges, they can aim to set a shared global standard for digital governance through spillover effects, promoting a free, open, and safe digital environment for all.


Managing the hi-tech competition with China: a delicate balance

In recent years, the relations between the EU, US, and China have been shaped by significant geopolitical and technological shifts. This is reflected in the evolving dynamics and policy approaches of these powers towards critical areas such as trade and artificial intelligence.

Brussels considers China a partner in some fields, an economic competitor in others and a strategic rival overall, and it now plans to recalibrate its China policy accordingly. The EU has also recognized that coordination with the US is essential.

Thus, in language now shared on both sides of the Atlantic, the focus can be on “de-risking” and “coopetition” rather than on decoupling[7].

The US and EU have expressed deep concern about foreign information manipulation, interference, and disinformation. The amplification of Russian disinformation narratives by China, particularly regarding Russia’s invasion of Ukraine, serves as a stark reminder of the challenges posed by this information war.

At the same time, the most striking field of competition will be industrial innovation and technological leadership. To mention just one recent example, within a few months of the breakthrough release of new generative AI technology, Alibaba launched its own AI assistant/chatbot based on its proprietary large language model[8].

Considering the tensions around Taiwan, a crucial hub for global semiconductor production and consequently for the development of all the latest technologies, including AI, Western countries must pursue a balanced policy towards China.


The extraordinary challenge of non-human intelligence: is AI a unique case?

AI systems are already used in a wide range of technologies and applications, and have recently risen to global attention with the development of large language models.

AI can contribute profoundly to fundamental human activities: creating new medicines, providing new tools to help tackle climate change, enhancing creative expression, democratizing education and making it more effective, and changing our relationship to work by performing basic or repetitive tasks while offering new tools for more advanced ones[9].

We stand on the cusp of a new era, one with the potential to improve the lives of billions of people; at the same time, we need to acknowledge that AI will bring challenges that must be addressed[10].

However, recent calls for a halt to technological advances are unlikely to be successful or effective, and risk forgoing AI’s substantial benefits.

A broad, globally based approach, involving governments, companies, universities and others, is key to setting a shared agenda for responsible AI, built on joint industry standards, sound governmental and multilateral policies, and international collaboration.

From this perspective, the recent EU-US TTC Joint Roadmap for Trustworthy AI and Risk Management can serve as a useful example of multilateral collaboration in this field, pushing towards a new set of global principles and standards, as has happened with many breakthroughs in recent decades.

The Roadmap aims to advance shared terminologies and taxonomies, align the two parties’ approaches to AI risk management and trustworthy AI, establish a shared hub of metrics and methodologies for measuring AI trustworthiness, and develop knowledge-sharing mechanisms to monitor and measure existing and emerging AI risks.

2023 will be a critical year for AI policy. In the next few months, the EU co-legislators will finalize the long-awaited AI Act, whose main provisions will enter into force over the following three years, while other analyses are in progress (e.g. by the European Data Protection Board on privacy issues) and national legislation is under consideration.

The United Kingdom has announced that it will host the first global summit on AI this autumn, bringing together the world’s top leaders, experts in the field and industry representatives[11].

Indeed, like-minded countries and companies need to work together to develop an international framework that ensures the safe and reliable development and use of AI.

Currently, a key risk to avoid is that excessive regulatory fragmentation slows the adoption of AI by citizens and businesses, undermining technological innovation and the socio-economic opportunities that AI offers.



Footnotes:

[1] See: Aspen Institute Italia, “In Favor of Pure Science”, in the framework of the “Aspen Global Initiative in Favor of Pure Science” and of the global report “In Favor of Pure Science”.

[2] See among others: A. Stazi, Più investimenti per la scienza pura, Formiche, 9 April 2023.

[3] See: H. Devlin, Synthetic human embryos created in groundbreaking advance, The Guardian, 14 June 2023.

[4] OECD Data, Gross domestic spending on R&D, Main Science and Technology Indicators, 2022.

[5] Council of the European Union, Council conclusions on high-quality, transparent, open, trustworthy and equitable scholarly publishing, 23 May 2023.

[6] Trade and Technology Council, TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, 1 December 2022.

[7] See among others: European Commission, Competition Policy: Bilateral relations with the People’s Republic of China; F. Ying, Cooperative Competition Is Possible Between China and the U.S., New York Times, 24 November 2020; G. Ghidini, A. Stazi, Coopetition: The Role of IPRs, in D. Beldiman (ed.), Innovation, Competition and Collaboration, Edward Elgar, June 2015.

[8] A. Kharpal, Alibaba begins rollout of its ChatGPT-style tech as China A.I. race heats up, CNBC, 1 June 2023.

[9] See, among others: McKinsey & Company, The economic potential of generative AI: The next productivity frontier, 14 June 2023; M. Andreessen, Why AI Will Save the World, Andreessen Horowitz, 6 June 2023.

[10] See e.g.: K. Walker, A policy agenda for responsible AI progress: Opportunity, Responsibility, Security, 19 May 2023.

[11] UK Government, UK to host first global summit on Artificial Intelligence, Press release, 7 June 2023.
