Europe should bet on Original Intelligence

The Italian version of this article is published in Aspenia 1-2026

AI accelerationists’ beliefs are shaping the commercialization of artificial intelligence in the United States. Prominent voices, such as Marc Andreessen, Elon Musk, and Peter Thiel, have adopted the AI accelerationist ideology as their own. Accelerationists share a core belief that AI will deliver economic benefits that will increase overall prosperity in the nations that adopt it. They also believe that AI will inevitably develop to the point where it outperforms humans. They view human labor exclusively through the prism of economic output and compare the result of human labor to an AI’s performance of similar tasks. Accelerationists believe that the inevitability of AI’s development, up through and including sentience, must not be constrained. And they are convinced that the nation with the best-performing AI will dominate the world economy.

Although others in the technology community have expressed concerns about the negative externalities of broad AI adoption, the accelerationists have effectively made what should be a conversation into an “either/or” debate. You use AI or you use human labor. You either share their beliefs or you are in the way.

The accelerationist view is now tightly entwined with US domestic politics and international policies, but it has not yet become dominant in the EU. This essay highlights the key issues raised by the AI accelerationist model and asks whether the EU could develop its own approach that would better reflect its social values while simultaneously being more beneficial economically.

 

AI’S SOCIAL CHALLENGES. AI is already a significant mediator between humans and their shared knowledge. This close relationship raises social challenges, regardless of the political and economic system in which AI is used. These include:

Human Dependency Risk. AI is specifically designed to create a tight, reinforcing relationship with humans by positioning chatbots and user interfaces as trusted sources of reliable information. This relationship is rapidly eroding users’ ability to think independently of their AI tools. A growing academic literature, along with anecdotal news reports, warns that people who rely on AI produce generic, machine-flavored content that reflects little clear human value-add. Other research shows that when people rely on AI, their own creativity and problem-solving skills measurably decline. Dependency also breeds an uncritical willingness to accept manipulative misinformation as truthful and accurate. Finally, both research and anecdotes show a close correlation between AI use and self-harm among psychologically vulnerable people.

Privacy and Surveillance Risk. AI raises broad privacy and surveillance concerns. AI is not just a tool we use. It uses us too. AI applications need access to large amounts of human-created data to function. Questions about what information is ingested, its use, how it is controlled, and the rights retained by information creators all need to be answered. Whereas the EU has traditionally taken a strong role in protecting privacy, AI accelerationists strongly oppose enhanced privacy protections.

Cultural Bias Risk. Any output by artificial intelligence reflects its input by humans. The latter, until recently, have not created information for the explicit purpose of feeding it to AIs; the information was created for other purposes. We do know that much information and many data sets are biased, whether implicitly or explicitly. But because AI is a black box to anyone but its developers, we don’t know what went in. That ignorance makes it reasonable to worry about the direction and magnitude of any bias. This concern about bias is one driver for the EU AI Act.

AI Sameness Risk. An often-overlooked limitation of AI is that its output tends towards homogeneity. Its very architecture leads to this result. A state-of-the-art AI, with its seemingly unlimited knowledge, can leave a user feeling that the AI has provided a new, never-before-seen answer to a query. But the truth is that the AI will have provided many other users with essentially the same output.

Sameness is a very important limitation on AI’s promise of higher economic output. If all businesses use similar AI models to become more efficient, over time they will all be equally efficient – and indistinguishable. When businesses compete only on efficiency, not only are margin improvements temporary, but they are available only to the better-capitalized businesses, which can afford the fastest computers and newest versions of AI. This means that for most businesses, the pathway to sustained margin improvement cannot come through efficiency alone. It must come from differentiation. Humans value novelty not only in what they consume but also in how they feel about themselves. The desire to find novelty is hardwired into the human condition and is a primary driver (along with the desire for safety) of human society and commerce. Novelty is the source of business differentiation.

The possibility of AI sameness undermining the profitability of the vast majority of businesses makes the promise of overall accelerating economic growth harder to accept. Absent an economic model that includes significant business differentiation, the case for rapid adoption of AI is weakened.

 

AI AS AN AMERICAN EXPORT. The Trump administration has embraced the role of champion and promoter of AI adoption consistent with the AI accelerationist ideology. Its approach since 2024 is best understood as a set of mutually reinforcing moves that emphasize speed, national coherence, and global leverage. These moves include:

  • Supporting National Champions. AI commercialization in the United States is led by a small number of large, well-capitalized companies, supported by a cadre of extremely wealthy entrepreneurs and venture investors. The extraordinary cost of developing and deploying successive iterations of AI has created a self-reinforcing cycle, one in which the costs of innovation and the required resources have caused the leading AI companies in the United States to separate from the rest and to receive a disproportionate share of available investment capital. This has created a mutual dependency, with these companies’ continued success becoming more tightly tied to the financial markets and to overall economic growth of the American economy.
  • Leveraging government procurement. The US government is a large market for artificial intelligence and commercial innovation. The Trump administration would rather purchase AI from national champions (and smaller companies in their business ecosystems) than from any other current technology and service providers. This has further expanded the national champions’ market power.
  • Preempting fragmented regulation. The Trump administration prefers that any limitations on AI adoption be dealt with at a national level. Indeed, the administration has advocated for national uniformity, reduced state experimentation, and a regulatory structure designed for speed. This viewpoint extends extraterritorially; for example, administration officials forcefully condemned recent attempts by foreign regulators to mitigate some of the negative externalities of AI adoption.
  • National security and geopolitical competition. A recurring stated rationale for the US approach to AI commercialization is great power competition with China. The US treats AI development as an arms race in which its adversaries will use artificial intelligence to injure American interests unless they are deterred from doing so. This leads to the US viewing the industry through the prism of mercantilism, or state sponsored capitalism, rather than adopting a free market approach. Conflating national security and economic interests, as mercantilism does, results in a foreign policy that requires political allies to purchase US AI technology and adopt the accelerationist model. Nations that take that path become technologically and economically dependent on US firms and subject to US government attitudes and actions.

 

EXPORTING EXTERNALITIES ALONG WITH AI. The American approach to AI development creates significant social and political challenges domestically. Furthermore, nations that adopt the US approach are likely to import these challenges into their own societies. Such challenges include:

  • Trust and “synthetic reality.” AI-derived misinformation has overwhelmed social networks and polluted the training data of AI. It has already created a corrosive equilibrium where skepticism becomes the default position and shared reality becomes harder to sustain.
  • Labor transformation and inequality. Artificial intelligence is most enthusiastically used as a tool to enhance efficiency and to substitute human labor. A policy environment that accelerates AI deployment but does not support robust worker transition will almost certainly increase inequality. This will be further exacerbated by a continued substitution of AI for middle-skill and early-career work, threatening people’s ability to build expertise.
  • Institutional detachment. Bringing AI into government operations and the accompanying reduction in human labor has caused many institutions that interact with citizens to become more efficient but less humane. Citizens experience decisions as opaque, automated, unappealable and, ultimately, illegitimate.
  • Ancillary “deregulation.” Rules and norms that would not necessarily be seen as directly relating to AI adoption are potential obstacles to the accelerationists’ broader goals of loosening constraints and imposing cultural conformity. Affected sectors include higher education, research and development, immigration, government worker protection, and social insurance. There is a significant possibility of a legitimacy gap if government is seen as promoting the economic interests of a small and identifiable group of accelerationist firms and investors.
  • Concentration dynamics. The AI industry will naturally concentrate because cutting-edge systems require capital-intensive computing power, scarce talent, and unprecedented data access. When policy emphasizes speed and a minimal regulatory burden, market power can concentrate further, turning a small number of firms into gatekeepers for entire sectors. This is not only an antitrust concern; it is a democratic concern when the same platforms mediate information and commerce.

 

EUROPE’S OPPORTUNITY. On the benefit side, AI does offer some economic advantages. It makes knowledge available at a scale and breadth that is unprecedented in human history. The ability to use this wealth of information conveniently and quickly will provide both efficiency and an opportunity for new insights and social progress. The downside, however, is that absent differentiation, AI will largely be only an efficiency tool. It will continue to be a substitute for human labor, rather than a complement to human output. This may be the most damaging tenet of AI accelerationism: that the substitution of human labor is both inevitable and desirable. A substantial portion of the negative externalities of the current US approach to AI stems from this core belief. The EU does not have to follow the US example.

To AI accelerationists, humans are relevant only as labor to be eliminated: as artificial intelligence replaces human workers, efficiency and economic growth will follow. But humans are the only engines of originality and differentiation that can create competitive advantage. A focus on human originality changes the relationship between humans and AI from one of subtraction to one of addition. Human originality needs to be part of the economic proposition.


Emerging research shows that when humans use AI as an adjunct to their own individual creativity, the resulting output is more economically useful and differentiated than what either could produce alone. This capacity to use human creativity to devise something truly novel is called Original Intelligence. It is the product of humans’ natural drive to seek novelty and to benefit from it. Moreover, while people may differ in how they use their Original Intelligence, they all have it. And, as a human capability, it can be increased through training or use. For example, a student can learn how to use AI as a research tool and then add original insights and analysis. Or an engineer can use AI to create a prototype and then apply personal experience to create a commercial product. A politician may use AI to find examples of alternative legislative proposals addressing a single issue and then evaluate them based on personal morality, experience, and judgment.

Novelty is the human value-add. Only humans can provide novelty in an AI world. Whether it comes in the form of a differentiated consumer good or a connection with others in one’s role as a citizen or social actor, a society that understands the value of novelty – and thus values Original Intelligence – will be wealthier and happier.

Developing a model of AI adoption that promotes the Original Intelligence possessed by all its citizens would be consistent with the EU’s cultural norms and history. This human-forward model would put people at the center of the AI adoption discussion. People would not be seen merely as a labor force to be replaced, but rather as an economic driver to be embraced. Restoring and emphasizing the dignity of labor would reinforce democratic norms, because citizens would feel that they mattered. After all, everyone wants to feel special in some way.

Rather than importing the burdens of AI adoption as promoted by the United States or China, Europe has an opportunity to define its own model for adopting artificial intelligence. An economic model driven by the value added by people, as economic actors and citizens, completely changes the framing of policies and regulations. Instead of inhibiting growth, ground rules that clearly place humans’ Original Intelligence at the center of AI adoption would create a high-performing democratic economy. This approach would have the following core attributes:

  • Trust-by-design as market advantage. Provenance, documentation, and accountability mechanisms (especially for high-risk uses) should be required, so that AI can be adopted without corroding public confidence. This is consistent with Europe’s regulatory tradition and its current risk-based framework.
  • Social justice as technology strategy. AI should not replicate discrimination in hiring, credit ratings, housing, public services and similar activities. This is not only moral; it also prevents social fragmentation that undermines long-term innovation and growth.
  • Public sector as a model user of accountable AI. Governments should deploy AI in ways that strengthen due process: clear explanations, human oversight for consequential decisions, and robust rights to appeal. Done well, the public sector can set a trust standard that private markets will want to follow.
  • Education as the engine of Original Intelligence. Every effort should be made to invest in teaching and in the development of critical thinking, civic reasoning, media literacy, and creative capacity. If AI makes “answers” cheap, education must focus more on the questions, on interpretation, ethics, and judgment.
  • Cultural and economic protection for authentic creation. In a world of synthetic content abundance, Europe can strengthen creator rights and incentives for individuals who create novelty, allowing more humans – rather than a small number of AI intermediaries – to benefit from their labor.
  • Eurocentric AI. The EU should develop or favor AI models that capture Europe’s cultural history, traditions, and artifacts, rather than those of the United States or China.

These attributes are not about slowing down AI adoption in Europe. Rather, they are the framework for an approach to AI transformation that reflects European culture and values. The path towards greater economic growth and social health can be forged through the adoption of human Original Intelligence.
