The Global Empire of Artificial Intelligence

by Marcello Veneziani

Whoever controls artificial intelligence will rule the world. This belief is becoming more entrenched by the day, shared by everyone—some with fatalism, others with anxiety, and some with euphoria. But once this premise is stated, we get lost in the paths that follow.

At every turn, new threats and a swarm of doubts arise. The first question is: will a single Power ultimately control the world through AI, or will there be multiple competing, converging, or antagonistic entities locked in a final battle of Titans? Will it be a public or private entity—a state, a tech corporation, or a single magnate? Will power lie with politicians or technocrats, or in an alliance between the two—and if so, who will hold true dominance?

Under a vast global AI empire, will local autonomy, free zones, and spaces of liberty still exist? Will the struggle for AI control remain bloodless, a purely commercial and technological competition, or will it bring dire consequences for entire populations? Will the main conflict be between the U.S. and China, or will other key tech players such as India, South Korea, and Russia enter the fray? And will it be a clash between two imperial states or between two supra-state tech giants—like the brewing battle between DeepSeek and OpenAI (along with Meta and others)?

And finally, the ultimate question: are we sure that AI will even be controlled by anyone? Or will it end up ruling over all, expanding on its own, unchecked, shaking off all human power?

A long but necessary series of questions—because asking the right ones is the first step in understanding what is unfolding.

If we are aware, we all experience a threefold unease. First, we sense that the matter itself, with its creators, its boundaries, and its actors, is exceeding its limits, spilling beyond our control and erasing the world as we know it. Second, the outcomes of this expansion are unpredictable and uncontrollable, accelerating at a pace that outstrips our ability to comprehend and process their effects. Third, there is not only an intellectual and psychological incapacity to set limits but also an ethical, cultural, and even metaphysical failure to judge when it brings benefit, when it causes harm, to what extent it is useful, and at what point it becomes dangerous.

We risk surrendering in advance to this unfolding process—because it is too fast, too steep, too viral. We are living through the pandemic of artificial intelligence, a state of turmoil and panic, though often disguised or momentarily forgotten.

Its repercussions are enormous in every field—military, financial, political, social, and human. But there is also a profound psychological impact, what is often referred to as psyop—a psychological operation that deeply influences minds.

We have come to believe that a totalitarian state such as China is better equipped than a democratic one to control AI’s growth, thanks to centralized command, rapid decision-making, and tighter control over processes. Yet even in China, technological warfare sometimes seems to slip from the government’s grasp. Consider the tensions between DeepSeek and the Chinese giant Alibaba, which reveal the contradictions within China’s hybrid system of capitalist autocracy: between the autonomous power of capital and the authoritarian strength of the regime.

Trump’s America appears more alert and proactive on AI than it was during Biden’s tenure, when the U.S. seemed sluggish on the issue (much like Biden’s own intelligence). But even here, it is unclear to what extent the state will control the web oligarchs, or whether they will instead dictate the state’s direction. There may even be internal conflicts within the tech industry itself, as seen in the growing tensions between Musk and his titanic competitors.

Leaving AI entirely to free-market forces would be dangerous. Greater oversight is needed—artificial intelligence cannot be allowed to run unchecked.

Pathetic is Europe’s slumber: a sleep of children, the elderly, and the feeble-minded, all equally powerless to govern AI from within, let alone confront external threats. There is the added risk that, in its anti-Trump stance, Europe could become China’s Trojan horse in the West. Some are already working toward that.

Will it be possible to defend the plurality of worlds, the diversity of peoples, in the face of the overwhelming advance of technology, which tends to unify the planet and erase all differences? It will only be possible if a superior power governs AI—one that is itself intelligent, capable of setting rules, boundaries, development paths, and prohibitions, guiding the process without being consumed by it. In short, generative artificial intelligence—the most dangerous threat to humanity, despite its many benefits—must be steered with intent.

To manage AI’s impact, technological and economic power must be subordinated to political decision-making, the public good, social welfare, culture, and human nature. It is also worth examining how much of the competition between Chinese and American technology stems from plagiarism, espionage, and what is euphemistically called “distillation,” and how much remains a product of distinct creativity and originality—shaped by the differing mindsets of the West and the East. A Chinese or Indian mind is not the same as a European or American one.

The techno-anxiety that grips people—at least those most aware—is not just about AI-driven dictatorship. It is also about its ability to alter processes, minds, and structures in ways that are profoundly unsettling, bringing a stark defeat of the human and the real, and the triumph of automation and the virtual. To confront its advance, we need more political sovereignty, greater cultural awareness, deeper human intelligence, more humanitas, and a stronger vision of destiny.

The ultimate question looming in the background is the very essence of artificial intelligence—not just whether it will replace humanity or expand its capabilities, but a deeper question: will AI replace humans, or will it replace the divine? Will it become an omnipotent god, taking the place of the known and unknown God that has shaped human history?

Let’s risk a prediction: if AI is not governed with knowledge and power, it will first render humanity obsolete and then replace it. And after replacing humans and their real world, it will replace the divine—dissolving mystery, meaning, and the very destiny of existence. The end of humanity will coincide with the end of the divine. Technology will become theology.

At that point, we dare to imagine, something unforeseen will emerge from the unfathomable depths of mystery, restoring the order of a Supernatural Intelligence. This is a loss of faith in penultimate things, yet a trust in ultimate things. In the meantime, however, we cannot simply stand idly by.
