In the emerging age of algorithmic diplomacy, datasets are becoming the real instruments of power.
In the 20th century, geopolitical leverage flowed through oil pipelines, shipping routes, and chokepoints guarded by naval power. Today, a quieter but no less consequential infrastructure is emerging: one built not from steel and concrete, but from data.
As AI systems increasingly determine what information is generated, ranked, translated, filtered, and amplified, the politics of training data are becoming inseparable from global power dynamics. At the center of this transformation is a concept that is still taking shape but is already reshaping the contours of international relations: algorithmic diplomacy.

“When AI systems decide what the world sees, reads, and believes – who decides what feeds the algorithm?” asks Nikos Panagiotou, researcher and professor at the Aristotle University of Thessaloniki in Greece, framing what may be one of the defining questions of the decade. Panagiotou’s argument is straightforward: control over AI is not just about building better models, but about controlling what those models learn from.

For much of the past decade, the AI race has been framed as a competition over compute power and model sophistication. But Panagiotou argues that this framing misses a deeper layer of strategic importance. “What particularly captured my attention is that this technology is becoming part of a global competition – not only over who will develop the most advanced algorithms, but who will feed these AI models and based on what they will be trained,” he says.

That distinction is critical: algorithms can be replicated, optimized, and iterated upon, but data – especially high-quality, large-scale, culturally embedded data – is far harder to reproduce. “If they are trained through a particular set of data, what are the implications in global affairs and international affairs? And most importantly, how can countries influence this course of events?” he continues. The answer increasingly points to a new layer of geopolitical strategy: countries are beginning to treat datasets as strategic assets, not unlike rare earth elements or semiconductor supply chains.
Hence, the global AI landscape is already fragmenting into what Panagiotou describes as competing data blocs: ecosystems shaped by political systems, regulatory frameworks, and cultural priorities. “Yes, exactly,” he says when asked whether geopolitical alliances are mirrored in AI training ecosystems. “But they will be competing.”

At the highest level, three pillars define influence in this new domain: computational power, skilled personnel, and investment capital. The US leads in private-sector investment. Yet access is uneven, and many countries outside these major power centers face structural disadvantages. “If you lack the computational power, if you lack the necessary skills, and if you lack the necessary resources, you might end up being someone who only observes what is taking place, without being able to define these elements,” he says.

The EU’s regulatory push, particularly through its AI governance framework, highlights a broader tension: how to balance ethical oversight with competitiveness. Panagiotou sees this as a potential strategic vulnerability. “The EU is facing a crucial dilemma: whether or not it will impose restrictions. Then I think that a competitive advantage lies with countries that don’t have these restrictions,” he explains. Meanwhile, the US and China continue to advance rapidly, often with fewer constraints on data usage and model deployment.

But the issue is not limited to states, as corporate actors are increasingly central to this ecosystem. He points to the deal “between Palantir and the UK, where access to health system data is granted. This means that personal data, to a certain extent, will be owned by a company.” At the same time, legal battles – such as those involving major media organizations and AI developers – underscore the contested nature of training data itself.
“We can understand what is happening with smaller media organizations, whose content is being exploited to train these models,” Panagiotou adds.

For AI ethics and governance expert Kate O’Neill, the concept of algorithmic diplomacy represents a fundamental shift away from traditional frameworks. “Digital diplomacy still assumes human intent and accountability. Information warfare is oriented toward manipulation. But algorithmic diplomacy suggests that the logic and incentives embedded in algorithmic systems can shape cross-border influence,” she says. In other words, influence is no longer just about messaging, but about the systems that mediate reality itself.

However, O’Neill cautions against viewing this as a fully autonomous process. “In practice, it is often human intent laundered through systems, plus human reliance on system outputs. More people are turning to AI systems as a first step for education, translation, research, and framing. What they take from those systems can steer what happens next,” she says. This creates a powerful feedback loop: data shapes outputs, outputs shape behavior, and behavior feeds back into future data.

At the heart of algorithmic diplomacy lies dataset curation – the seemingly technical process of selecting, filtering, and structuring training data. “The moment a dataset determines who and what is represented or erased, what histories are centered, which languages are ‘normal,’ and which sources are authoritative, you are no longer just improving model quality. You are shaping a worldview,” she cautions.

O’Neill frames this influence as operating across four reinforcing layers: the dataset itself, how the model is used, how users interact with outputs, and the iterative feedback loops that compound influence over time. “This is why algorithmic diplomacy is ultimately about feedback loops. Between training data, system tuning, platform incentives, and human behavior,” she points out.
“Algorithmic diplomacy functions upstream, before data becomes code, to determine how that data is subsequently used to influence decisions made by algorithms,” says Tian, CEO of AI detection software GPTZero.

This upstream positioning is critical: it means that influence is embedded not just in outputs, but in the very definition of what constitutes truth within a system. “You are not just trying to modify the narrative. You are defining what is considered the ‘truth’ to the algorithm,” Tian says. This has measurable consequences. “We have empirical evidence that simply changing the data source, or filtering, can produce radically different outputs,” he notes.

“Models can be replicated or fine-tuned,” Tian explains. “But creating high-quality, well-curated, proprietary datasets is significantly more challenging.” The implication is clear: the next phase of AI competition will not be won by better algorithms alone, but by those who control the underlying data.

Beneath the abstract discussions of data and algorithms lies a very physical reality: AI requires infrastructure – massive data centers, energy resources, and supply chains. Panagiotou highlights an often-overlooked consequence of the AI boom. “AI brought another revolution in the energy field. Data centers require a lot of resources, and this has regenerated the discussion about nuclear power,” he says.

This introduces a new layer of societal and political tension, as the infrastructure needed for advanced AI systems may face resistance from communities concerned about environmental impact and energy consumption. Without data centers, he warns, “societies lose critical infrastructure. But at the same time, there may be reactions against their development.” The result, he adds, is a complex trade-off between technological competitiveness and societal acceptance. Despite the growing risks, meaningful international cooperation to mitigate such threats remains elusive.
“We are witnessing a new field of antagonism – no one is ready to cooperate, because it feels that it will immediately provide an advantage to the other country,” Panagiotou says. Drawing a parallel to Cold War nuclear diplomacy, he suggests that a modern equivalent may eventually be necessary. “I argue that we need a new type of agreement, similar to the SALT treaty, to set limits in the development of AI,” he explains. Unlike nuclear arms, though, AI systems are not just tools of destruction – they are tools of knowledge production, capable of shaping how societies understand reality itself.

With all of this in mind, China’s controlled data environment, Europe’s regulatory approach, and the US’s market-driven model are already producing different AI behaviors and outputs. At the same time, cross-border data flows remain porous, creating asymmetries that further complicate the landscape. This asymmetry points to a broader reality: in the absence of shared norms, data becomes both a resource and a vulnerability.

Ultimately, the rise of algorithmic diplomacy signals a deeper transformation in how power is exercised. Datasets, once considered a technical input, are becoming the new contested terrain. And as Tian puts it, the stakes could not be higher: “These datasets will not only define the behavior of models, but will introduce biases, create worldviews, and do so at a global scale.”

Bojan Stojkovski is a freelance journalist based in Skopje, North Macedonia, who has covered foreign policy and technology for more than a decade. His work has appeared in Foreign Policy, ZDNet, and Nature.
