LLMs trained separately develop strikingly similar internal semantic structures.

Different artificial intelligences built the same structural map of meaning—a curious clue that language might shape thought itself.

Their internal maps may reflect the math of compression, not cognition.

Imagine a set of large language models, each trained separately on its own data. Now imagine discovering that, deep inside, they've independently built similar internal maps of meaning. That's the central finding of a recent study. It feels profound, almost metaphysical. But is it? Or are we simply witnessing the mathematical constraints of how language works?

The researchers used a technique called "vec2vec" to translate the internal embeddings—mathematical representations of meaning—from one LLM into another. These weren't multimodal systems processing vision, sound, or interaction. They were monomodal, trained solely on text. And in this narrower context, the semantic relationships encoded in one model could be reliably aligned with another without using parallel training data. This alignment suggests a shared internal structure, or perhaps a kind of "semantic geometry."

Before we leap to conclusions about universal meaning, let's pause. Language has deep statistical regularities. Words with similar meanings tend to appear in similar contexts. Any system designed to compress, predict, or represent language efficiently will, in all likelihood, converge on similar internal representations. LLMs are, fundamentally, compression systems. Just as different algorithms might represent recurring data patterns similarly—not because they uncover cosmic truth, but because that's what compression demands—language models may naturally settle into similar shapes. That's not mysticism; that's math.

And yet, there's something here. These models weren't just outputting similar text. Their internal structures could be aligned with surprising accuracy. Despite differences in training data, tokenization strategies, and optimization goals, vec2vec could translate one model's embeddings into another's latent space and preserve meaningful relationships. The authors refer to this as the Platonic Representation Hypothesis: the idea that all sufficiently powerful language models are discovering the same hidden geometry of meaning. Maybe it's not that meaning is objectively real in some metaphysical sense. Maybe any system that models language deeply will be constrained by its structure.
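To make "aligning two latent spaces" concrete, here is a minimal sketch of the geometric idea. It is not the paper's method: vec2vec learns its translation without paired examples, whereas this toy uses classic orthogonal Procrustes alignment, a much simpler supervised stand-in, on two synthetic embedding spaces that share a hidden structure. Every name, size, and noise level below is invented for the demo.

```python
# Toy illustration of "shared semantic geometry": two embedding spaces that
# differ only by an arbitrary rotation plus noise can be aligned with a
# single orthogonal map. This is Procrustes alignment, a supervised
# simplification of vec2vec (which needs no paired data). Sizes are invented.
import numpy as np

rng = np.random.default_rng(0)

n_words, dim = 50, 8
latent = rng.normal(size=(n_words, dim))                 # shared "meaning" geometry
basis_b = np.linalg.qr(rng.normal(size=(dim, dim)))[0]   # model B's arbitrary basis

emb_a = latent + 0.05 * rng.normal(size=(n_words, dim))             # "model A"
emb_b = latent @ basis_b + 0.05 * rng.normal(size=(n_words, dim))   # "model B"

# Orthogonal Procrustes: the rotation W minimizing ||emb_a @ W - emb_b||_F
# is U @ Vt, where U, Vt come from the SVD of emb_a.T @ emb_b.
u, _, vt = np.linalg.svd(emb_a.T @ emb_b)
w = u @ vt
translated = emb_a @ w            # A's embeddings expressed in B's coordinates

# Evaluation: for each translated vector, is its nearest neighbor in B's
# space the embedding of the same "word"? (Top-1 retrieval accuracy.)
t = translated / np.linalg.norm(translated, axis=1, keepdims=True)
g = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
hits = (t @ g.T).argmax(axis=1) == np.arange(n_words)
print(f"top-1 retrieval accuracy after alignment: {hits.mean():.2f}")
```

If the two spaces really do share a geometry, the printed accuracy comes out near 1.00; shuffling the rows of emb_b collapses it to chance. That gap, translated vectors landing on their true counterparts far more often than luck allows, is roughly what "surprising alignment accuracy" means here.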
But here's the catch: all of these models were monomodal. That is, they only processed text. Language is a narrow and highly structured slice of human experience, governed by rules, patterns, and redundancy. So in some ways, the convergence we see might be the most predictable outcome. What would be more compelling is to test for alignment across modalities. If a model trained on images and another trained on text converge on the same semantic geometry, that would suggest something far deeper: that meaning transcends representation, that it's discoverable not just through symbols but through perception. Multimodal models like GPT-4o, Gemini, and Claude integrate vision, audio, and even physical interaction. If future studies show alignment across these more complex systems, then the argument shifts from "language shapes geometry" to "cognition has a universal structure." That would be a leap from statistical inevitability to structural insight.

These models aren't aware in any human sense. They don't grasp meaning as we do. But they arrive at similar maps, not because they understand, but because language gives them little choice. Still, this convergence tells us something interesting and perhaps even important: meaning, even when it's not felt, can still be structured. And that structure, whether imposed by language or latent in the world, is starting to show its shape.


Source: Psychology Today
