The next waves of AI will have more dimension. Physical AI, spatial computing and LLMs are converging to generate the next wave of virtual reality.
VR circa 2016. Facebook founder and CEO Mark Zuckerberg, Axel Springer CEO Mathias Doepfner, publisher Friede Springer and other guests try virtual reality devices at the Axel Springer Award in Berlin, 2016.
The next really big thing is almost upon us, and most people haven't noticed. After our three-year LLM obsession, the wider horizons of AI and robotics are starting to percolate beyond the lab. They'll eventually transform virtual reality. Over the next three to five years, VR will reemerge, transformed by the convergence of generative AI, Physical AI and spatial computing. VR, back and vindicated.

In late 2022 I spoke at Estonia's Black Nights Film Festival, to an audience of edgy, leading-edge masters of film production. I predicted that within a decade (we didn't know when) technology would enable us to generate video on demand, in real time, for an audience of one. Eventually, entire lived experiences would emerge in real time in VR. These artists and technicians found the vision inspiring, and terrifying, but it wasn't yet possible.

Image from the main stage of the Black Nights Film Festival, hosted in Tallinn, Estonia each November since 1996.

Until it was. On November 30 of that year, ChatGPT launched. I had no idea LLMs had become so good, so fast, though I had been watching weak signals for years, including signals from a nonprofit lab called OpenAI.

AI's pioneers are now moving beyond LLMs. As Yann LeCun, who will step down as Meta's Chief AI Scientist at the end of 2025, asserts, "LLMs are not a path to human-level AI." He and others are doubling down on "Physical AI," the fusion of perception, reasoning and control in 3D space that lets machines act autonomously in the physical world, and on spatial computing, which uses mapping, tracking and 3D representations to give computers and humans shared views of physical environments. Physical AI enables action; spatial computing provides the spatial frames of reference in which action occurs. Both are combining with the next wave of VR, what some call VR+, including AR and mixed reality.
Not the metaverse of empty virtual Walmarts or $3,000 headsets, but something far more compelling.

This past week, a cryptic deal between unexpected partners, healthcare and generative AI, signaled the shift. Few noticed. Even fewer understood the implications. On November 17, Butterfly Network, the handheld-ultrasound pioneer, announced a five-year co-development and licensing agreement with Midjourney, one of the most influential AI image-generation labs. Midjourney will pay Butterfly a $15 million upfront fee and $10 million annually for access to its ultrasound-on-chip platform and software, plus milestone payments and revenue sharing tied to future hardware.

Butterfly originally designed its chip to collapse a cart-sized imaging machine into a handheld probe. The system is a spatial sensor, not just a medical component. Midjourney is licensing the platform for next-generation sensing and spatial understanding. The deal marks Butterfly's shift to a sensing platform for AI at the edge. Butterfly's ultrasound-on-chip adds depth, motion and sensing that cameras alone can't achieve, expanding spatial computing's perception and eventually enabling richer VR environments.

Midjourney has signaled larger ambitions, referenced in internal 2024 "Office Hours" commentary, to build "holodeck-like" worlds. Within that arc, the Butterfly–Midjourney partnership represents an early step toward VR systems that can perceive the world, generate environments and respond in real time.

David Holz speaks onstage during the 2013 SXSW Music, Film + Interactive Festival on March 9, 2013 in Austin, Texas. Holz would go on to found Midjourney in 2021.

Midjourney hasn't disclosed exactly what it's developing. My speculation: whatever launches will integrate acoustic, visual and other sensors to blend people, AI agents and objects seamlessly into VR environments. Perception, intelligence and experiences at the VR edge.

Meta has reported nearly $70 billion in accumulated losses at Reality Labs since Mark Zuckerberg founded the lab in 2021.
Apple's Vision Pro arrived with fanfare and a $3,499 price tag but lacked content and mass adoption. Google launched and later abandoned its Daydream VR platform.

Fortunately, Meta, Apple and Google can afford to lose billions. Their investments catalyzed R&D and inspired entrepreneurs and researchers well beyond their walls. The past few years delivered some missing ingredients. Midjourney and OpenAI's Sora now create photorealistic scenes increasingly aligned with physical laws. Spatial mapping continues to improve. Edge AI chips deliver real-time inference and, like Butterfly's, low-power physical sensing in small packages. Meanwhile, Figure and Tesla are building general-purpose humanoid robots that learn multi-step tasks from large-scale models rather than scripts. NVIDIA's Omniverse provides a simulation fabric to train on millions of scenarios before touching a home or factory floor. These systems share a pattern: tight loops between sensing, world models and action. That same pattern points toward better VR. No one knows how long this will take, but the consensus among the experts I've interviewed is that it's a matter of when, not if.

Future VR interfaces might include headsets, room-scale installations, smart glasses or as-yet-unimagined form factors. Eventually, interfaces might disappear altogether through brain-computer interfaces now under development by companies like Neuralink, Synchron and Precision Neuroscience. One expert, explaining why he believes we'll see invisible interfaces in our lifetimes, put it this way: "Making BCIs work poses wicked engineering challenges. Humans have ways of solving those." Add our AI partners, and we're off to the VR races. Freed from face-mounted screens and no longer limited to pre-rendered worlds, VR systems will perceive your physical space and immerse you in responsive, generative environments, humans and robots included.

Yann LeCun's next act signals that AI must grow a body, not just better autocomplete.
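The pattern these systems share, a tight loop between sensing, a world model and action, can be made concrete with a toy sketch. This is purely illustrative, not any vendor's API: a 1-D "robot" that reads a noisy-free sensor, folds each reading into a smoothed world model, then moves toward the model's estimate.

```python
class WorldModel:
    """Keeps a smoothed estimate of where a target sits in 1-D space."""

    def __init__(self, alpha=0.5):
        self.estimate = 0.0
        self.alpha = alpha  # how strongly a new reading updates the model

    def update(self, reading):
        # Blend the new sensor reading into the running estimate.
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * reading
        return self.estimate


def act(position, estimate, gain=0.5):
    """Move a fraction of the way toward the estimated target."""
    return position + gain * (estimate - position)


def run_loop(readings):
    """One full sense -> world model -> act loop per sensor reading."""
    model = WorldModel()
    position = 0.0
    for reading in readings:            # sense
        estimate = model.update(reading)  # update the world model
        position = act(position, estimate)  # act on the model, not the raw data
    return position


# A stationary target at 10.0, observed over 20 loop iterations:
final = run_loop([10.0] * 20)
```

The design point, and the reason the pattern generalizes from humanoid robots to VR, is that action is driven by the internal model rather than by raw sensor data, so the same loop works whether the "actuator" is a robot arm or a generated scene.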
Fei-Fei Li's new company, World Labs, describes itself as "a spatial intelligence company, building frontier models that can perceive, generate and interact with the 3D world." Its first product, Marble, turns text, photos and video into persistent, editable 3D environments. Li argues that "world models must be able to generate worlds consistent in perception, geometry and physics," with generative AI maintaining spatial coherence over time. Synthesize LeCun's Physical AI with Li's spatial intelligence and VR, and the future becomes visible. Moreover, and I'll leave this to you, the reader, take a look at the photo below.

LONDON, ENGLAND - NOVEMBER 5: King Charles III poses for a group photo with the recipients of the 2025 Queen Elizabeth Prize for Engineering: Professor Yoshua Bengio, Dr. Yann LeCun, Professor Geoffrey Hinton, Jensen Huang, Dr. Fei-Fei Li, Dr. Bill Dally and Professor John Hopfield, honored for their contributions to the development of modern machine learning, during a reception at St James's Palace on November 5, 2025 in London, England.

Tomorrow's sensing systems, combining visible light, infrared, ultrasound and more, will enable editable models of the world, and they'll generate new worlds which we'll access via VR platforms to come. Business leaders should begin experimenting now with spatial computing and generative world models. The organizations that learn fastest will help shape this new medium. The coming metaverse won't be cartoon avatars in empty virtual Walmarts. It will be collaborative, lived environments that you understand, and that understand you. Inspiring, and terrifying.