Smaller AI Models: A Tipping Point for Ubiquitous AI

Source: ForbesTech

The emergence of smaller, more efficient AI models like DeepSeek is poised to revolutionize on-device AI applications. These models, capable of running directly on smartphones, PCs, and other devices, offer superior performance and efficiency compared to larger cloud-based models. Qualcomm, a key player in this space, stands to benefit significantly as these models find widespread adoption across various markets, including automotive, robotics, and VR headsets.

While the DeepSeek moment sent most semiconductor stocks tumbling as investors feared lower demand for data-center AI chips, these new, smaller AI models are just the ticket for on-device AI. “DeepSeek R1 and other similar models recently demonstrated that AI models are developing faster, becoming smaller, more capable and efficient, and now able to run directly on device,” said Qualcomm CEO Cristiano Amon on the company’s recent earnings call.

And within less than a week, DeepSeek R1-distilled models were running on Qualcomm Snapdragon-powered PCs and smartphones. While both Apple and Qualcomm will benefit from these new models, Qualcomm can quickly apply them beyond smartphones; the company holds strong positions in other markets such as automotive, robotics, and VR headsets, as well as its emerging PC business. All of these markets will benefit from the new smaller models and the applications built on them. Apple is famous for its beautiful, fully integrated designs, but Qualcomm partners with others to design and build the final product, speeding time to market and enabling broader adoption. For example, Qualcomm Snapdragon chips power both the Meta Quest and Ray-Ban headsets, which enjoy over 70% market share.

Qualcomm and Apple have both been working hard to reduce model size through lower-precision math and model-optimization techniques such as pruning and sparsity. Now, with distillation, we are seeing a step-function improvement in the quality, performance, and efficiency of AI models that can run on device. And these smaller models do not demand that users compromise: thanks to techniques like model distillation and novel AI network architectures, which simplify the development process without sacrificing quality, these state-of-the-art smaller models can outperform larger ones that really only operate in the cloud.

In addition, the size of models continues to decrease rapidly. State-of-the-art quantization and pruning techniques allow developers to reduce the size of models with no material drop in accuracy. Moreover, Qualcomm believes that AI is becoming the new user interface, thanks to the emerging trend of AI agents.
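To make these ideas concrete, here is a minimal sketch, in plain NumPy, of the two techniques the article leans on: the soft-target loss at the heart of knowledge distillation, and naive symmetric 8-bit weight quantization. The function names, temperature value, and tiny tensors are illustrative assumptions for this sketch, not Qualcomm's or DeepSeek's actual pipeline.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()                       # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    output distributions -- the core training signal in distillation."""
    p = softmax(np.asarray(teacher_logits, dtype=float), temperature)
    q = softmax(np.asarray(student_logits, dtype=float), temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

def quantize_int8(weights):
    """Naive symmetric per-tensor 8-bit quantization: one float32 scale
    plus int8 weights, roughly a 4x storage reduction versus float32."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

# A student whose logits track the teacher's incurs a small loss.
teacher = np.array([4.0, 1.0, 0.5])
student = np.array([3.5, 1.2, 0.8])
loss = distillation_loss(student, teacher)

# Quantize a tiny weight tensor and check the round-trip error.
w = np.array([0.5, -1.0, 0.25], dtype=np.float32)
q, scale = quantize_int8(w)
w_restored = dequantize_int8(q, scale)
```

In a real training loop the KL term is typically blended with an ordinary cross-entropy loss, and production quantizers use per-channel scales and calibration data; this sketch only illustrates why a distilled, quantized model can be far smaller than its teacher while closely preserving the teacher's output distribution.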
Personalized multimodal agents will simplify interactions and proficiently complete tasks across various applications. The table below shows that the distilled versions of both the DeepSeek Qwen and Meta Llama models perform as well as or better than the larger and more expensive state-of-the-art models from OpenAI and Mistral. The GPQA Diamond benchmark is particularly interesting, as it requires deep, multi-step reasoning to solve complex queries, which many models find challenging.

The market skepticism around on-device AI is fading fast. Here is an example use case that Qualcomm has provided. Imagine you are driving along and one of your passengers mentions coffee. An LLM agent hears this and suggests a place along the route where you can stop and grab a cup. Because the in-vehicle LLM and ADAS systems run locally, a cloud-based AI could not perform this task. This is but one example of how agents will transform AI and why they are especially useful on device.

In fact, we would say that these new models are a tipping point for ubiquitous AI. Smaller, more efficient, and accurate AI models are key to making AI pervasive and affordable. Consequently, the techniques demonstrated by DeepSeek are already being applied by mainstream AI companies to stay competitive while avoiding the censorship and security pitfalls that DeepSeek presents. And Qualcomm is perhaps the biggest winner in this evolution toward affordable AI that fits and runs well on devices that already number in the billions.

Disclosures: This article expresses the opinions of the author and is not to be taken as advice to purchase from or invest in the companies mentioned. My firm, Cambrian-AI Research, is fortunate to have many semiconductor firms as our clients, including BrainChip, Cadence, Cerebras Systems, D-Matrix, Esperanto, Groq, IBM, Intel, Micron, NVIDIA, Qualcomm, Graphcore, SiMa.ai, Synopsys, Tenstorrent, Ventana Microsystems, and scores of investors.
I have no investment positions in any of the companies mentioned in this article. For more information, please visit our website.



Tags: AI, On-Device AI, Qualcomm, DeepSeek, Model Distillation, AI Agents

 


Similar News: You can also read news stories similar to this one that we have collected from other news sources.

American Restaurant Tipping Rates Decline Amid 'Tipping Fatigue'
Recent data reveals a downward trend in tipping amounts at American restaurants. The average tip percentage has dropped from 19% in 2022 to 18.8% in the third quarter of 2024. Experts attribute this decline to 'tipping fatigue,' with consumers expressing discomfort over the pressure to tip in automated systems and the increasing expectation of tips in various service settings.
Read more »

Tipping Fatigue: American Restaurant-Goers Leave Smaller Tips
A recent study by Toast reveals a decline in average tipping rates in American restaurants. Both full-service and quick-service restaurants saw decreases compared to previous years, potentially due to 'tipping fatigue' where consumers feel pressured and overwhelmed by the constant expectation to tip.
Read more »

Tipping Fatigue: American Restaurant-Goers Leaving Smaller Tips
A recent study by Toast revealed that the average tipping rate for restaurants in the third quarter of 2024 has dropped to 18.8%, down from 19% in both 2022 and 2021. This trend is attributed to a phenomenon known as 'tipping fatigue', where consumers are increasingly resistant to tipping pressures from automated systems and high menu prices.
Read more »

American Tipping Averages Decline: Is 'Tipping Fatigue' Setting In?
Recent data reveals a noticeable decrease in tipping averages across American restaurants. Experts point to 'tipping fatigue' as a primary driver, alongside rising inflation and evolving consumer perceptions of tipping culture.
Read more »

American Tipping Habits Decline Amid 'Tipping Fatigue'
Recent data indicates a significant decrease in tipping amounts left by American restaurant-goers, attributed to 'tipping fatigue' and rising inflation.
Read more »

Here’s How Big LLMs Teach Smaller AI Models Via Leveraging Knowledge Distillation
AI-driven knowledge distillation is gaining attention. LLMs are teaching SLMs. Expect this trend to increase. Here's the insider scoop.
Read more »


