Google Gemini Takes Center Stage: Extensions, Multimodal Conversations, and Deep Research

Google is pushing its Gemini AI forward with new features that blur the lines between apps and assistants. Extensions allow Gemini to interact with other apps directly, while multimodal conversations bring images, videos, and files into the mix. Deep Research offers a focused search experience that draws only from the sources you specify.

At its first Unpacked event of the year, Samsung talked extensively about the Gemini-driven AI capabilities on its phones. For a moment, I was excited, until I came across a Google press release. Of the two major features discussed on stage, one is already coming to Pixels, and the other will be available on iPhones, Android phones, and the desktop web. Google has just announced a host of changes for its Gemini assistant that truly push it into the agentic AI era.

Let’s start with extensions, which use an “@” mention system (à la Workspace) in chats to get a task done in the target app. So, if I tell Gemini “Say hi to Christina @WhatsApp,” it will send a text in the messaging app. For Google apps, you can even skip the mention, as the AI will contextually default to the appropriate service, such as Messages or Maps. Now, Gemini can use multiple extensions in a single prompt, so a command like “find me the nearest kebab house and send it to Drew” will get the job done across Maps and Messages (see the illustrative sketch at the end of this section). This capability is already rolling out across the Pixel 9 series smartphones. If you own a Samsung Galaxy S25 series phone, Gemini will also dip into data stored in Samsung’s own services, such as Calendar and Notes. Multi-extension support will eventually make its way to iOS, Android, and the web version of Gemini as well.

Gemini Live, the conversational side of Gemini, is also getting a major overhaul. So far, it has only been able to engage in audio conversations with users. Now, users can also add images, YouTube videos, and local files to their conversations with Gemini Live; previously, this capability was limited to text interactions with Gemini or to the experimental NotebookLM tool. The feature arrives on the Galaxy S24, Galaxy S25, and Google Pixel 9 series phones starting today. In the coming weeks, screen sharing and live streaming capabilities will also make their way to Gemini, finally bringing Google’s ambitious Project Astra system to smartphones.

Another fantastic update is the arrival of Deep Research in the Gemini mobile app. Deep Research is the most practically rewarding Gemini product by a wide margin, as it reimagines the entire internet search experience and presents information only from specified sources. There is no risk of hallucinations, or the added stress of fact-checking, as long as you trust the sources of your information. But do keep in mind that you need a Gemini Advanced subscription to access it.
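To make the multi-extension idea concrete, here is a minimal, purely illustrative Python sketch of how a prompt containing “@” mentions might be parsed and routed to more than one app handler, with a contextual fallback when no app is named. The handler names, the regex, and the fallback logic are assumptions made up for this example; they do not reflect Google’s actual Gemini extension API.

```python
# Illustrative sketch only: hypothetical routing of "@" extension mentions.
# None of these handlers or names reflect Google's real Gemini extension API.
import re
from typing import Callable, Dict, List

# Hypothetical app handlers keyed by their "@" mention name.
HANDLERS: Dict[str, Callable[[str], str]] = {
    "maps": lambda task: f"[Maps] looked up: {task}",
    "messages": lambda task: f"[Messages] sent: {task}",
    "whatsapp": lambda task: f"[WhatsApp] sent: {task}",
}

def route_prompt(prompt: str) -> List[str]:
    """Dispatch one prompt to every extension mentioned with '@'.

    If no extension is mentioned, fall back to an assumed contextual
    default (Messages here) to mimic the behavior described above.
    """
    mentions = [m.lower() for m in re.findall(r"@(\w+)", prompt)]
    task = re.sub(r"\s*@\w+", "", prompt).strip()
    if not mentions:
        mentions = ["messages"]  # assumed contextual default
    return [HANDLERS[m](task) for m in mentions if m in HANDLERS]

print(route_prompt("find me the nearest kebab house @Maps and send it to Drew @Messages"))
```

In this toy version, a single prompt that mentions both @Maps and @Messages touches both handlers, which is the multi-extension behavior the rollout describes.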

We have summarized this news so that you can read it quickly. If you are interested, you can read the full article at the publisher.

Source: DigitalTrends

Tags: AI, Gemini, Google Assistant, Extensions, Multimodal, Deep Research, Project Astra, Smartphone


Similar News: You can also read news stories similar to this one that we have collected from other news sources.

Gemini AI Assistant Coming Soon to Android Auto
Google's AI-powered assistant, Gemini, is on its way to Android Auto, according to recent reports and findings. Code within the Android Auto v13.5 beta APK suggests that Gemini is being tested within the platform, potentially replacing the Google Assistant microphone icon with a Gemini-branded symbol. While functionality is yet to be fully realized, this integration hints at a significant advancement for in-car digital assistants. This move aligns with Google's broader strategy of integrating Gemini across its ecosystem, including Google TV and Wear OS smartwatches.
Read more »

Samsung Galaxy S25 Series Deep Dives into AI with Gemini Integration and Now Bar
The Samsung Galaxy S25 series has launched, embracing AI features more deeply than ever before. Powered by the new Snapdragon 8 Elite chip and featuring up to 200-megapixel cameras, the phones offer compelling upgrades. The standout addition is the integration of Google's Gemini AI, bringing contextual awareness and personalized assistance to a new level. The Now Bar, inspired by Apple's Dynamic Island, provides timely information and suggestions throughout the day, making the Galaxy S25 series a true contender in the AI-powered smartphone race.
Read more »

Hackers Exploit Google Search to Push Malicious Chrome Extensions
Security researchers have uncovered how hackers are manipulating Google's search algorithms to push malicious Chrome extensions to the top of search results, exposing millions of users to potential threats.
Read more »

OpenAI's New Reasoning AI Model o3 Outperforms Google's Gemini 2.0
OpenAI unveils its enhanced reasoning AI model, o3, demonstrating superior performance compared to Google's Gemini 2.0 Flash Thinking. Both models are designed to tackle complex problems requiring logical reasoning.
Read more »

OpenAI's o3 vs. Google's Gemini 2.0 Flash Thinking: The AI Reasoning Race Heats Up
OpenAI and Google are locked in a battle to develop the most advanced AI reasoning models. OpenAI's new o3 model surpasses its predecessor in complex problem-solving, while Google's Gemini 2.0 Flash Thinking demonstrates impressive agentic abilities.
Read more »

OpenAI’s o3 Model Outperforms Google’s Gemini 2.0 in Reasoning Tests
OpenAI unveils its latest AI model, o3, demonstrating superior reasoning capabilities compared to Google’s Gemini 2.0 Flash Thinking. The new model excels in complex coding, math, and science tasks, achieving significantly higher scores on benchmark tests.
Read more »


