Google Integrates Core Search Signals Into AI Experiences

Jan 12, 2026 - 7:51 am
Filed Under Google


Google's Robby Stein reiterated that Google's AI experiences, such as AI Mode and AI Overviews, integrate Google's core search signals. This helps Google provide more useful responses to users in those AI experiences. But it doesn't always get it right, as many of you know.

Robby Stein said this again on CNN's Terms of Service podcast with Clare Duffy, which we covered last week regarding Google AI personalization.

Robby said, "And I think one thing we try to do is we've integrated all of the core search kind of signals into these AI experiences."

When the AI answer confuses or mixes up information, Google calls that a "loss," Stein said.

I mean, this is not new; Google said this when it announced SGE, the Search Generative Experience, now known as AI Overviews, back in May 2023 - where I wrote:

Yes, it uses Google Search's core ranking systems for this purpose. Google said this helps them "significantly mitigate" some of the known limitations of LLMs, like hallucination or inaccuracies.

Stein went on to say, "And so when it's kind of recommending or giving you Overviews, it's using information and linking to information that others have found very helpful for that question in the past. And it's possible that there's all kinds of reasons why it can make small mistakes. It's one of the reasons why also I think we want to be making sure people can go and click in and see information themselves as well."

Here is the full transcript of this section, which starts at the 12:43 mark in the video:

Robby Stein: And I think one thing we try to do is we've integrated all of the core search kind of signals into these AI experiences. And so when it's kind of recommending or giving you Overviews, it's using information and linking to information that others have found very helpful for that question in the past. And it's possible that there's all kinds of reasons why it can make small mistakes. It's one of the reasons why also I think we want to be making sure people can go and click in and see information themselves as well.

Clare Duffy: Talk a little bit more about what that training looks like on the back end. Is it just giving it more data to work with? Is it the sort of like reinforcement training where you're saying, "No, this was a bad answer. You should have done this instead." Like how does that…

Robby Stein: Kind of all of those things. I think we would consider what you're saying a loss. And so we would say, like, there's a loss pattern. And so we'd take a look at the link, we'd say, "Why was it linking to that?" And then we would understand. And usually what happens is there's a few reasons why simple things like that could happen. One could be maybe there was a link that referenced the other brand on the page, right? And so maybe that's one thing. You know, there could be an instance where there's a review that someone mentions that's highly used, and then the actual site itself has conflicting information. Like a user uses the product and says one thing, and maybe the product itself, like the company that makes the products, says another thing. So what do you do? Like, information is very complex, but Google's been working on this for 25 years. And so I think that, you know, we do some of the best work in this space and have studied, you know, what information is trustworthy, what's helpful, what links are useful for people, what's not, what's spam information and problematic versus not. And so I think in the vast, vast majority, people are getting exceptionally helpful and high quality responses overall. But of course they can make mistakes.

Clare Duffy: How are you catching if and when those mistakes show up? How does Google know? How do you work on fixing them?

Robby Stein: Yeah, I mean, we have deep evaluation metrics where we will run, you know, numerous types of questions through the system and check them all. It's this relentless focus on improving them and fixing them and making it better every single day. And then when you fix one of those things, you fix a whole part of the system. And then what happens is, a month later, a few months later, it just gets better and better and better. And obviously the models are also getting more sophisticated, and now very advanced reasoning models are introduced. So this is a very different generation of model than when we first launched the very first AI Overview. And this is a model that's increasingly capable of reasoning, thinking through things, verifying its work, checking. And you're seeing much more of that in our AI systems now.

Clare Duffy: And are you looking for user feedback too? Like if I ask a question and something's wrong, like is there a button?

Robby Stein: Yes, there's a thumbs up and thumbs down. You can report issues. We look at them religiously. Do a thumbs down if you see an issue, report it, and my team will look at it. Every single one of those is taken seriously, and we have systems that look at user feedback at scale.

Forum discussion at X.

 
