Below are the most recent 30 comments. I try to keep it clean of comment spam, but sometimes things get through and it can take me several hours to get to it. So please excuse any comment spam.
My Google discover traffic is literally dead. Since the start of the year and now into Feb with this 'Discover Update'. Worst start to a year since 2020. Anyone else having this nightmare? I own a UK local news site.
Interesting, so basically my words that have been scraped are not copyrightable and can be used by these scrapers at will, presenting my words without paying me (so that's OK?). Perplexity uses my words in the first position, followed by my competitors' words, and does not pay us... So Google is saying in this lawsuit that I don't have copyright over my words, but they have copyright over results that contain my words... My words are also personal information and experiences, not something a fact book can make up.
Anyone else see turmoil across every channel when there is movement in organic search? For us, the paid channels and everything else just seem to come to a standstill, and it nearly always coincides with organic search movement. Over the last few days we’ve been bouncing around from position 3 to page 2, but our Google paid search campaigns have also come to a stop and are not converting.
A stupid algorithm broke the indexing on my news site. It takes 3 to 8 hours for a news item to appear in search results. If you don't submit your news for indexing to Search Console, there's no point in having a news site. And idiotic Google doesn't try to fix this problem on many websites.
It all boils down to whether SerpApi is bypassing Google's anti-scraping technology, rather than the scraping itself.
<a href="https://www.theregister.com/2026/02/21/serpapi_google_scraping_lawsuit/">https://www.theregister.com/2026/02/21/serpapi_google_scraping_lawsuit/</a>
Does anyone else's Search Console indexing history only go back to 15.12.2025?
Got a lot of 403s when searching.
E-com Automotive -60% sales in FR last 2 days.
Total madness
Aaaand the Custom Search API is DEAD, at least for me: it's returning 403 Forbidden. Thanks, Google!
"code": 403,
"message": "This project does not have the access to Custom Search JSON API."
Oh dear. We know the help Mr. Mueller would provide in this case. https://uploads.disquscdn.com/images/f77dcd885574de1076e3abdce5d5c60e6468a82c3d06c767b8d0596286a3a907.jpg
Saw a story about Google suing SerpApi for scraping their results and SerpApi's defense is it does the same as Google (scrape and gut the web) but on a much smaller scale. This short news story might be paywalled for some, but might be viewable using one of those unblocker sites.
<b>Web scraper sued by Google claims Google is the one scraping the web</b> - <a href="https://www.theverge.com/tech/882300/serpapi-google-lawsuit-web-scraper-motion-to-dismiss">https://www.theverge.com/tech/882300/serpapi-google-lawsuit-web-scraper-motion-to-dismiss</a>
<blockquote>SerpApi, a company that offers tools to scrape content on the web, is fighting back against Google’s copyright lawsuit that accuses it of vacuuming up search results “at an astonishing scale.” In a motion to dismiss filed on Friday, SerpApi argues that Google doesn’t hold a copyright on its search results, alleging that the engine is built “on the backs of others who posted ‘the world’s information.’”</blockquote>
Yeah, OK, Mueller. If I don't manually submit a recently changed or new URL for indexing BEFORE I mess with the GBP profile, the profile gets suspended because of the changes. Submit to index before touching the GBP (typically the time to index is less than 10 minutes), then edit the GBP, and I hardly EVER have a problem making the necessary changes to an existing location. (Unfortunately, with our websites' foldered CMS, URLs change a lot once you get a few folders deep.)
If it is an issue, then remove the feature. If the feature still stands, then it is not an issue. It is probably just slowing something down on their end. I no longer have the faith I once had in Google, so trusting anything JM says comes with heavy internal debate.
<blockquote>In 2020, Google said sites that need to request manual indexing may have quality issues.</blockquote>It's 2026 now and AI bot scraping is extreme. Maybe these sites trying to force indexing are just trying to get Google to see their pages first, on their sites, before dozens of scraped versions pop up elsewhere? Whether a site is big or small, content theft is a major problem publishers are dealing with which can contribute to indexing issues if Google indexes the scraped versions first.
If I remember correctly, you posted your site before and it looked like pure AI slop... page after page with thousands of words. iGaming, right? Stopped working? Or am I confusing you with someone else?
Keep in mind they’ve lost 90% of their traffic.
I’m not sure how anyone else would manage that, but a 90% traffic loss would very quickly put me in the negative every month; 10% of my income won’t cover the fixed costs I can’t get out of.