Google updates Discover, and Bing measures citations in AI – SEO News – #1 – February 2026
On February 5, 2026, Google began rolling out a broad update to the systems responsible for content recommendations in Google Discover. The change is global in scope, although it initially covers English-language users in the United States, and focuses on improving the quality and relevance of the content presented.
According to internal testing, after the update users rate Discover as a more useful source of content aligned with their interests.
The new version of the recommendation system introduces several important improvements. Among them, Google emphasizes that its algorithms analyze the entire website to assess its level of specialization. This means both niche websites and large multi-topic portals can build Discover visibility, provided they consistently develop expert content in specific areas.
As with other major Google updates, the rollout may cause noticeable changes in Discover traffic. Some sites will see increases, others may experience declines, and many websites will not notice significant differences.
Google plans to gradually expand the update to additional countries and languages in the coming months. During this time, site owners should review the quality of their content, its freshness, and topical consistency, following general core update and Discover guidelines.
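As a rough way to audit topical consistency before the rollout reaches more markets, the sketch below groups a site's sitemap URLs by their first path segment. This is a loose illustration, not Google's method: the sitemap location is a placeholder, and the assumption that path segments mirror topic categories will not hold for every site.

```python
# Rough topical-consistency audit: group sitemap URLs by their first path
# segment as a crude proxy for topic areas. Assumes a flat sitemap.xml
# (not a sitemap index) and URL paths that roughly mirror editorial
# categories; both are assumptions for illustration, not Google's method.
from collections import Counter
from urllib.parse import urlparse
from urllib.request import urlopen
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # hypothetical sitemap location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def topic_distribution(sitemap_url: str) -> Counter:
    """Count URLs per top-level path segment."""
    with urlopen(sitemap_url) as resp:
        tree = ET.parse(resp)
    counts = Counter()
    for loc in tree.findall(".//sm:loc", NS):
        if not loc.text:
            continue
        segment = urlparse(loc.text.strip()).path.strip("/").split("/")[0] or "(root)"
        counts[segment] += 1
    return counts

if __name__ == "__main__":
    for topic, n in topic_distribution(SITEMAP_URL).most_common(10):
        print(f"{topic}: {n} URLs")
```

A heavily skewed distribution suggests clear specialization in a few areas, while a flat spread across many unrelated segments may be a prompt for a closer editorial review.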
Microsoft has launched a beta version of the new AI Performance feature in Bing Webmaster Tools. The feature allows publishers to check how and how often their content is used as a source in AI-generated responses, including Microsoft Copilot, AI summaries in Bing, and selected partner integrations.
The new panel focuses on citation analysis rather than traditional metrics such as clicks or rankings: it shows where and how often a site's content is cited across these AI surfaces.
Microsoft notes that the report does not show ranking positions, a citation's influence on the generated answer, or any connection to traffic or conversions. The data reflects only how frequently content is cited in AI environments.
Although it does not show clicks or real business impact, the report gives publishers insight into whether their content is used as a knowledge source in generative systems.
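Because the report deals in citation counts rather than clicks, publishers may want to aggregate the numbers themselves. The sketch below assumes a hypothetical CSV export with "url", "surface", and "citations" columns; Microsoft has not documented such an export format here, so the field names are placeholders only.

```python
# Sketch: sum citation counts per AI surface (e.g., Copilot, Bing AI
# summaries) from a hypothetical CSV export of the AI Performance report.
# The column names "url", "surface", and "citations" are assumptions for
# illustration; the actual export format may differ.
import csv
from collections import defaultdict

def citations_by_surface(csv_path: str) -> dict:
    """Return total citation counts keyed by AI surface."""
    totals = defaultdict(int)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row["surface"]] += int(row["citations"])
    return dict(totals)

if __name__ == "__main__":
    for surface, total in citations_by_surface("ai_performance_export.csv").items():
        print(f"{surface}: {total} citations")
```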
Google introduced changes to technical documentation regarding Googlebot, clarifying file size limits analyzed during crawling. The update is organizational — general limits were moved from the Googlebot page to documentation covering the entire crawler and fetcher infrastructure.
The reason is simple: many Google products now use the same infrastructure, not only Search. At the same time, documentation for Googlebot itself was expanded with more specific data related to Google Search.
By default, Google indexing and fetching robots only index the first 15 MB of a file, and any content beyond this limit is ignored. Individual projects may set different limits for their crawlers and fetchers and for different file types. For example, a Google robot such as Googlebot may have a smaller size limit (e.g., 2 MB) or define a larger limit for PDF files than for HTML files.
The previous version mentioned a general 15 MB limit and the possibility of varying thresholds depending on project or file type. It did not clearly state that a specific robot — for example the one used in Search — may have a much lower HTML limit (e.g., 2 MB).
The new wording removes this ambiguity and confirms that limits may differ not only between file types (HTML vs. PDF) but also depending on the robot’s specific use.
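For site owners who want to verify that a page sits comfortably below these thresholds, a minimal sketch could fetch the document and compare its uncompressed size against the 15 MB default and the 2 MB example limit quoted above. The URL is a placeholder, and the 2 MB figure is only the example used in the documentation wording, not a confirmed limit for every robot.

```python
# Sketch: report a page's uncompressed HTML size against the 15 MB default
# fetch limit and the 2 MB example limit for a specific robot mentioned in
# the documentation. The URL is a placeholder.
from urllib.request import Request, urlopen

DEFAULT_LIMIT_MB = 15      # general default for Google crawlers and fetchers
EXAMPLE_HTML_LIMIT_MB = 2  # example of a lower, robot-specific HTML limit

def html_size_mb(url: str) -> float:
    """Download the page and return its uncompressed size in megabytes."""
    req = Request(url, headers={"User-Agent": "size-check/1.0"})
    with urlopen(req) as resp:
        return len(resp.read()) / (1024 * 1024)

if __name__ == "__main__":
    size = html_size_mb("https://example.com/")  # hypothetical URL
    print(f"HTML size: {size:.2f} MB "
          f"(example HTML limit: {EXAMPLE_HTML_LIMIT_MB} MB, default: {DEFAULT_LIMIT_MB} MB)")
```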
Google is running further experiments in the AI Overviews section of search results. For about a week, a new format for presenting sources has been visible: hovering over the link icon displays an overlay card showing the page or pages from which a given fragment of text originates.
The element aims to more clearly indicate the source of a specific sentence or paragraph generated in the AI summary. Instead of a subtle link indicator, the user receives a visual card with additional page information.
Early observations appeared on X, and similar screenshots were later published by other users, suggesting the test covers a wider audience.
It is unclear whether the change is an extension of earlier contextual link experiments in AI Overviews, but the direction seems consistent — Google is looking for ways to increase the visibility and attractiveness of cited sources in AI‑generated answers.
The open question remains: will the expanded cards actually translate into higher click‑through rates and real traffic to websites? Google has not provided data confirming the effectiveness of the tested solution.
A Reddit question asked whether a site operating for only a year can realistically outrank a competitor present for four years. John Mueller from Google replied in his characteristic style — “it depends”.
Mueller emphasized that a site’s age alone does not guarantee an advantage. What matters is what happened during that time. Four years of active development, building valuable content, user relationships, and brand recognition is very different from four years of mere existence.
Conversely, a one‑year‑old site that has intensively worked on content quality, user experience, and promotion can build its position faster than an older but passive competitor.
As Mueller noted, aging is inevitable, but value must be earned.
He suggested owners of young sites should objectively evaluate differences between projects — not only from a search optimization perspective but also from the viewpoint of users and the market.
If a one‑year‑old site is based on modern architecture and built according to best technical practices from the start, it likely does not require major technical adjustments. In that case, growth potential may lie more in strategy and marketing than further technical optimization.
In short: domain age does not provide an automatic advantage; the quality of the actions taken over time determines the outcome.