A discussion on LinkedIn about LLM visibility and the tools for tracking it explored how SEOs are approaching optimization for LLM-based search. The answers provided suggest that tools for LLM-focused SEO are gaining maturity, though there is some disagreement about what exactly should be tracked.
Joe Hall (LinkedIn profile) raised a series of questions on LinkedIn about the usefulness of tools that track LLM visibility. He didn’t explicitly say that the tools lacked utility, but his questions appeared intended to open a conversation.
He wrote:
“I don’t understand how these systems that claim to track LLM visibility work. LLM responses are highly subjective to context. They are not static like traditional SERPs are. Even if you could track them, how can you reasonably connect performance to business objectives? How can you do forecasting, or even build a strategy with that data? I understand the value of it from a superficial level, but it doesn’t really seem good for anything other than selling a service to consultants that don’t really know what they are doing.”
Joshua Levenson (LinkedIn profile) answered that today’s SEO tools are out of date, remarking:
“People are using the old paradigm to measure a new tech.”
Joe Hall responded with “Bingo!”
Lily Ray (LinkedIn profile) responded to say that the entities that LLMs fall back on are a key element to focus on.
She explained:
“If you ask an LLM the same question thousands of times per day, you’ll be able to average the entities it mentions in its responses. And then repeat that every day. It’s not perfect but it’s something.”
Hall asked her how that’s helpful to clients and Lily answered:
“Well, there are plenty of actionable recommendations that can be gleaned from the data. But that’s obviously the hard part. It’s not as easy as ‘add this keyword to your title tag.’”
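Lily’s entity-averaging approach is simple enough to sketch. The snippet below is a minimal illustration (not how any particular tracking tool works), and `ask_llm` and `extract_entities` are hypothetical stand-ins for an LLM API call and an entity extractor:

```python
from collections import Counter

def entity_mention_rates(ask_llm, extract_entities, prompt, runs=100):
    """Ask the same question repeatedly and average which entities come back.

    ask_llm(prompt) -> str           # hypothetical LLM call
    extract_entities(text) -> list   # hypothetical entity extractor
    """
    counts = Counter()
    for _ in range(runs):
        response = ask_llm(prompt)
        counts.update(set(extract_entities(response)))  # count each entity once per response
    # Share of responses that mention each entity, most frequent first.
    return {entity: n / runs for entity, n in counts.most_common()}
```

Run daily, the resulting mention rates give you the kind of rolling average Lily describes.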
Dixon Jones (LinkedIn profile) responded with a brief comment to introduce Waikay, which stands for What AI Knows About You. He said that his tool uses entity and topic extraction, and bases its recommendations and actions on gap analysis.
Ryan Jones (LinkedIn profile) responded to discuss how his product SERPRecon works:
“There’s 2 ways to do it. one – the way I’m doing it on SERPrecon is to use the APIs to monitor responses to the queries and then like LIly said, extract the entities, topics, etc from it. this is the cheaper/easier way but is easiest to focus on what you care about. The focus isn’t on the exact wording but the topics and themes it keeps mentioning – so you can go optimize for those.
The other way is to monitor ISP data and see how many real user queries you actually showed up for. This is super expensive.
Any other method doesn’t make much sense.”
And in another post followed up with more information:
“AI doesn’t tell you how it fanned out or what other queries it did. people keep finding clever ways in the network tab of chrome to see it, but they keep changing it just as fast.
The AI Overview tool in my tool tries to reverse engineer them using the same logic/math as their patents, but it can never be 100%.”
Then he explained how it helps clients:
“It helps us in the context of, if I enter 25 queries I want to see who IS showing up there, and what topics they’re mentioning so that I can try to make sure I’m showing up there if I’m not. That’s about it. The people measuring sentiment of the AI responses annoy the hell out of me.”
Although Hall stated that the “traditional” search results were static, in contrast to LLM-based search results, it must be pointed out that the old search results were in a constant state of change, especially after the Hummingbird update which enabled Google to add fresh search results when the query required it or when new or updated web pages were introduced to the web. Also, the traditional search results tended to have more than one intent, often as many as three, resulting in fluctuations in what’s ranking.
LLMs also show diversity in their search results but, in the case of AI Overviews, Google shows a few results for the query and then does the “fan-out” thing to anticipate follow-up questions that naturally follow as part of discovering a topic.
Billy Peery (LinkedIn profile) offered an interesting insight into LLM search results, suggesting that the output exhibits a degree of stability and isn’t as volatile as commonly believed.
He offered this truly interesting insight:
“I guess I disagree with the idea that the SERPs were ever static.
With LLMs, we’re able to better understand which sources they’re pulling from to answer questions. So, even if the specific words change, the model’s likelihood of pulling from sources and mentioning brands is significantly more static.
I think the people who are saying that LLMs are too volatile for optimization are too focused on the exact wording, as opposed to the sources and brand mentions.”
Peery makes an excellent point by noting that some SEOs may be getting hung up on the exact keyword matching (“exact wording”) and that perhaps the more important thing to focus on is whether the LLM is linking to and mentioning specific websites and brands.
Awareness of LLM tools for tracking visibility is growing. Marketers are reaching some agreement on what should be tracked and how it benefits clients. While some question the strategic value of these tools, others use them to identify which brands and themes are mentioned, adding that data to their SEO mix.
Featured Image by Shutterstock/TierneyMJ
Below are the cable news ratings for the second quarter of 2025.
The threat of widespread conflict in the Middle East and the U.S. involvement were the main news stories of the second quarter.
Despite these bubbling tensions, the second-quarter cable news ratings were not as impressive as those in the first quarter, as all three cable news networks experienced declines in total viewers and the Adult 25-54 demo during primetime.
MSNBC was the lone bright spot for the quarter as it was the only news network not to lose viewers in either of the measured categories during total day. Rachel Maddow‘s return to her once-a-week schedule did, however, hurt the network’s primetime momentum.
Fox News continued its streak as cable news’ most-watched network and also surged past a couple of the broadcast networks, ABC (2.977 million) and NBC (2.704 million), in primetime in total viewers. This was the network’s second-highest-rated second quarter in its history for weekday total-day viewers, trailing only its coverage of the COVID-19 pandemic in 2020.
Fox News Channel
Fox News averaged 2.633 million total primetime viewers and 304,000 Adults 25-54 viewers in Q2 2025. During total day, Fox News had 1.632 million total viewers and 202,000 demo viewers.
Compared to the first quarter, Fox News was down -13% in total viewers and -20% in the demo during primetime. When looking at total day, it was down by -15% and -18% in total viewers and the demo, respectively.
When comparing its performance in the same quarter of 2024, it saw a +25% increase in total viewers and a +34% increase in the demo during primetime. Looking at total day, the network was up +25% and +31% in total viewers and the demo, respectively. Fox News was the only network with year-over-year same-quarter growth.
Fox News’ domination at the top of cable news has now stretched to 94 consecutive quarters, according to Nielsen Media Research.
Fox News was the most-watched cable network during primetime with total viewers and was the third most-watched network in the demo. During total day, Fox News was again on top in total viewers and finished in second place in the demo.
MSNBC
During Q2 2025, MSNBC averaged 1.008 million total primetime viewers and 91,000 primetime demo viewers. During total day, MSNBC had 596,000 total viewers and 57,000 A25-54 viewers.
When looking at MSNBC’s performance vs. the first quarter in 2025, it was down in total viewers and the demo by -2% and -5%, respectively, during primetime. However, during total day, the network was up +1% in total viewers and was flat in the demo—the only network not to lose viewers during this daypart.
When compared to the second quarter of 2024, MSNBC was down -15% in total viewers and -20% in the A25-54 demo during primetime. It was also down -26% in total viewers and -31% in the demo during total day.
MSNBC was the fourth most-watched cable news network in total viewers and No. 15 in the demo. During total day, MSNBC was in second place in total viewers and was 11th in the demo.
CNN
CNN averaged 538,000 total primetime viewers and 105,000 primetime demo viewers in Q2 2025. During total day, the network had 406,000 total viewers and 71,000 A25-54 viewers.
Compared to the first quarter, CNN was down -4% and -13% in total viewers and the demo, respectively, in primetime. When looking at how it performed during total day, it was down -5% in total viewers and -10% in the demo.
The network experienced a -13% decline in total viewers and a -15% decline in A25-54 during primetime compared to the second quarter of 2024. During total day, it had losses of -14% and -16% in total viewers and the demo, respectively.
CNN finished in 6th place during primetime with total viewers and tied for 9th in the demo with HGTV. During total day, CNN was No. 5 in total viewers and landed in sixth place in the demo.
2025 Q2 Cable News Ratings (Nielsen Live+SD data):
PRIMETIME
• Total Viewers: Fox News 2,663,000 | MSNBC 1,008,000 | CNN 538,000
• A25-54: Fox News 304,000 | MSNBC 91,000 | CNN 105,000

TOTAL DAY
• Total Viewers: Fox News 1,632,000 | MSNBC 596,000 | CNN 406,000
• A25-54: Fox News 202,000 | MSNBC 57,000 | CNN 71,000
A story of a female footballer, as well as an outline of the hidden history of the women’s game, Anna’s debut graphic novel was sparked a few years ago when she was thinking about the 1921 ban on women playing on FA-affiliated pitches. “I started looking closely at team photos from that time and found them incredibly moving: the keen faces, muddy knees, and striped shirts of the players. I wondered what happened to all those women who had so many possibilities opened up for them, only to be shut down.”
A name that kept coming up on the old women’s team sheets Anna uncovered in her research was Florrie, so the illustrator decided to give her fictional character that name and build an adventure touching on both real and fictional events rooted in the period: “huge crowds at matches in London and Preston, international fixtures, dances at lesbian club Le Monocle in Paris and the devastating consequences of a ban on women playing a game deemed ‘unsuitable’ for women.”
The novel follows Florrie’s great-niece, who discovers she was secretly a footballing legend in the early 20th century and unearths Florrie’s hidden history “both on and off the pitch” – a narrative that “has some overlap with my experience of playing football, and with that of many players I know”, Anna tells us. The illustrator slowly formed the visual world of the book from a host of old photographs, using even the smallest historical details to inform her drawing, from the “berets the 1920s Parisian football players wore” to “the rides running at Blackpool Pleasure Beach in 1921”, she says.
An ode to unforgettable women in the sport, and a beautiful queer love story, Anna’s only hope for her debut graphic novel is that its tale touches readers with “the joy of playing football, the feeling of first love, and the discovery and celebration of a past that should be better known”.
A new study analyzing 10,000 keywords reveals that Google’s AI Mode delivers inconsistent results.
The research also shows minimal overlap between AI Mode sources and traditional organic search rankings.
Published by SE Ranking, the study examines how AI Mode performs in comparison to Google’s AI Overviews and the top 10 organic search results.
“The average overlap of exact URLs between the three datasets was just 9.2%,” the study notes, illustrating the volatility.
To test consistency, researchers ran the same 10,000 keywords through AI Mode three times on the same day. The results varied most of the time.
In 21.2% of cases, there were no overlapping URLs at all between the three sets of responses.
Domain-level consistency was slightly higher, at 14.7%, indicating AI Mode may cite different pages from the same websites.
Only 14% of URLs in AI Mode responses matched the top 10 organic search results for the same queries. When looking at domain-level matches, overlap increased to 21.9%.
In 17.9% of queries, AI Mode provided zero overlap with organic URLs, suggesting its selections could be independent of Google’s ranking algorithms.
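For a rough sense of how overlap figures like these can be computed, here is a minimal sketch (my own illustration, not SE Ranking’s methodology) that compares the URLs and domains cited across repeated runs of the same query:

```python
from urllib.parse import urlparse

def pairwise_overlap(sets):
    """Average share of items shared between each pair of result sets,
    relative to the smaller set of the pair."""
    pairs = [(a, b) for i, a in enumerate(sets) for b in sets[i + 1:]]
    if not pairs:
        return 0.0
    return sum(len(a & b) / min(len(a), len(b)) for a, b in pairs if a and b) / len(pairs)

def ai_mode_consistency(runs):
    """runs: list of URL lists cited by repeated runs of the same query."""
    url_sets = [set(r) for r in runs]
    domain_sets = [{urlparse(u).netloc for u in r} for r in runs]
    return pairwise_overlap(url_sets), pairwise_overlap(domain_sets)
```

Domain-level overlap will usually come out higher than URL-level overlap, which matches the pattern the study reports.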
On average, each AI Mode response contains 12.6 citations.
The most common format is block links (90.8%), followed by in-text links (8.9%) and AIM SERP-style links (0.3%), which resemble traditional search engine results pages (SERPs).
Despite the volatility, some domains consistently appeared across all tests.
Google properties were cited most frequently, accounting for 5.7% of all links. These were mostly Google Maps business profiles.
Comparing AI Mode to AI Overviews, researchers found an average URL overlap of just 10.7%, with domain overlap at 16%.
This suggests the two systems operate under different logic despite both being AI-driven.
The high volatility of AI Mode results presents new challenges and new opportunities.
Because results can vary even for identical queries, tracking visibility is more complex.
However, this fluidity also creates more openings for exposure. Unlike traditional search results, where a small set of top-ranking pages often dominate, AI Mode appears to refresh its citations frequently.
That means publishers with relevant, high-quality content may have a better chance of appearing in AI Mode answers, even if they’re not in the organic top 10.
To adapt to this environment, SEOs and content creators will need to adjust how they track visibility and where they focus their optimization efforts.
For more, see the full study from SE Ranking.
Featured Image: Roman Samborskyi/Shutterstock
You’ve probably noticed marketing is becoming increasingly complex. It requires diverse skill sets and close coordination with colleagues within and beyond the marketing team.
But even in this cross-functional world, many teams struggle with internal silos — departments or individuals operating in isolation, hindering efficiency, consistency and results.
Let’s explore six silos we regularly see in marketing and revenue teams. Because it’s hard to break down silos (otherwise, we wouldn’t be talking about them so often), we’ll also provide actionable strategies for fostering collaboration and tips for getting crucial buy-in.
In other words, we’re going to develop a plan for breaking down each of the six silos.
Marketing teams often plan and execute campaigns in isolation, without much input from teams like sales, product or customer success. This results in campaigns that fail to align with sales goals, product launches or customer needs.
From the beginning, you need to involve key stakeholders from sales, product development and customer experience (CX) in the campaign planning process.
Start with one upcoming, high-priority campaign. Instead of the usual planning meeting, invite a representative from sales leadership, a product manager and a customer success manager to a dedicated “Campaign Kick-off and Alignment Workshop.”
Before the workshop, circulate a clear agenda and initial campaign brief (even if it’s a rough draft). Ask attendees to come prepared with their team’s priorities, challenges and insights related to the campaign’s theme.
Use the workshop to discuss target audience, key messages, sales enablement needs, product features to highlight and potential customer pain points.
For your sales leadership, emphasize how early involvement ensures campaigns generate qualified leads that sales can close, reducing wasted effort on both sides. Highlight the opportunity to shape messaging that truly resonates with prospects.
Stress to the product team that their input guarantees campaigns accurately represent product features and benefits, preventing miscommunication and setting correct customer expectations. It’s also a chance to gather early market feedback.
Explain to CX and customer success how their participation ensures campaigns address common customer questions or pain points, leading to a smoother post-purchase experience and reducing support tickets.
Dig deeper: 7 vanity metrics marketers should avoid, and 7 to replace them
Marketing often takes sole ownership of website content. As a result, you miss valuable insights and resources from subject matter experts in areas like product, engineering, customer service and legal. This lack of diversity in topics and voices can lead to generic, less authoritative content or missed opportunities to improve SEO and distribute thought leadership.
Instead of leaving website content solely to marketing, empower and enable experts across the organization to contribute content. Marketing’s role is to provide strategic oversight, editing and distribution of that content.
Begin by identifying your experts. Find the individuals or teams in your organization with deep knowledge on topics relevant to your audience that marketing might not cover in detail.
Next, develop clear, simple guidelines for content submission (e.g., topic suggestions, outline templates, desired tone, etc.).
Then work with one enthusiastic SME to co-create a high-value piece like a technical blog post or an industry trends article. Marketing handles the editing, SEO optimization and promotion.
Be sure to showcase the SME’s contribution and its positive impact, such as high engagement, traffic or expert credibility.
Subject matter experts and department heads should use content contributions to build personal and departmental thought leadership, enhance the company’s reputation, and directly contribute to business goals. Offer marketing support to minimize their time investment.
For leadership, demonstrate how this approach scales content creation, leverages internal expertise for more authoritative content and will help improve SEO and brand perception. Show them examples of how competitors feature internal experts on their websites.
Large marketing organizations include teams such as demand gen, brand, content and operations. Each team often tracks and reports on its isolated metrics, optimizing for different outcomes. This makes it challenging to understand overall marketing effectiveness or attribute success across the customer journey.
Develop unified dashboards that present key performance indicators (KPIs) across the entire marketing funnel, showcasing how different activities contribute to shared business objectives.
Start the transition by gathering leaders from various marketing functions and even sales and CX to define what a qualified lead means, what customer acquisition cost truly entails, and the key milestones in the customer journey.
Agree on three to five high-level, business-oriented KPIs everyone contributes to (examples include marketing-sourced pipeline, customer lifetime value, cost per MQL).
Then build a prototype dashboard using an existing BI tool or a spreadsheet, pulling data from various sources to visualize the agreed-upon KPIs.
Schedule recurring meetings to review the shared dashboard, fostering discussions about how each team’s efforts impact the collective goal.
Explain to individual team leads how shared metrics provide a clearer understanding of their team’s impact on overall business success, justifying their budget and resources more effectively. It also helps identify bottlenecks that might not be visible from their siloed view.
For marketing leadership, focus on improved transparency, better resource allocation and a unified view of marketing’s contribution to revenue, moving beyond individual campaign metrics to strategic business impact.
Dig deeper: The anatomy of marketing funnel automation
Some marketing operations teams manage marketing technology in isolation, without sufficient input from other marketing teams, IT, sales or revenue operations. That leads to underutilized features, integration issues or a tech stack that doesn’t fully support the needs of other teams.
Establish a cross-functional committee or regular forum to review the current martech stack, identify new needs and collaboratively plan future investments and integrations.
Start by soliciting input from all key users of martech about their current pain points, desired functionalities and integration needs.
Next, form a small working group with representatives from marketing ops, IT, marketing and sales or revenue ops. Meet regularly to review current tools, discuss issues and prioritize upcoming projects.
Then develop a visual roadmap for martech improvements, upgrades and new tool evaluations, ensuring everyone understands the plan and their role in it.
For your marketing team, highlight how their direct input will lead to tools that better serve their needs, reduce manual work and improve campaign performance. Frame the conversation as, “your chance to shape the tools you use.”
For the IT team, you’ll want to emphasize how early collaboration reduces shadow IT, improves data security, streamlines integrations and avoids last-minute fire drills. Show how a proactive approach saves them time and resources in the long run.
When talking to sales operations or revenue operations, point out how better integrated and optimized marketing tools lead to cleaner sales data, improved lead routing and more effective sales enablement assets.
The various teams in your organization collect, store and interpret data in different, inconsistent ways. Your marketing team might use one definition for a lead, while sales uses another. Customer service data might not integrate with marketing data, creating gaps in the view of the customer. The result is conflicting reports and inefficient handoffs.
Establishing common definitions for key marketing and sales terms (e.g., MQL, SQL and opportunity) will help you work toward a single source of truth for customer data, often relying on CRM and marketing automation integration.
Start by creating a simple, shared glossary of common marketing and sales terms with agreed-upon definitions. Distribute it widely and review it regularly.
Next, conduct a high-level audit of data fields in your CRM, marketing automation platform and other key systems to identify inconsistencies and redundancies.
Then prioritize one critical integration (e.g., lead qualification data from marketing automation to CRM) to ensure consistency in lead definitions and scoring.
In the future, implement routine checks for data hygiene and completeness, with assigned ownership for different data sets.
Highlight for your marketing and sales teams how consistent data leads to more accurate reporting, better targeting, smoother handoffs and ultimately — the thing everyone wants — more revenue.
Emphasize to your leadership how a single source of truth enables more reliable forecasting, better strategic decisions, and a holistic view of the customer journey, reducing data-related conflicts and wasted time.
Your data analytics teams should recognize how this streamlines their work. They will no longer have to constantly reconcile disparate data sets, allowing them to focus on deeper insights.
One of the longest-standing silos in business is the relationship between marketing and sales, specifically the part where marketing throws leads over the fence to sales without clear communication, agreed-upon Service Level Agreements (SLAs) or feedback loops. The result is lost leads, frustration and a blame game.
You need to formally define the responsibilities of both marketing and sales at each stage of the lead lifecycle, and establish clear expectations for lead quality, follow-up times and feedback mechanisms.
Begin with a joint SLA workshop that brings marketing and sales leadership together to define lead quality criteria, handoff responsibilities, follow-up times and feedback mechanisms.
Next, ensure your CRM and marketing automation platforms are configured to support the agreed-upon handoff process, including automated notifications and status updates.
Schedule weekly or bi-weekly meetings between marketing and sales managers to review pipeline, discuss lead quality and address any handoff issues.
Show your sales leaders how clear SLAs lead to higher quality leads that are acted upon faster, increasing conversion rates and sales efficiency. Emphasize the gains from reduced time wasted on unqualified leads.
Explain to your marketing leaders how a formal handoff process improves accountability, reduces the number of lost leads, and provides valuable feedback to optimize marketing efforts and prove ROI.
Individual team members in both sales and marketing need to see how clear expectations reduce frustration, improve communication and help everyone focus on their respective strengths to achieve shared revenue goals.
Dig deeper: CMOs, CEOs and marketers are all struggling with martech data issues
The Nothing Phone (3) has been announced at a special reveal event in London, and once again, the UK-based design-first tech brand is pushing against the grain. The third-generation smartphone from Carl Pei’s designer brand takes what worked before and evolves it, in form, function, and philosophy, with a focus on creativity, intelligence, and pure design joy.
Now, it’s no secret I like the Nothing brand and its approach. I reviewed the Nothing Phone (2) after buying it, and have been using it, on and off, alongside the Nothing Phone (2a) and (3a), ever since. The new Nothing Headphone (1) is a new obsession too. But why should you care as much as I do about the brand’s new, and arguably first, true flagship smartphone?
Nothing Phone (3) prices:
• 12 GB + 256 GB: £799 / $799 / €799
• 16 GB + 512 GB: £899 / $899 / €899
Today’s Memo is a full refresh of one of the most important frameworks I use with clients – and one I’ve updated heavily based on how AI is reshaping search behavior…
…I’m talking about the keyword universe. 🪐
In this issue, I’m digging into what a keyword universe is, why it still matters, and how to build and maintain your own.
Initiating liftoff … we’re heading into search space. 🧑🚀🛸
A single keyword no longer represents a single intent or SERP outcome. In today’s AI-driven search landscape, we need scalable structures that map and evolve with intent … not just “rank.”
Therefore, the classic approach to keyword research is outdated.
In fact, despite all the boy-who-cried-wolf “SEO is dead!” claims across the web, I’d argue that keyword-based SEO is actually dead, which I wrote about in Death of the Keyword.
And it has been for a while.
But the SEO keyword universe is not. And I’ll explain why.
A keyword universe is a big pool of the search language your target audience uses to find businesses like yours.
It surfaces the most important queries and phrases (i.e., keywords) at the top and lives in a spreadsheet or database, like BigQuery.
Instead of hyperfocusing on specific keywords or doing a keyword sprint every so often, you need to build a keyword universe that you’ll explore and conquer across your site over time.
One problem I tried to solve with the keyword universe is that keyword and intent research is often static.
It happens maybe every month or quarter, and it’s very manual. A keyword universe is both static and dynamic. While that might sound counterintuitive, here’s what I mean:
The keyword universe is like a pool that you can fill with water whenever you want. You can update it daily, monthly, quarterly – whenever. It always surfaces the most important intents at the top.
For the majority of brands, some keyword-universe-building tasks only need to be done once (or once on product/service launch), while other tasks might be ongoing. More on this below.
Within your database, you’ll assign weighted scores to prioritize content creation, but that scoring system might shift over time based on changes in initiatives, product/feature launches, and discovering topics with high conversion rates.
The goal in building your keyword universe is to create a keyword pipeline for content creation – one that you prioritize by business impact.
Keyword universes elevate the most impactful topics to the top of a list, which allows you to focus your planning capacity on the work that matters most.
A big problem in SEO is knowing which keywords convert to customers before targeting them.
One big advantage of the keyword universe (compared to research sprints) is that new keywords automatically fall into a natural prioritization.
And with the advent of AI in search, like AI Overviews/Google’s AI Mode, this is more important than ever.
The keyword universe mitigates that problem through a clever sorting system.
SEO pros can continuously research and launch new keywords into the universe, while writers can pick keywords off the list at any time.
Think fluid collaboration.
Keyword universes are mostly relevant for companies that have to create content themselves instead of leaning on users or products. I call them integrators.
Typical integrator culprits are SaaS, DTC, or publishing businesses, which often have no predetermined, product-led SEO structure for keyword prioritization.
The opposite is aggregators, which scale organic traffic through user-generated content (UGC) or product inventory. (Examples include sites like TripAdvisor, Uber Eats, TikTok, and Yelp.)
The keyword path for aggregators is defined by their page types. And the target topics come out of the product.
Yelp, for example, knows that “near me keywords” and query patterns like “{business} in {city}” are important because that’s the main use case for their local listing pages.
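For an aggregator, that kind of pattern-driven keyword generation is largely mechanical. A minimal sketch, with made-up inventory data rather than Yelp’s actual system:

```python
# Hypothetical inventory data; an aggregator's page types define the query patterns.
businesses = ["plumber", "sushi restaurant", "hair salon"]
cities = ["Austin", "Denver", "Portland"]
patterns = ["{business} in {city}", "best {business} in {city}", "{business} near me"]

# Expand every pattern against the inventory; the set removes duplicates
# from patterns that don't use every placeholder.
keywords = sorted({
    pattern.format(business=b, city=c)
    for pattern in patterns
    for b in businesses
    for c in cities
})
```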
Integrators don’t have that luxury. They need to use other signals to prioritize keywords for business impact.
Creating your keyword universe is a three-step process.
And I’ll bet it’s likely you have old spreadsheets of keywords littered throughout your shared drives, collecting dust.
Guess what? You can add them to this process and make good use of them, too. (Finally.)
Keyword mining is the science of building a large list of keywords and a bread-and-butter workflow in SEO.
The classic way is to use a list of seed keywords and throw them into third-party rank trackers (like Semrush or Ahrefs) to get related terms and other suggestions.
That’s a good start, but that’s what your competitors are doing too.
You need to look for fresh ideas that are unique to your brand – data that no one else has…
…so start with customer conversations.
Dig into those conversations wherever your team records them, and extract the key phrasing, questions, and terms your audience actually uses.
But don’t ignore other valuable sources of keyword ideas:
Semrush’s list of paid keywords a site bids on (Image Credit: Kevin Indig)
The goal of the first step is to grow our universe with as many keywords as we can find.
(Don’t obsess over relevance. That’s Step 2.)
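A minimal sketch of that collection step, assuming you’ve exported each source to a CSV with at least a `keyword` column (the file names are placeholders):

```python
import pandas as pd

# Hypothetical exports: rank-tracker suggestions, paid-search terms,
# Search Console queries, and phrases mined from customer conversations.
sources = {
    "rank_tracker": "related_keywords.csv",
    "paid_search": "paid_keywords.csv",
    "gsc": "gsc_queries.csv",
    "customer_calls": "call_phrases.csv",
}

frames = []
for name, path in sources.items():
    df = pd.read_csv(path)[["keyword"]]
    df["source"] = name
    frames.append(df)

universe = pd.concat(frames, ignore_index=True)
universe["keyword"] = universe["keyword"].str.lower().str.strip()

# One row per keyword, keeping track of every source that surfaced it.
universe = (
    universe.groupby("keyword")["source"]
    .apply(lambda s: sorted(set(s)))
    .reset_index()
)
```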
During this phase, some keyword universe research tasks will be one-time-only, and some will need refreshing or repeating over time; keep track of which is which as you build your list.
Step 2, sorting the long list of mined queries, is the linchpin of keyword universes.
If you get this right, you’ll be installing a powerful SEO prioritization system for your company.
Getting it wrong is just wasting time.
Anyone can create a large list of keywords, but creating strong filters and sorting mechanisms is hard.
The old school way to go about prioritization is by search volume.
Throw that classic view out the window: We can do better than that.
Most times, keywords with higher search volume actually convert less well – or get no real traffic at all due to AIOs.
As I mentioned in Death of the Keyword:
A couple of months ago, I rewrote my guide to inhouse SEO and started ranking in position one. But the joke was on me. I didn’t get a single dirty click for that keyword. Over 200 people search for “in house seo” but not a single person clicks on a search result.
By the way, Google Analytics only shows 10 clicks from organic search over the last 3 months. So, what’s going on? The 10 clicks I actually got are not reported in GSC (privacy… I guess?), but the majority of searchers likely click on one of the People Also Asked features that show up right below my search result.
Keeping that in mind about search volume, since we don’t know which keywords are most important for the business before targeting them – and we don’t want to make decisions by volume alone – we need sorting parameters based on strong signals.
We can summarize several signals for each keyword and sort the list by total score.
That’s exactly what I’ve done with clients like Ramp, the fastest-growing fintech startup in history, to prioritize content strategy.
Sorting is about defining an initial set of signals and then refining it with feedback.
You’ll start by giving each signal a weight based on your best guess – and then refine it over time.
When you build your keyword universe, you’ll want to define an automated logic (say, in Google Sheets or BigQuery).
Your logic could be a simple “if this then that,” like “if keyword is mentioned by customer, assign 10 points.”
There are a number of potential signals you could draw on here, and not all of them need to be used.
You should give each signal a weight from 0-10 or 0-3, with the highest number being strongest and zero being weakest.
Your scoring will be unique to you based on business goals.
Let’s pause here for a moment: I created a simple tool that will make this work way easier, saving a lot of time and trial + error. (It’s below!) Premium subscribers get full access to tools like this one, along with additional content and deep dives.
But let’s say you’re prioritizing building content around essential topics and have goals set around growing topical authority, and that you’re using the 0-10 scale. In that case, the signals tied most closely to those goals would get the heaviest weights.
The sum of all scores for each query in your universe then determines the priority sorting of the list.
Keywords with the highest total score land at the top and vice versa.
New keywords on the list fall into a natural prioritization.
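To make the if-this-then-that scoring concrete, here is a minimal sketch with made-up signals and weights; your own signals, weights, and keywords will differ:

```python
# Example weights on a 0-10 scale; tune these to your own business goals.
WEIGHTS = {
    "mentioned_by_customer": 10,
    "core_topic": 8,
    "appears_in_sales_calls": 6,
    "already_have_ranking_page": 3,
}

def priority_score(signals):
    """signals: dict of signal name -> bool for one keyword."""
    return sum(weight for name, weight in WEIGHTS.items() if signals.get(name))

universe = {
    "in house seo": {"mentioned_by_customer": True, "core_topic": True},
    "seo reporting template": {"core_topic": True, "appears_in_sales_calls": True},
}

# Highest total score lands at the top of the list.
prioritized = sorted(universe, key=lambda kw: priority_score(universe[kw]), reverse=True)
```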
Important note: If your research shows that sales are connected to queries related to current events, news, updates in research reports, etc., those should be addressed as soon as possible.
(Example: If your company sells home solar batteries and recent weather news increases demand due to a specific weather event, make sure to prioritize that in your universe ASAP.)
Amanda’s thoughts: I might get some hate for this stance, but if you’re a new brand or site just beginning to build a content library and you fall into the integrator category, focus on building trust first by securing visibility in organic search results where you can as quickly as you can.
I know, I know: What about conversions? Conversion-focused content is crucial to the long-term success of the org.
But to set yourself apart, you need to actually create the content that no one is making about the questions, pain points, and specific needs your target audience is voicing.
If your sales team repeatedly hears a version of the same question, it’s likely there’s no easy-to-find answer to the question – or the current answers out there aren’t trustworthy. Trust is the most important currency in the era of AI-based search. Start building it ASAP. Conversions will follow.
Models get good by improving over time.
Like a large language model that learns from fine-tuning, we need to adjust our signal weighting based on the results we see.
We can go about fine-tuning in two ways:
1. Anecdotally, conversions should increase as we build new content (or update existing content) based on the keyword universe prioritization scoring.
Otherwise, sorting signals have the wrong weight, and we need to adjust.
2. Another way to test the system is a snapshot analysis.
To do so, you’ll run a comparison of two sets of data: the keywords that attract the most organic visibility and the pages that drive the most conversions, side-by-side with the keywords at the top of the universe.
Ideally, they overlap. If they don’t, aim to adjust your sorting signals until they come close.
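That snapshot check can be as simple as comparing the top slices of the two lists. A minimal sketch (the 30% threshold in the comment is an arbitrary placeholder):

```python
def snapshot_overlap(top_of_universe, top_converting, n=50):
    """Share of the top-n universe keywords that also appear among the
    keywords/pages driving the most visibility and conversions."""
    a, b = set(top_of_universe[:n]), set(top_converting[:n])
    return len(a & b) / n

# If overlap is low (say, under 0.3), revisit your signal weights.
```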
Look, there’s no point in doing all this work unless you’re going to maintain the hygiene of this data over time.
This is what you need to keep in mind:
1. Once you’ve created a page that targets a keyword in your list, move it to a second tab on the spreadsheet or another table in the database.
That way, you don’t lose track and end up with writers creating duplicate content.
2. Build custom click curves for each page type (blog article, landing page, calculator, etc.) when including traffic and revenue projections.
Assign each step in the conversion funnel a conversion rate – like visit ➡️ newsletter sign-up, visit ➡️ demo, visit ➡️ purchase – and multiply search volume by the expected CTR at the estimated position on your custom click curve, by the conversion rates, and by lifetime value. (Fine-tune regularly.)
Here’s an example: MSV * CTR (pos 1) * CVRs * Lifetime value = Revenue prediction
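As a worked example of that formula (every number below is a placeholder, including the click-curve CTRs):

```python
# Hypothetical custom click curve for blog articles: expected CTR by position.
BLOG_CLICK_CURVE = {1: 0.22, 2: 0.12, 3: 0.08, 4: 0.05, 5: 0.03}

def revenue_prediction(msv, expected_position, cvr_signup, cvr_purchase, lifetime_value):
    """MSV * CTR(pos) * CVRs * lifetime value, per the formula above."""
    ctr = BLOG_CLICK_CURVE.get(expected_position, 0.01)
    return msv * ctr * cvr_signup * cvr_purchase * lifetime_value

# 1,000 searches/month, expected position 2, 10% sign-up rate, 5% purchase rate, $500 LTV:
estimate = revenue_prediction(1000, 2, 0.10, 0.05, 500)  # 1000 * 0.12 * 0.10 * 0.05 * 500 = $300/month
```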
3. GPT for Sheets or the Meaning Cloud extension for Google Sheets can speed up assigning each keyword to a topic.
Meaning Cloud allows us to easily train an LLM by uploading a spreadsheet with a few tagged keywords.
GPT for Sheets connects Google Sheets with the OpenAI API so we can give prompts like “Which of the following topics would this keyword best fit? Category 1, category 2, category 3, etc.”
LLMs like ChatGPT, Claude, or Gemini have become good enough that you can easily use them to assign topics as well. Just prompt for consistency!
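Here is a minimal sketch of that prompt-based topic assignment using the OpenAI Python SDK; the topic list and model name are placeholders you would swap for your own:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOPICS = ["keyword research", "technical SEO", "content strategy", "analytics"]  # placeholder categories

def assign_topic(keyword):
    """Ask the model to pick the single best-fitting topic for one keyword."""
    prompt = (
        f"Which of the following topics does the keyword '{keyword}' best fit? "
        f"Answer with exactly one topic from this list: {', '.join(TOPICS)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in whatever model you use
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep answers consistent across the whole list
    )
    return response.choices[0].message.content.strip()
```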
4. Categorize the keywords by intent, and then group or sort your sheet by intent. Check out Query Fan Out to learn why.
Don’t build a keyword universe so granular and expansive that you can’t activate it.
If you have a team of in-house strategists and three part-time freelancers, expecting a 3,000-keyword universe to feel doable and attainable is … an unmet expectation.
The old way of doing SEO – chasing high-volume keywords and hoping for conversions – isn’t built for today’s search reality.
Trust is hard to earn. (And traffic is hard to come by.)
The keyword universe gives you a living, breathing SEO operating system. One that can evolve based on your custom scoring and prioritization.
Prioritizing what’s important (sorting) allows us to filter out the noise (distractions, offers, shiny objects) and brings us to where we want to be.
So, start with your old keyword docs. (Or toss them out if they’re irrelevant, aged poorly, or simply hyper-focused on volume.)
Then, dig into what your customers are really asking. Build smart signals. Assign weights. And refine as you go.
This isn’t about perfection. It’s about building a system that actually works for you.
And speaking of building a system…
For premium Growth Memo subscribers, we’ve got a tool that will help save you time and score queries by unique priority weights that you set.
Featured Image: Paulo Bobita/Search Engine Journal
Advertisers aren’t feeling the direct blow of generative AI on traffic the way publishers are. But they see what’s coming. Tripadvisor, for one, is already adjusting its strategy as the foundation of search starts to shift.
It’s not going full Dotdash Meredith – the publisher has openly braced for a future without Google traffic – but a recalibration is clearly underway.
“Google’s AI mode in search is going to eat large chunks of search,” said Matthew Dacey, CMO at Tripadvisor. “It’s going to happen fast as they [Google] push it and more people subsequently adopt it.”
So far that change hasn’t landed with full force on the travel firm. According to Statista, monthly visits hovered between 146 million and 169 million in early 2023. By February 2025, that number had dropped to around 120 million.
Yes, some of that dip chimes with the rollout of AI Overviews – Google’s move to surface answers directly in search results. But the story behind Tripadvisor’s traffic shift is bigger than any one feature. In fact, it’s part of a longer arc: the way people find information is changing. Tripadvisor, like everyone else, is learning how to meet them there.
For Dacey, that starts with repositioning Tripadvisor not just as a utility but as a daily habit – something people turn to regularly, whether they’re browsing, planning or already en route to a holiday. Less search, more morning ritual.
Delivering on that vision comes down to three things: improving the app experience, refreshing the membership program and moving Tripadvisor higher up the funnel – turning the service into a starting point, not just a step along the way.
If it works, the hope is that it will drive more logged-in behavior, particularly on the app.
“Right now we have over 100 million active member accounts but not many of them are using the app,” said Dacey.
That’s where AI comes in. Tripadvisor is building features to anticipate what travelers need before they ask, using the details they share when they begin planning or booking a trip. With that context, the app can push personalized recommendations at just the right moment.
Or as Dacey put it: “All of our push notifications that go out can be those questions that travellers might be asking and served to them using AI based on the user-generated content on our site. It’s contextual information for exactly what’s on someone’s mind.”
A new brand campaign is meant to help land that message. Launching this month, it will mark Tripadvisor’s 25th anniversary while cementing the brand as a direct planning destination – not just a link in Google’s results.
“The way I’d describe what we’re doing is two things: how do we try and be more direct with people and then on the other hand it’s about how we show up where people actually are,” he continued.
One of those places is Perplexity. Tripadvisor’s partnership with the AI search startup, announced earlier this year, lets Perplexity tap into behavioral and preference data that traditional search engines typically can’t access. In return, Tripadvisor’s curated hotel lists appear within Perplexity’s summaries.
Six months in, Dacey said the results are promising – though he didn’t share details. Tripadvisor has said the deal is bringing in more high-intent users, particularly those ready to book.
Eventually, this partnership will expand to include restaurants and experiences. The playbook: generate revenue, drive qualified traffic and carve out a presence in AI-powered discovery environments.
“What tends to work in these environments is longer queries so we’re in a good position at Tripadvisor because longer queries for specific use cases will always find their way to interesting content,” said Dacey. “The question for us then is how do we get credit for that and how we translate that influence into something tangible on our side.”
That optimism points to a larger dynamic. Brands like Tripadvisor may be better positioned than publishers in the AI era. For one, they’re not monetizing pageviews the same way. They can afford to lose some traffic without losing revenue. And unlike publishers, they can shift ad dollars – moving spend into performance or partnership to make up for what’s lost in search.
That’s not to say CMOs like Dacey aren’t concerned. They are. But they also have more options.
“Of course, advertisers are going to feel second-order effects of traffic from search going down like CPCs going up but it’s not comparable to publishers who view that traffic as their lifeblood,” said Tim Hussain, co-founder of AI consulting firm Signal42. “A CMO can always reallocate that money into other channels and escape it – well to a point.”
“We knew things were going to be a lot more difficult when we moved to Somerset,” Ryan says. “However, we thought we’d done a thorough job in weighing up the pros and cons before we moved – career compromises, cost of living, cultural trade-offs. But what caught us off guard were the less obvious provisions and hidden infrastructures we hadn’t realised we relied on in London. It was the lack of creative recruitment agencies or Facebook groups for set designers to sell and share props. It was the reduced word-of-mouth opportunities, or the lack of apps, magazines and influencers keeping you up to date with every exhibition, opening or event. It was the accumulation of these smaller, more invisible gaps that felt the hardest to acclimate to. Only after we moved did we realise how essential those things had been to our creative practices.”
Setting up Makers’ Yard helped in this process a lot. “We didn’t know anyone when we moved to Frome, but the building quickly became a hub for local creatives. The conversations, connections and collaborations that have unfolded in the space have been personally and professionally significant for us. Without those impromptu introductions and serendipitous chats, the transition would have been much harder.” Makers’ Yard became a testament to the power of a concentrated community space – but not everyone can take on a Victorian warehouse renovation. Soon, Ryan saw fellow city-ditchers grapple with the same issues they had faced, whether it was about jobs, events or selling kit. That’s how Ryan and Emma spotted an opportunity for a digital version of what they had created with Makers’ Yard – an alternative type of social network, one that was hyperlocal to Frome and nearby towns, and dedicated to the creative industry there.
The result is M.Y Local Network, a localised social network for creatives. For a £25 annual subscription fee, members gain access to a closed Discord community for Frome and its surrounding area (currently at around 150 members, including organisations and individuals). From there, you’ll find specific groups serving different needs: a jobs board, a professional sell-and-swap, peer advice and recommendations, local events, workplace listings, a library of member-recommended digital tools and online resources (podcasts, funding resources, etc.), as well as a collective Google map.
Google long ago filed a patent for ranking search results by trust. The groundbreaking idea behind the patent is that user behavior can be used as a starting point for developing a ranking signal.
The big idea behind the patent is that the Internet is full of websites all linking to and commenting about each other. But which sites are trustworthy? Google’s solution is to utilize user behavior to indicate which sites are trusted and then use the linking and content on those sites to reveal more sites that are trustworthy for any given topic.
PageRank is basically the same idea, except it begins and ends with one website linking to another website. The innovation of Google’s trust ranking patent is to put the user at the start of that trust chain, like this:
User trusts X Websites > X Websites trust Other Sites > This feeds into Google as a ranking signal
The trust originates from the user and flows to the trusted sites, which themselves provide anchor text, lists of other sites, and commentary about other sites.
That, in a nutshell, is what Google’s trust-based ranking algorithm is about.
The deeper insight is that it reveals Google’s groundbreaking approach to letting users be a signal of what’s trustworthy. You know how Google keeps saying to create websites for users? This is what the trust patent is all about, putting the user in the front seat of the ranking algorithm.
The patent was coincidentally filed around the same period that Yahoo and Stanford University published a Trust Rank research paper which is focused on identifying spam pages.
Google’s patent is not about finding spam. It’s focused on doing the opposite, identifying trustworthy web pages that satisfy the user’s intent for a search query.
The first part of any patent is an Abstract section that offers a very general description of the invention, and that’s what this patent’s Abstract does as well.
Here’s what the Abstract asserts:
“A search engine system provides search results that are ranked according to a measure of the trust associated with entities that have provided labels for the documents in the search results.
A search engine receives a query and selects documents relevant to the query.
The search engine also determines labels associated with selected documents, and the trust ranks of the entities that provided the labels.
The trust ranks are used to determine trust factors for the respective documents. The trust factors are used to adjust information retrieval scores of the documents. The search results are then ranked based on the adjusted information retrieval scores.”
As you can see, the Abstract does not say who the “entities” are nor does it say what the labels are yet, but it will.
The next part is called the Field Of The Invention. The purpose is to describe the technical domain of the invention (which is information retrieval) and the focus (trust relationships between users) for the purpose of ranking web pages.
Here’s what it says:
“The present invention relates to search engines, and more specifically to search engines that use information indicative of trust relationship between users to rank search results.”
Now we move on to the next section, the Background, which describes the problem this invention solves.
This section describes why search engines fall short of answering user queries (the problem) and why the invention solves the problem.
The main problem described is that relevance depends heavily on each individual user’s intent and circumstances, which the query terms alone can’t convey.
This is how the patent explains it:
“An inherent problem in the design of search engines is that the relevance of search results to a particular user depends on factors that are highly dependent on the user’s intent in conducting the search—that is why they are conducting the search—as well as the user’s circumstances, the facts pertaining to the user’s information need.
Thus, given the same query by two different users, a given set of search results can be relevant to one user and irrelevant to another, entirely because of the different intent and information needs.”
Next it goes on to explain that users trust certain websites that provide information about certain topics:
“…In part because of the inability of contemporary search engines to consistently find information that satisfies the user’s information need, and not merely the user’s query terms, users frequently turn to websites that offer additional analysis or understanding of content available on the Internet.”
The rest of the Background section names forums, review sites, blogs, and news websites as places that users turn to for their information needs, calling them vertical knowledge sites. Vertical Knowledge sites, it’s explained later, can be any kind of website.
The patent explains that trust is why users turn to those sites:
“This degree of trust is valuable to users as a way of evaluating the often bewildering array of information that is available on the Internet.”
To recap, the “Background” section explains that the trust relationships between users and entities like forums, review sites, and blogs can be used to influence the ranking of search results. As we go deeper into the patent we’ll see that the entities are not limited to the above kinds of sites, they can be any kind of site.
This part of the patent is interesting because it brings together all of the concepts into one place, but in a general high-level manner, and throws in some legal paragraphs that explain that the patent can apply to a wider scope than is set out in the patent.
Here’s an abbreviated version of the part of the Summary that gives an idea of the inner workings of the invention:
“A user provides a query to the system…The system retrieves a set of search results… The system determines which query labels are applicable to which of the search result documents. … determines for each document an overall trust factor to apply… adjusts the …retrieval score… and reranks the results.”
The above is a general description of the invention.
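The patent stays at that general level and doesn’t publish formulas, but the described flow (retrieve documents, match labels, look up the trust ranks of the entities that provided them, compute a trust factor, adjust the retrieval score, rerank) can be sketched roughly like this. This is my own simplified reading, not Google’s actual math:

```python
def rerank_by_trust(results, annotations, trust_rank, query_labels):
    """results: list of (url, ir_score) pairs from the initial retrieval.
    annotations: url -> list of (label, entity) pairs provided by entities.
    trust_rank: entity -> trust score inferred from user behavior.
    query_labels: labels considered relevant to the query."""
    reranked = []
    for url, ir_score in results:
        # Sum the trust ranks of entities whose labels for this URL match the query labels.
        trust_factor = sum(
            trust_rank.get(entity, 0.0)
            for label, entity in annotations.get(url, [])
            if label in query_labels
        )
        # Adjust the information-retrieval score by the trust factor (simplified).
        reranked.append((url, ir_score * (1.0 + trust_factor)))
    return sorted(reranked, key=lambda pair: pair[1], reverse=True)
```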
The next section, called the Detailed Description, deep dives into the details. At this point it’s becoming increasingly evident that the patent is highly nuanced and cannot be reduced to simple advice like “optimize your site like this to earn trust.”
A large part of the patent hinges on two things: a trust button and a “label:” advanced search operator.
Neither the trust button nor the “label:” query has ever existed. As you’ll see, they are quite probably stand-ins for techniques that Google doesn’t want to explicitly reveal.
The details of this patent are located in four sections within the Detailed Description section of the patent. This patent is not as simple as 99% of SEOs say it is.
The System Overview is where the patent deep dives into the specifics. The following is an overview to make it easy to understand.
1. Explains how the invention (a search engine system) ranks search results based on trust relationships between users and the user-trusted entities who label web content.
2. The patent describes a “trust button” that a user can click that tells Google that a user trusts a website or trusts the website for a specific topic or topics.
3. The patent says a trust related score is assigned to a website when a user clicks a trust button on a website.
4. The trust button information is stored in a trust database that’s referred to as #190.
Here’s what it says about assigning a trust rank score based on the trust button:
“The trust information provided by the users with respect to others is used to determine a trust rank for each user, which is measure of the overall degree of trust that users have in the particular entity.”
The patent refers to the “trust rank” of the user-trusted websites. That trust rank is based on a trust button that a user clicks to indicate that they trust a given website, assigning a trust rank score.
The patent says:
“…the user can click on a “trust button” on a web page belonging to the entity, which causes a corresponding record for a trust relationship to be recorded in the trust database 190.
In general any type of input from the user indicating that such as trust relationship exists can be used.”
The trust button has never existed and the patent quietly acknowledges this by stating that any type of input can be used to indicate the trust relationship.
So what is it? I believe that the “trust button” is a stand-in for user behavior metrics in general, and site visitor data in particular. The patent Claims section does not mention trust buttons at all but does mention user visitor data as an indicator of trust.
Here are several passages that mention site visits as a way to understand if a user trusts a website:
“The system can also examine web visitation patterns of the user and can infer from the web visitation patterns which entities the user trusts. For example, the system can infer that a particular user trust a particular entity when the user visits the entity’s web page with a certain frequency.”
The same thing is stated in the Claims section of the patent, it’s the very first claim they make for the invention:
“A method performed by data processing apparatus, the method comprising:
determining, based on web visitation patterns of a user, one or more trust relationships indicating that the user trusts one or more entities;”
It may very well be that site visitation patterns and other user behaviors are what is meant by the “trust button” references.
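Read that way, the “trust button” reduces to something like a frequency threshold over visit logs. A rough sketch under that assumption (the threshold is arbitrary):

```python
from collections import Counter

def inferred_trust_relationships(visit_log, min_visits=10):
    """visit_log: list of (user, site) visit events.
    Returns user -> set of sites that user is inferred to trust,
    using visit frequency as the assumed trust signal."""
    visit_counts = Counter(visit_log)
    trusted = {}
    for (user, site), count in visit_counts.items():
        if count >= min_visits:
            trusted.setdefault(user, set()).add(site)
    return trusted
```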
The patent defines trusted entities as news sites, blogs, forums, and review sites, but not limited to those kinds of sites, it could be any other kind of website.
Trusted websites create references to other sites and in that reference they label those other sites as being relevant to a particular topic. That label could be an anchor text. But it could be something else.
The patent explicitly mentions anchor text only once:
“In some cases, an entity may simply create a link from its site to a particular item of web content (e.g., a document) and provide a label 107 as the anchor text of the link.”
Although the patent explicitly mentions anchor text only once, there are other passages where anchor text is strongly implied. For example, the patent offers a general description of labels as descriptive or categorical identifiers of the content found on another site:
“…labels are words, phrases, markers or other indicia that have been associated with certain web content (pages, sites, documents, media, etc.) by others as descriptive or categorical identifiers.”
Trusted sites link out to web pages and attach labels to those links. The combination of a label and a URL pattern is called an annotation.
This is how it’s described:
“An annotation 106 includes a label 107 and a URL pattern associated with the label; the URL pattern can be specific to an individual web page or to any portion of a web site or pages therein.”
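To illustrate the label-plus-URL-pattern idea, here is a minimal sketch of an annotation. Using wildcard matching for the URL pattern is my assumption; the patent says only that the pattern can cover a single page or any portion of a site.

```python
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Annotation:
    label: str        # e.g. "Professional review", or simply a link's anchor text
    url_pattern: str  # a single page, or a wildcard covering part of a site

    def matches(self, url: str) -> bool:
        return fnmatch(url, self.url_pattern)

# An illustrative annotation: a trusted site labels part of another site.
annotation = Annotation(label="symptoms", url_pattern="www.yourhealth.com/cancer/*")

print(annotation.matches("www.yourhealth.com/cancer/colon-symptoms"))  # True
print(annotation.matches("www.yourhealth.com/recipes"))                # False
```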
Users can also search with “labels” in their queries by using a (non-existent) “label:” advanced search operator. Those kinds of queries are then matched against the labels that a web page is associated with.
This is how it’s explained:
“For example, a query “cancer label:symptoms” includes the query term “cancer” and a query label “symptoms”, and thus is a request for documents relevant to cancer, and that have been labeled as relating to “symptoms.”
Labels such as these can be associated with documents from any entity, whether the entity created the document, or is a third party. The entity that has labeled a document has some degree of trust, as further described below.”
What is that label in the search query? It could simply be certain descriptive keywords, but the patent doesn’t offer enough clues to speculate further than that.
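For what it’s worth, parsing the patent’s example query is straightforward to sketch. The “label:” operator never shipped; this just mirrors the “cancer label:symptoms” example.

```python
def parse_label_query(query: str):
    """Split a query like 'cancer label:symptoms' into plain terms and labels."""
    terms, labels = [], []
    for token in query.split():
        if token.startswith("label:"):
            labels.append(token[len("label:"):])
        else:
            terms.append(token)
    return terms, labels

print(parse_label_query("cancer label:symptoms"))
# (['cancer'], ['symptoms'])
```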
The patent puts it all together like this:
“Using the annotation information and trust information from the trust database 190, the search engine 180 determines a trust factor for each document.”
A user’s trust is placed in a website. But that user-trusted website is not necessarily the one that gets ranked; it’s the website that is linking to (and thereby trusting) another relevant web page. The page that is ranked can be one that the trusted site has labeled as relevant for a specific topic, or it can be a page on the trusted site itself. The purpose of the user signals is to provide a starting point, so to speak, from which to identify trustworthy sites.
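Here is a minimal sketch of how annotation information and trust information might be combined into a per-document trust factor. The data, the label matching, and the simple summation are all assumptions; the patent does not disclose the actual formula.

```python
# Which entities annotated (labeled) a given document, and with what label.
annotations_for_doc = [
    {"entity": "examplehealthsite.com", "label": "symptoms"},
    {"entity": "randomdirectory.com", "label": "symptoms"},
]

# Per-entity trust ranks, e.g. derived from user trust signals.
entity_trust_rank = {
    "examplehealthsite.com": 1.5,
    "randomdirectory.com": 0.25,
}

def trust_factor(annotations, trust_ranks, query_label=None):
    """Sum the trust of every entity whose annotation matches the query's label."""
    score = 0.0
    for ann in annotations:
        if query_label and ann["label"] != query_label:
            continue
        score += trust_ranks.get(ann["entity"], 0.0)
    return score

print(trust_factor(annotations_for_doc, entity_trust_rank, query_label="symptoms"))  # 1.75
```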
Vertical Knowledge Sites, sites that users trust, can host the commentary of experts. The expert could be the publisher of the trusted site as well. Experts are important because links from expert sites are used as part of the ranking process.
The patent characterizes experts by the depth of content they publish on a topic:
“These and other vertical knowledge sites may also host the analysis and comments of experts or others with knowledge, expertise, or a point of view in particular fields, who again can comment on content found on the Internet.
For example, a website operated by a digital camera expert and devoted to digital cameras typically includes product reviews, guidance on how to purchase a digital camera, as well as links to camera manufacturer’s sites, new products announcements, technical articles, additional reviews, or other sources of content.
To assist the user, the expert may include comments on the linked content, such as labeling a particular technical article as “expert level,” or a particular review as “negative professional review,” or a new product announcement as ‘new 10MP digital SLR’.”
Links and annotations from user-trusted expert sites are described as sources of trust information:
“For example, Expert may create an annotation 106 including the label 107 “Professional review” for a review 114 of Canon digital SLR camera on a web site “www.digitalcameraworld.com”, a label 107 of “Jazz music” for a CD 115 on the site “www.jazzworld.com”, a label 107 of “Classic Drama” for the movie 116 “North by Northwest” listed on website “www.movierental.com”, and a label 107 of “Symptoms” for a group of pages describing the symptoms of colon cancer on a website 117 “www.yourhealth.com”.
Note that labels 107 can also include numerical values (not shown), indicating a rating or degree of significance that the entity attaches to the labeled document.
Expert’s web site 105 can also include trust information. More specifically, Expert’s web site 105 can include a trust list 109 of entities whom Expert trusts. This list may be in the form of a list of entity names, the URLs of such entities’ web pages, or by other identifying information. Expert’s web site 105 may also include a vanity list 111 listing entities who trust Expert; again this may be in the form of a list of entity names, URLs, or other identifying information.”
The patent describes additional signals that can be used to infer trust. These are more traditional signals, like links, a list of trusted web pages (perhaps something like a resources page?), and a list of sites that trust the website.
These are the inferred trust signals:
“(1) links from the user’s web page to web pages belonging to trusted entities;
(2) a trust list that identifies entities that the user trusts; or
(3) a vanity list which identifies users who trust the owner of the vanity page.”
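Here is one way those three inferred signals could be combined into a score, purely as a sketch. The signal names and weights are invented; the patent lists the signal types but says nothing about weighting.

```python
# Hypothetical weights for the three inferred-trust signal types from the patent.
SIGNAL_WEIGHTS = {
    "link_to_trusted_entity": 0.5,   # (1) links to pages of trusted entities
    "on_users_trust_list": 1.0,      # (2) entity appears on the user's trust list
    "user_on_vanity_list": 0.75,     # (3) user appears on the entity's vanity list
}

def inferred_trust_score(signals):
    """Sum the weights of whichever inferred-trust signals were observed."""
    return sum(SIGNAL_WEIGHTS.get(signal, 0.0) for signal in signals)

observed = ["link_to_trusted_entity", "on_users_trust_list"]
print(inferred_trust_score(observed))  # 1.5
```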
Another kind of trust signal that can be inferred is from identifying sites that a user tends to visit.
The patent explains:
“The system can also examine web visitation patterns of the user and can infer from the web visitation patterns which entities the user trusts. For example, the system can infer that a particular user trusts a particular entity when the user visits the entity’s web page with a certain frequency.”
That’s a pretty big signal, and I believe it suggests that promotional activities which encourage potential visitors to discover a site and then become loyal visitors can be helpful. For example, that kind of signal can be tracked with branded search queries. It could be that Google is only looking at site visit information, but I think branded queries are an equally trustworthy signal, especially when those queries are accompanied by labels… ding, ding, ding!
The patent also lists some out-there examples of inferred trust, like contact and chat list data. It doesn’t say social media, just contact/chat lists.
Another interesting feature of trust rank is that it can decay or increase over time.
The patent is straightforward about this part:
“Note that trust relationships can change. For example, the system can increase (or decrease) the strength of a trust relationship for a trusted entity. The search engine system 100 can also cause the strength of a trust relationship to decay over time if the trust relationship is not affirmed by the user, for example by visiting the entity’s web site and activating the trust button 112.”
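A decaying trust relationship is easy to picture with a simple half-life model. The exponential form and the 180-day half-life below are assumptions for illustration; the patent says only that strength can decay if the relationship is not re-affirmed.

```python
import math

def decayed_trust(strength: float, days_since_affirmed: int, half_life_days: int = 180) -> float:
    """Exponentially decay a trust relationship that hasn't been re-affirmed.
    The half-life model and 180-day default are invented for illustration."""
    return strength * math.exp(-math.log(2) * days_since_affirmed / half_life_days)

print(round(decayed_trust(1.0, days_since_affirmed=0), 3))    # 1.0
print(round(decayed_trust(1.0, days_since_affirmed=180), 3))  # 0.5
print(round(decayed_trust(1.0, days_since_affirmed=360), 3))  # 0.25
```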
Directly after that paragraph, the patent has a section about enabling users to edit their trust relationships through a user interface. There has never been such a thing, just like the non-existent trust button.
This is possibly a stand-in for something else. Could this trusted sites dashboard be Chrome browser bookmarks or sites that are followed in Discover? This is a matter for speculation.
Here’s what the patent says:
“The search engine system 100 may also expose a user interface to the trust database 190 by which the user can edit the user trust relationships, including adding or removing trust relationships with selected entities.
The trust information in the trust database 190 is also periodically updated by crawling of web sites, including sites of entities with trust information (e.g., trust lists, vanity lists); trust ranks are recomputed based on the updated trust information.”
Google’s Search Result Ranking Based On Trust patent describes a way of leveraging user-behavior signals to understand which sites are trustworthy. The system then identifies sites that are trusted by the user-trusted sites and uses that information as a ranking signal. There is no actual trust rank metric, but there are ranking signals related to what users trust. Those signals can decay or increase based on factors like whether a user still visits those sites.
The larger takeaway is that this patent is an example of how Google focuses on user signals as a ranking source, feeding them back into ranking the sites that meet users’ needs. This means that instead of doing things because “this is what Google likes,” it’s better to go deeper and do things because users like them. That will feed back to Google through the kinds of algorithms that measure user behavior patterns, something we all know Google uses.
Featured Image by Shutterstock/samsulalam